text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
A Framework for Improving Business Intelligence through Master Data Management
For reporting and for analytics purposes, financial banks highly demand real-time delivery of quality operational information. This means working with high volumes of data regularly, that would need to be managed to produce the best quality of data for analytics and reporting purposes. Through the empirical study, it was found that banks are still faced with challenges of data management, maintaining and creating quality data. Business Intelligence (BI) entails timely analysis and decision Abstract
Introduction
For reporting and for analytics purposes, financial banks highly demand real-time delivery of quality operational information.This means working with high volumes of data regularly, that would need to be managed to produce the best quality of data for analytics and reporting purposes.Through the empirical study, it was found that banks are still faced with challenges of data management, maintaining and creating quality data.Business Intelligence (BI) entails timely analysis and decision
Abstract
For reporting and for analytics purposes, financial banks highly demand real-time delivery of quality operational information.This paper argues that for banks to improve their decision making and performance management, business intelligence and data management ought to be core activities.Conversely, Business Intelligence (BI) and Master Data Management (MDM) are activities with most challenges as financial banks grapple with ways to make profitable informed decisions that align with the projected business targets and goals.In this paper, the argument is that a profound understanding of master data management, as an activity, is important to the delivery of real-time quality information.Subsequently, business intelligence should improve.To this point, this paper explores and describes how an understanding of MDM activities may lead to improved business intelligence in a financial bank.In this paper, MDM and BI are seen as activity systems.However, this paper focuses on the MDM activity and not the BI activity.Activity theory is used as a lens to analyze the activity system.The paper concludes by conceptualizing a framework for improving business intelligence through a sound master data management.______________________________________________________________________________________________________________ ______________ Ray M. Kekwaletswe and Tshegofatso Lesole (2016), Journal of South African Business Research, DOI: 10.5171/2016.473749 making with relevant, accurate, adequate, correct and complete data.Business Intelligence is a discipline that draws business decision making through performed analytics and reporting based on the data an organization has.Wu et al.,(2007) define BI as a business management term used to describe applications and technologies that are used to gather access to, and analyse data and information about the organization to help management to make business decisions.Ghazanfari et al., (2011) add on to say that the theme of BI has two 'divisions' from technical, system-enabler and managerial viewpoints -tracing two broad patterns, managerial and technical approaches to BI. Thomsen (2003) posits that BI as a term replaces decision support, executive information systems and management information systems.On the other hand, Nelson and Phillips (2003) believe that Business Intelligence (BI) communities bring clarity and reasoning to unreadable data to empower good decision making.This is through data that have been rationalized, centralized and mastered for reporting on the business (Rudin and Cressy, 2003); Langseth and Vivatrat (2003) say that there are essential components that fulfill BI namely: (1) Realtime data warehousing, (2) Data mining, (3) Automatic learning and refinement, and (4) Data visualization.Decision making is aided by data; therefore, it is valuable for organizations to manage their data to gain desired results when performing business analytics and reporting.Data are of high quality if are fit for their intended use in operations, decision-making, and planning (Juran, 1964).Data Management as defined by Mosley (2007), is the development, execution and supervision of plans, policies, programs and practices that control, protect, deliver and enhance the value of data and information assets.The effectiveness of BI lies in the ability to present business information in a timely manner and this is dependent on the data availability (Clack et al., 2007).Data Management helps in controlling and coordinating the usage of relevant and reliable data.Swanton et al.,(2007) suggest that Business Intelligence can take advantage of data management and further more Master Data Management (MDM) to improve the latter's practice.They go on to say the ultimate goal of MDM strategies is to harmonize data to facilitate analytics for business intelligence.They are of the opinion that the analytics that MDM could provide include (1) Measuring data quality and improving on metrics reporting throughout all levels of business, and (2) Creating a sift that will assist in identifying problems and getting them resolved.The next section discusses master data management and approaches.
Master Data Management
Master data management is the technology, tools, and processes required to create and maintain consistent and accurate lists of master data (Haselden and Wolter, 2006).Some of the processes in MDM include source identification, data collection, data transformation, normalization, rule administration, error detection and correction, data consolidation, data storage, data distribution, and data governance.The tools include usage of data networks, file systems, a data warehouse, data marts, an operational data store, data mining, data analysis, data virtualization, data federation and data visualization.Master Data Management is also seen as a collection of the best data management practices (Loshin, 2009).It should be a system of business processes and technology components that ensures information about business objects, such as materials, products, employees, customers, suppliers, and assets (Swanton et al., 2007).These approaches have different actions that can happen in them.Figure 1 shows the different practices that pertain in each MDM approach .To have a more solid MDM strategy that would assist improve BI, two (2) MDM approaches should be practiced simultaneously.The Business Approach would look at a plan of action on MDM activities and processes with the analogy of the organization and what it needs.It will focus on the Business Objectives, Business Process Policies, Business Rules, Change Management and Resource (people) Roles and Responsibilities in ensuring and controlling a set MDM strategy.The Technical Approach would look at the technologies, tools and technical rules and guidelines that will ensure a single truth of the data 'mastered' and an efficient MDM.The advantage of using the two approaches simultaneously is the commonality that they both promote data governance and data quality, which is a mandatory outcome for a good MDM strategy.Combining the two approaches will tackle the people, processes and technology aspect of MDM.
This paper looks at BI as an activity that could be improved through a thorough understanding of MDM.The rest of the paper is organized as follows: first the background to the research problem is discussed, followed by the theoretical framework and research methods.Thirdly, the case study findings are discussed.This is followed by conceptualizing a framework for improved business intelligence through master data management.Lastly, the paper is concluded.
Context and study Location
The study was conducted at a financial bank whose main business is to allocate funds from investors/savers to borrowers efficiently.The bank provides personal, commercial and corporate banking services to more than 6 million customers across South Africa.Remaining on the cutting edge of banking by offering customers' innovative services is one of the bank's missions.Currently, with the exponential growth of data and innovative services provided to customers in the banking environment, there is a high demand in the for real time delivery of quality operational information for reporting, analytics and to enhance customer service.
Challenges and issues experienced by the bank
A preliminary observation showed that the bank is faced with challenges of managing data.Banks are posed with the challenge to gain insight on how to make business agile, through data usage.The demand to use ______________________________________________________________________________________________________________ ______________ Ray M. Kekwaletswe and Tshegofatso Lesole (2016), Journal of South African Business Research, DOI: 10.5171/2016.473749 data to deliver accurate information in real time for Business intelligence is also as crucial but challenging.This demand is increasing exponentially.South African banks generally have the fastest growing use and demand for quality data; however, they are often faced with data management challenges that tend to hinder business intelligence.The commonly experienced issues are data inaccessibility, data duplication, poor data quality, data inaccuracy, data incompleteness and data irrelevance are being experienced, particularly in business decision making and reporting.
The challenges experienced at the bank included inadequate data governance that causes redundant data storage, inconsistent data due to duplicate/multiple data storage in the organization and incorrect data sourcing for reporting which is normally due to the multiple data storage locations in the organization; secondly, the business needs and the chosen MDM methodologies and technologies used are not in sync.
Inappropriate data methodologies and technologies to assist with data analytics are often used.
The next section discusses the underpinning theory and the research methods used to address the research problem.
Theoretical Framework And Research Methods
This section discusses the theory underpinning the study and the research approaches undertaken for the study.It discusses Activity Theory, research paradigm, strategy, design, as well as the data collection techniques.
Theoretical Framework
This paper argues that Master Data Management as an activity may improve business intelligence in banks; therefore, Activity Theory (AT) was seen as an appropriate theoretical framework to underpin the study.Morf and Weber (2000) posit that AT is a conceptual framework based on the idea that activity is primary, that doing precedes thinking and that goals, images, cognitive models, intentions and abstract notions like 'definition' and 'determinant' grow out of people doing things'.Activity Theory has four elements which helped guide the study: tools in use, subject of study, objective of study and the outcome.Informed by Engestrom (2001) Empirical data were collected around these activities; which formed the objectives of the study as themes.These themes were: Identifying data management activities and processes within the case study; Analysis of the current MDM practices and actions within the studied bank; Determining an efficient MDM approach that would best suit the bank's current environment; Analysis of BI activities in the case study.
Research Approach and Paradigm
In research, there are two (2) types of research approaches that could be used, Qualitative and Quantitative.Shank (2002) defines qualitative research as "a form of systematic empirical inquiry into meaning".Ospina (2004) adds on to say, by systematic it means "planned, ordered and public", following rules agreed upon by members of the qualitative research community.
Maxwell (2013) says, qualitative is research that is intended to help one better understand the following: the meanings, perspectives of people you study -seeing the world from another point of view not the researcher's point of view and how these perspectives are shaped by -their physical, social and cultural context and the specific processes that are involved in maintaining the phenomena and relationships.Anderson (2006) summarises the difference between qualitative and quantitative research by simply saying that qualitative research is subjective and the other is objective.
A qualitative approach was chosen for the study.As highlighted by Conger (1998), Brynam et al (1988) This means qualitative approach allowed us to dive deeper into the case to understand the meaning and significance of MDM and BI.It also gave us the ability to follow unexpected leads during the study, explore currently carried out processes effectively, and to study representative elements of the case and its social meaning.This study approach also ties well with the interpretive paradigm.Obrien (1998) explains interpretive paradigm as an emphasis on the relationship between socially-engendered concept formation and language.
Interpretive paradigm is underpinned by observation and interpretation, thus to observe is to collect information about events, while to interpret is to make meaning of that information by drawing inferences or by judging the match between the information and some abstract pattern (Aikenhead, 1997).Interpretive paradigm was found appropriate to follow since the intention was to understand the subjective meanings experienced by the participants of the case study.
Research Location and Strategy
This was a case study of one of the leading banks in South Africa.Yin (2003) defines a Case Study Research Method as an "empirical inquiry that investigates a contemporary phenomenon within its reallife context.The studied bank possesses large volumes of data with a complex BI environment; this study also helped draw views of a banking industry in South Africa and provided a more realistic approach to understanding issues and problems with respect to MDM and BI activities.A case study was seen appropriate as it was seen to be a more interactive research approach.It allowed the researcher to spend time at the study location and interact with participants to understand and observe how business intelligence and data management is conducted in the banking industry.______________________________________________________________________________________________________________ ______________ Ray M. Kekwaletswe and Tshegofatso Lesole (2016), Journal of South African Business Research, DOI: 10.5171/2016.473749
Participants
The study participants were drawn from the following units: (1) The Data Warehouse Team -which provides data to the bank's reporting team and should ideally hold the master data, (2) The Business Intelligence Team -these are the report generators/creators, (3) The Analytics Team from the varied business units (core banking services) -this is a group of individuals whose mandate is to analyse how the bank is financially and operationally performing and give suggestive activities to improve business, (4) The Production Team -manages end of day data feeds into the warehouse or warehouses, (5) The Branch Consultantsthey capture data on to the bank's systems at branch level.
Data Collection Techniques
In collecting empirical evidence for the study, the following were the data collection techniques used: Interviews: For this study, semi-structured face to face interviews were individually conducted with 14 participants whose average number of year that the respondents held at their current role and within the bank was seven (7) years that work and were affected by Data Management and Business Intelligence in the bank.
Observation: A naturalistic observation was done in terms of thick description of MDM and BI activities, e.g., how data are captured, stored, managed and used for reporting and analytics, including who is acting in which role, when and where, etc.There was engagement with limited interaction and intervention with research participants and events; interventions only occurred when clarity of activities and actions was needed.
Discussion and Interpretation of Findings
The discussion and interpretation of findings is done in this section, per the three themes: (1) Data Management Activities and processes within the banking environment; (2) current MDM practices and actions in a banking environment; and (3) Business Intelligence activities in a banking environment.The three activities are discussed below:
Analysis of Data Management Activities and processes within the banking environment
Data Management is an activity performed by a community of employees in difference teams within the bank.Kuutii (1991) states that an activity contains a number of actions, with the same actions featured in different activities to show their significance.Figure 2 In a data management activity, a branch consultant performing 'Customer Profile Opening' action could simultaneously be satisfying data quality, business process rules, data standardization and conforming to the needed data structures.The different identified functions of data management, figure 3, shows the actions in the activity and how they are interlinked and coordinated.The data management actions include: -(1) data entry, (2) follow data policies developed for standardization, (3) through given data security measures granted to them perform this role, (4) data quality measures that are put on the system restrict how the data are captureduser authentications and log in credentials created for all given access to those that use the data, (5) at the end of creating a customer profile and account, the data are stored onto their respective repository.Some of the actions that could be performed at the back-end of the system include data quality assurance, which could be initiated by different set rules on the batches and on the data storages (tools), data security measures (rules and data policy measures) that are set by the data stewards and owners to assist in sharing and accessing data in the data storages, data sharing, data ownership, and data standardisation, including how they are stored in the different data storages and accesses.
Figure 3 shows the actions that are performed at the back-end of the system.These include data quality assurance that could be initiated by different set rules on the batches and on the data storages (tools); data security measures (rules and data policy measures) that are set by the data stewards and owners to assist in sharing and accessing data in the data storages; data sharing, data ownership, data standardisation and the different data storages and data accesses.The data management activity included data entry, data storing, data sharing, data security and data quality assurance actions.All the participants emphasized the importance of data quality in the bank, particularly for decision making and reporting; they also mentioned that data quality is compromised in many ways than one.It was highlighted in this theme that data quality assurance needs to be continuously monitored and improved in order to get the best quality data constantly -which currently is not the case.Most data storage business units do not conform to the same rules in regulating the data they store and/or maintain.
Analysis of the current MDM practices and actions in a banking environment
Master data management is a collection of the best data management practices (Loshin, 2009.The practice includes the people that perform the data action, the tools used to perform the action, the policies and rules defined to govern the action, the infrastructure used to sustain the activity and its actions e.g. the business applications (data entry points) and integration and sharing the use of accurate, timely, consistent and complete master data.
There are multiple identified data storages that keep different but associated data in the bank; (1) Integrated Deposit Storagepart of the book keeping system that stores all customer and deposit account systems and feeds the deposit and demand deposit front-end systems; (2) Integrated Loan Storage -also part of the book keeping system that stores customer and loan (short & long term loans) accounts data and feeds the Integrated loan processing and online collection front-end systems, (3) Card Storage -also part of the book keeping system that stores credit card account data, it feeds the front-end card system; (4) Relationship Profitability Storage -this data storage holds all customer static information and feeds the customer information, Relationship and Product Management front-end systems; and (5) Online Delivery System -this data storage holds all data that are captured on external data entry points, which feeds front-end systems like ATM, Switch Transactions, Branch Automations and Online Applications.
All the preceding data storages are, in their rights, treated as master databases.Each business unit takes its data storage as the master database.This is in contradiction to what Karel (2011) sees MDM to be; he posits MDM to be the business capability charged with finally delivering that elusive single trusted view of critical enterprise data.Loshin (2009) also says master data can also be referred to using these terms like "critical business objects", "business entities", and "business concepts"; this essentially referring to common data themes.The bank needs to have a single view of their "business concepts"customer, account, inventory and so onnot the current multiple views of account data.
The studied bank currently follows only one MDM approach -the technical MDMand this proves to be a challenge for the bank.As business objectives and goals change, the technology does not necessarily change.And if it does, there is no formal communication downstream to the technical teams about these changes.MDM should be a system of business processes and technology components that should ensure information about business objects, such as materials, products, employees, customers, suppliers, and assets (Swanton et al., 2007).The two MDM approaches have different actions that can happen in them.Business/Managerial MDM Approach focuses on the processes required to create and maintain consistent and accurate lists of master data (Haselden and Wolter, 2006), and this is the MDM approach that the studied bank did not implement.The Technical MDM Approach focuses on the tools required to create and maintain consistent and accurate lists of master data (ibid., 2006).
With the technical MDM approach, the middle management and general staff (these could be the ETL Developer, data analysts) are forced to constantly analyze the data that are stored in the different ______________________________________________________________________________________________________________ ______________ Ray M. Kekwaletswe and Tshegofatso Lesole (2016), Journal of South African Business Research, DOI: 10.5171/2016.473749 data storages manually.The data are checked for correctness and validity -also known as data quality assessment and assurance.
The employees would determine "like" data in the data storages and integrate these data onto a central point.However, without a buy-in or communication from top managementthose that know and understand organizational objectives-these desires are as good as nothing and may lead to incorrect capturing and storage of data.A combination of the business and technical MDM approaches could work for the studied bank to help control the data management activities performed.
Analysis of Business Intelligence activities in a banking environment
To recap, this paper aimed to conceptualize a framework for an improved Business Intelligence.This means articulating ways to improve in delivery of quality operational information for reporting and analytics (outcome of activity).BI actions that are carried out at the studied bank include: (1) Analyzing data sources and integration-drawing adequate and relevant data from the multiple data sources timelessly is very difficult and time consuming, for the BI developers and data analysts; (2) Data profiling and qualitythese actions are informed by assessing data and making sure it is accurate for what it is needed for.Analysis of the candidate data sources for a data warehouse clarifies the structure, content, relationships and derivation rules of the data (Loshin, 2009).In the bank, data profiling and quality is only done when the BI team has sourced the data and the data is prepared reporting instead of at the initiation phase of when the data is captured by bank employees/ consultatants..
A Framework For Improved Bi Through Mdm
This paper conceptualises five actions for the master data management activity.business objectives, business process policies and business rules, to the rules that are built to capture and store data.In understanding the results hoped for by the organization, the business objectives could be the guidelines that become the foundation of the Master Data Management activities and practices.
Action 2: Data Governance -the activities to govern and standardize data into the different data entry systems and data storages within the bank that could be followed are: (a) Data Quality -There should be measures on the organization's front-end systems, data storages and data transformation rules that are aligned to the business needs and rules;(b) Data Stewardship -Build a community of data stewards from different subject areas of the organization -EXCO, IT and general employees at branch.These data stewards will be responsible looking at data integrity issues and measures within the entire bank.They need to look at data needs at a strategic level (business level) and what rules or ways would enable the bank to get these data; (c) Data Profiling -all the organization's current data sources must be examined.
The quality of data values within these data sources should be evaluated by comparing it to the desired data profiles.A uniform way of storing data in the organization especially where the data entry systems are the same should be introduced.This could be done through data structure definition of all the data sources; on extraction of data from the different data sources to "master" data sources to perform a data quality assessment of data in each data source based on the set rules that are also in line with business rules (to retrieved "expected" data).This is to ensure the "master" data source only receives the same kind of data content and format; and (e) Metadata Managementhave meta-models define the structures of metadata.Centralization of metadata encourages the reuse of data and avoids data user confusion of usage of data.
Step 3: Data Integration -The entire organization's data should be integrated and stored at a central point within the bank; this is by sharing and merging common and key business function data from existing repositories.
Step 4: Availing data for Use (analytics and reporting) -This is the security measures and intelligent way of availing high quality and single view of data to the entire bank's community for both internal and external use.
Step 5: Continuous Improvement Process of Step 1-4-To ensure continuous return on investment on master data management and improved Business intelligence, it is important to continuously evaluate the above given steps and enhance them when gaps are identified.The business objectives, needs and rules should constantly be reviewed to ensure that the 'master' data in place are still relevantly matched to them.
The levels which an efficient MDM activity system should consist of are shown in figure 4. The varied actions need to be continuously updated and followed in order to improve business intelligence in a banking environment.
Figure 1 :
Figure 1: MDM approaches and actions (Lesole and Kekwaletswe, 2014; Haselden and Wolter, 2006) shows the different actions to creating a new profile and a new account.It shows the data managment process and procedures involved in the activity.Journal of South African Business Research ______________________________________________________________________________________________________________ ______________ Ray M. Kekwaletswe and Tshegofatso Lesole (2016), Journal of South African Business Research, DOI: 10.5171/2016.473749
Figure 2 :
Figure 2: New Profile and Account creation activity -Front end data management actions
Figure 3 :
Figure 3: Activity to create New Profile and Account -Back end data management actions (Source: Lesole and Kekwaletswe, 2014) data can help promote and improve data source standardization.Data profiling and standardization rules should be in sync and should be enforced mainly at all data entry systems.This could be done by building a standardization job (function) ______________________________________________________________________________________________________________ ______________ Ray M. Kekwaletswe and Tshegofatso Lesole (2016), Journal of South African Business Research, DOI: 10.5171/2016.473749
Figure 4 :
Figure 4: A Framework for Improving Business Intelligence through Master Data Management (Adapted from Lesole and Kekwaletswe, 2014)The framework, conceptualised as figure4, shows the convergence of variedactions (MDM activity) which, when performed correctly, may lead to improved business intelligence.It is inferred that the data challenges and issues observed during the | 5,993.4 | 2016-04-18T00:00:00.000 | [
"Business",
"Computer Science"
] |
GlycoPOST realizes FAIR principles for glycomics mass spectrometry data
Abstract For the reproducibility and sustainability of scientific research, FAIRness (Findable, Accessible, Interoperable and Re-usable), with respect to the release of raw data obtained by researchers, is one of the most important principles underpinning the future of open science. In genomics and transcriptomics, the sharing of raw data from next-generation sequencers is made possible through public repositories. In addition, in proteomics, the deposition of raw data from mass spectrometry (MS) experiments into repositories is becoming standardized. However, a standard repository for such MS data had not yet been established in glycomics. With the increasing number of glycomics MS data, therefore, we have developed GlycoPOST (https://glycopost.glycosmos.org/), a repository for raw MS data generated from glycomics experiments. In just the first year since the release of GlycoPOST, 73 projects have already been registered by researchers around the world, and the number of registered projects is continuously growing, making a significant contribution to the future FAIRness of the glycomics field. GlycoPOST is a free resource to the community and accepts (and will continue to accept in the future) raw data regardless of vendor-specific formats.
INTRODUCTION
For reproducibility and sustainability of scientific research, the public sharing of raw data obtained by researchers is of paramount significance (1). The FAIRness (Findable, Accessible, Interoperable and Re-usable) of datasets is the most important principle that supports open science in the future (1)(2)(3). For genomics, the sharing of raw data from next generation sequencers (NGS) is implemented through public repositories (4). In addition, registration of gene expression data such as RNAseq into data repositories is becoming standardized (5,6). Furthermore, mass spectrometry (MS) has become the method of choice for the qualitative and quantitative characterization of complex protein and glycan mixtures (7)(8)(9)(10)(11), and thus a need for a repository for sharing such data has been recognized. For data in the field of proteomics, qualitative and quantitative mass spectrometry-based analyses are performed and reported. These studies may characterize relatively simple systems, such as protein complexes or much more complex mixtures, such as cell organelles, full cell lysates or different organs. Thus standards such as for data processing, common data formats and issuance of common accession numbers for submitting raw data to repositories are being promoted under the global activity called the ProteomeXchange (PX) consortium (12). In the proteome field, there are several repositories approved by this PX Consortium all over the world (13)(14)(15)(16), and each of them operates its own repository according to their respective region and specific data format. As a result, all proteome MS data are accessible from the ProteomeCentral portal site managed by the PX Consortium, where over 20,000 projects are currently registered (17).
Proteomics analysis may also include the characterization of post-translational modifications (PTMs) including glycosylation. However, in most cases such PTMs are simply added in the annotations as text. With the recent development of a glycan structure repository GlyTouCan (18), this information should also be linked with glycan and glycomics data. Therefore, in the glycomics field, the Minimum Information Required for A Glycomics Experiment (MI-RAGE) initiative began with the recommendation of minimum information required to be reported when publicizing glycomics experiments (19). MIRAGE standards for 'minimum information required for a glycomics experiment' and proposes guidelines for many of the experimental techniques used when working with glycans. The first of these guidelines was for MS experiments, where the minimum information needed to be reported was delineated including the type of instrument used, its parameters, peak lists with characterized structures and raw data. This guideline helped standardize the metadata required for the registration of MS data in a repository. UniCarb-DR was recently announced as a repository for characterized glycans by MS, storing peak list information, and GlycoPOST was mentioned as the raw data repository (20). In this manuscript, we describe the details on the usage of GlycoPOST.
We believe that there is an urgent need for an official repository for glycomics MS raw data, so we have developed and since operated a repository called GlycoPOST. GlycoPOST has been made available for over a year so far, and through our efforts to approach the glycomics community, 50 users have registered, >70 projects have been created, and over 2000 files have now been deposited in Gly-coPOST, totalling 700 GB of data. As the use of MS in the glycoscience field is expected to grow further in the future, the number of projects registered with GlycoPOST is very likely to increase.
DATABASE DESCRIPTION
GlycoPOST accepts MS data from glycomics experiments and issues an accession number to provide traceability for reuse and reanalysis of the data. This system is an adaptation of the jPOST repository system (14), which has already proven to be a stable MS data repository for proteomics. Basically, the technology implemented in the jPOST repository has been implemented in GlycoPOST as well, and it inherits the usability of the jPOST repository. In addition, the GlycoPOST system has been designed to make it easy to input various metadata such as experimental conditions and instrument settings specific to glycomics. Metadata such as experimental conditions are set to comply with the MIRAGE guidelines, and thus we can claim that Gly-coPOST contributes to standardization in the glycomics field ( Figure 1). As illustrated in this figure, GlycoPOST is a part of the GlyCosmos portal (21), which also includes UniCarb-DR and GlyTouCan (18) as fellow repository systems. GlyTouCan is the international glycan structure repository, and it assigns accession numbers to individual glycans. UniCarb-DR is a repository of peak lists, and the raw data is registered in GlycoPOST. Due to this relationship between UniCarb-DR and GlycoPOST, we have implemented a combined user registration system that handles user information for both repositories.
MIRAGE guidelines
MIRAGE (Minimum Information Required for A Glycomics Experiment) is a set of guidelines established by the MIRAGE committee to specify the minimum information required for reporting glycan-related experiments, such as sample preparation (doi: 10 GlycoPOST has adopted the guidelines for the portion of MIRAGE that is relevant to mass spectrometry experiments for glycomics. To make it easier for users to enter and manage metadata, the input section for metadata has been divided into the following five sections, 'Sample preparation', 'General features', 'Ion sources', 'Ion transfer optics', and 'Spectrum and peak list generation and annotation', each of which can be registered in GlycoPOST as a reusable 'preset'. As long as the experimental conditions are the same, the user can use a previously created preset as is, or they can change some parts of it and create another preset. Note that the content of each of the following presets are all based on the current version of the MIRAGE guidelines and are subject to change based on any updates to these guidelines. The latest version of the details of this information is available at https://glycopost.glycosmos.org/ help#mirage.
Preset 1: Sample preparation
The sample preparation section is designed to include all aspects of sample generation, purification and modifications of the biological and/or synthetic material analyzed. Users input biologically derived material and/or chemically derived material as sample origin, and enzymatic and/or chemical treatments as sample processing for isolation. In addition, enzymatic and/or chemical modifications, and purification steps are needed to be registered.
Preset 2: General features
In this preset, global descriptions such as the used instrumentation, any particular customizations, and general instrument control parameters such as instrument control software. This includes the software name and version information.
Preset 3: Ion sources
This preset is used for summarizing all the parameters for ion generation including controls of in-source fragmentation, the degree of fragmentation, as well as other more common parameters such as capillary voltage or laser intensity settings.
Preset 4: Ion transfer optics
This preset requires instrumental details related to the processes after ions are generated such as transport, gas phase reactions and detection of ions.
Preset 5: Spectrum and peak list generation and annotation
The software used to generate peak list files from mass spectrometry raw data files and software and/or databases used to annotate each spectrum are needed to be input. This category is optional because it is often not possible to obtain this data.
In addition, UniCarb-DR (https://unicarb-dr.glycosmos. org/) provides a web tool that allows users to enter MIRAGE-related information for their experiments, which Nucleic Acids Research, 2021, Vol. 49, Database issue D1525 Figure 1. Schematic representation of the GlycoPOST environment. GlycoPOST has been developed under the GlyCosmos project, and the GlycoPOST system was adapted from the repository system of the jPOST project. The metadata to be registered follows the MIRAGE guidelines and the Excel format for the metadata input used by UniCarb-DR is also importable.
produces an Excel file formatted in a specific format with the required information (20). GlycoPOST has a function to import the data from this Excel spreadsheet and automatically create presets. Conversely, users can also export the preset data from GlycoPOST and download an Excel spreadsheet in the same format.
Thus, we made efforts to ensure compatibility with other glycomics data repositories. Furthermore, by adopting the MIRAGE guidelines, we can ensure the quality of the metadata registered in GlycoPOST and that it is compatible with databases in related fields.
Project creation and file upload
In general, users would register their data as a single project, which receives a unique accession number. Each project can contain one or more raw data and must be linked with metadata as defined by the presets described previously. Each project is required to be linked with at least Presets 1-4 for describing samples, experiments, and instruments, and once registered, and one accession number will be issued. After a project is generated, any metadata for sample preparation, general features, ion sources, ion transfer, spectrum and peak list registered as a preset will be linked to the MS raw data files to be registered ( Figure 2). The same metadata information can be linked to all files at once by dragand-drop of the raw data files into the browser with presets selected beforehand, greatly reducing the registration operation for users with many files to register.
After linking the metadata profiles as presets with the deposited data files, the user can upload the files to the repository. GlycoPOST utilizes the PRESTO system (https:// prestotools.github.io/) for uploading data. The upload process of this system splits the file into smaller pieces, called 'chunks', which are then uploaded to the repository in par-allel. In the process of data transmission over the Internet, it is known that the longer the distance of the data communication route, the greater the delay (the delay before the data actually starts to be transferred). This often results in very slow data transfer rates between physically distant locations. This delay problem can be remedied by uploading small chunks in parallel. This data transfer system is already implemented in the jPOST repository, and it has shown to have a positive correlation between file size and transfer time, with an average transfer rate of about 5MB/s, which is fast enough to take only about four minutes to upload a 1GB file (14). Thus, it has been found that the file transfer speed is, in most cases, independent of the distance from where the user deposits the data in this system.
Data publication and download
When the deposited dataset is determined to be valid against the MIRAGE guideline criteria, the users can lock and exit the submission process, at which time a Glyco-POST identifier is generated as an accession number. The submitted data and metadata are automatically checked by our system and if the dataset is determined to be incomplete, an accession number will not be assigned and the dataset cannot be announced. At this stage of a 'project' submission, datasets submitted to the repository are in 'embargo', meaning it is set as private, and it will be automatically published on a 'publication date' set by the users themselves. During this embargo period, users will be issued a dedicated URL and password that will allow anyone with this information, such as journal editors and peer reviewers, to access the project. Users can also revise a temporarily locked project in response to reviewers' comments and revised data will be assigned a revision number. Relationships between data files and metadata. Presets that follow the MIRAGE guidelines are registered in GlycoPOST, and each preset is linked to raw data files obtained from mass spectrometry. In addition to the raw data, a project is created that contains the peak list and result files, and an accession number is issued for that project.
Datasets of published projects can be downloaded without restrictions. Users can also search for keywords found in any of the fields registered under presets or projects.
System implementation
The web application for GlycoPOST was built using the React framework (https://reactjs.org/), and the proprietary PRESTO system is used for file uploads. This eliminates the need for FTP and external software for file uploads, allowing the entire process from project creation to file uploads to be completed within a single web browser, contributing to an improved user experience.
DISCUSSION
The alpha version of GlycoPOST was launched in December 2018, with the beta version released in April 2019, and its official release in March 2020. During this time, it has been used by many users, with over 70 projects registered, of which >20 are in the public domain. Over half of the registered projects are based on ESI-MS/MS analysis, but others include ESI-MS, MALDI-MS and MALDI-MS/MS, which are the instrumentation listed by the MI-RAGE guidelines. However, other technologies can be selected under 'Not specified' for the time being. As more data using these other technologies are deposited, they will be added to the predefined list. The numbers of datasets deposited using positive and negative mode were about half and half. Regarding glycan labeling and derivatization, currently there is no controlled vocabulary, so users have entered free text to describe this under the sample processing section. Those who used glycan labeling will have specified this information, but unlabeled glycans would not be mentioned. All the required metadata have been specified by all users since the official release of GlycoPOST in April, 2020. The number of accesses and downloads have increased in general, as shown in Figure 3. Although the server itself is located in Japan, it has attracted a lot of attention as it is used by researchers worldwide not only Asia but also North America and Europe. We assume that this is due in large part to the need that it fulfills and its usability.
All metadata submitted to this repository, especially the experimental procedures described in the current five presets, are not currently represented as any ontology or controlled vocabulary. This is due to the fact that although the MIRAGE guidelines exist, no repository could fully ac-commodate the guidelines until now. Because many other glycomics-related databases already use ontologies to represent glycan-related information (18,21), by increasing the use of ontologies in GlycoPOST in the future and expressing them in a unified framework such as the Resource Description Framework (RDF) data model, it should become possible to integrate the data in GlycoPOST with other glycan-related databases (2,3,22). Moreover, MIRAGE has yet to publish a glycoproteomics guideline, but there are plans to make one available soon in collaboration with the HUPO Proteomics Standards Initiative (PSI) (23). As soon as these guidelines are complete and announced, we plan on implementing functionality to accept the relevant metadata in GlycoPOST to be able to accept glycoproteomics data. This will prepare us to apply to the ProteomeXchange as a fellow member. The MIRAGE guidelines will delineate the metadata for glycan structure information, which will be linked with GlyTouCan and other related glycan resources; this is currently lacking in proteomics repositories.
Currently, UniCarb-DR and GlycoPOST are independent systems except for user information. In the near future, the data between these repositories will be shared, so that the raw data registered in GlycoPOST can be mapped to the peak lists and glycans registered in UniCarb-DR, and viceversa. Moreover, these glycan data will be linked to GlyTou-Can accession numbers. As a result, the raw data in Glyco-POST can be visualized with the spectra of the glycan fragments registered in UniCarb-DR. Moreover, this workflow can be made more seamless by having users first register the raw data, peak lists and glycan data in GlycoPOST, which then automatically registers the glycan data into UniCarb-DR to take advantage of the latter's connection with Gly-TouCan. Then, by integrating all this information under a common framework, glycans can be searched throughout UniCarb-DR and GlycoPOST in the future. This will make re-analysis of the glycomics MS data in GlycoPOST easier, by allowing users to search for raw data containing a particular glycan.
Making data findable, accessible, interoperable and reusable will not only enhance its value as a public asset, but also contribute to the many studies that will help us create new value. In the field of glycomics, this concept of FAIRness is an important common philosophy that needs to be realized in the same way as in genomics and proteomics. This will allow for re-analysis of the data as detection algorithms and technologies improve, especially considering that de novo analysis is currently extremely difficult. Moreover, by working with journals to require the submission of raw data to a repository, the data will be more accessible for other users, where metadata will aid in searching for the most appropriate datasets and the guidelines ensure that the data is accessible in a standard format. We believe that Gly-coPOST will be a major contributor to these public roles. | 4,083.8 | 2020-11-11T00:00:00.000 | [
"Biology",
"Computer Science",
"Chemistry",
"Materials Science"
] |
Construction and optimization of inventory management system via cloud-edge collaborative computing in supply chain environment in the Internet of Things era
The present work aims to strengthen the core competitiveness of industrial enterprises in the supply chain environment, and enhance the efficiency of inventory management and the utilization rate of inventory resources. First, an analysis is performed on the supply and demand relationship between suppliers and manufacturers in the supply chain environment and the production mode of intelligent plant based on cloud manufacturing. It is found that the efficient management of spare parts inventory can effectively reduce costs and improve service levels. On this basis, different prediction methods are proposed for different data types of spare parts demand, which are all verified. Finally, the inventory management system based on cloud-edge collaborative computing is constructed, and the genetic algorithm is selected as a comparison to validate the performance of the system reported here. The experimental results indicate that prediction method based on weighted summation of eigenvalues and fitting proposed here has the smallest error and the best fitting effect in the demand prediction of machine spare parts, and the minimum error after fitting is only 2.2%. Besides, the spare parts demand prediction method can well complete the prediction in the face of three different types of time series of spare parts demand data, and the relative error of prediction is maintained at about 10%. This prediction system can meet the basic requirements of spare parts demand prediction and achieve higher prediction accuracy than the periodic prediction method. Moreover, the inventory management system based on cloudedge collaborative computing has shorter processing time, higher efficiency, better stability, and better overall performance than genetic algorithm. The research results provide reference and ideas for the application of edge computing in inventory management, which have certain reference significance and application value.
Introduction
With the rapid development of science and technology, the combination of manufacturing processes, industrial IoT (Internet of Things), advanced computing, and other technologies has become increasingly close. Meanwhile, the manufacturing mode has changed from a product-centric mode to a user-centric mode [1,2]. Due to the complexity of business processes in large manufacturing plants, it is necessary to coordinate the relationship between people, information system, and physical system, which causes the traditional imbalance between resource allocation and task planning [3]. Moreover, concepts such as the intelligent plant, intelligent transportation and smart city have emerged as AI (artificial intelligence) and computer technology develop fast. Lv et al. (2018) designed a new government service platform by using 3D (three-dimensional) geographic information system and cloud computing to effectively manage and use urban data. In addition, they achieved the 3D analysis and visualization of urban information through the smart city platform, which made the life of the masses more convenient [4]. This proves that the application of computer and AI technology has become a hot research topic. The development of China's industry in the next decade will shift from labor-intensive production to technology-intensive production, which will bring great progress in advanced technology. Correspondingly, domestic enterprises have begun to explore the transformation approach to adapt to market changes and meet government needs. The fast-growing IoT applications can produce enormous amounts of data at the network edge, effectively promoting the generation and development of edge computing. Edge computing is one of the crucial technologies to realize intelligent industry. In large manufacturing workshops, sensors, instruments, and intelligent devices can collect mass of machine data [5]. These kinds of data are the main sources of industrial big data. Moreover, it is difficult to effectively master and forecast market demands. To reduce the dependence on the accuracy of market demand forecasting and improve the efficiency of supply chain inventory management, it is necessary to improve inventory management efficiency to adapt to the changes in market demand. Besides, it is essential to use management methods to compensate for many negative impacts of market uncertainty [6]. In this case, the upstream and downstream enterprises of the supply chain must create a constant speed supply chain based on the network platform to reduce the inventory cost of the supply chain and meet the needs of customers in real time. Industrial big data is considered as a necessary means to further enlarge product profit margin. At present, industrial data platform is the paramount component of data storage, calculation and analysis for intelligent factories. With the increase in smart devices in smart factories, a large number of data such as RFID (radio frequency identification) is obtained, providing a rich data set for the manufacturing industry. As IoT applications develop rapidly, mass of data is generated at the edge of the network, effectively facilitating the emergence and development of edge computing. Consequently, in large manufacturing workshops, sensors, instruments, intelligent terminals, and other devices can collect a large amount of machine data, as the main source of industrial big data. Under the background of increasingly socialized mass production and global economic integration, all links of the supply chain, such as raw material supply, production, logistics, consumption, processing, distribution, and retail must cooperate closely. Nevertheless, the coordination and management in all links, including inventory management, are still relatively closed, significantly reducing the comprehensive benefits of the overall supply chain.
The industrial production data is investigated here based on the analysis of the related concepts and production modes of supply chain and cloud manufacturing. Then, the demand prediction method for different types of industrial spare parts and the inventory management system are proposed via cloud-edge collaborative computing. The purpose of this work is to optimize inventory management and utilization efficiency by predicting the demand for vulnerable spare parts, and improve the performance of inventory management system with the advantage of cloud-edge collaboration computing. Moreover, cloud computing and IoT technology are utilized to explore the implementation method of refining the traditional inventory management of the supply chain. The innovation of this study is that corresponding demand prediction methods are studied separately according to three demand modes of vulnerable spare parts, namely periodic demand, stationary demand, and trend demand. Specifically, the simple exponential smoothing method is used to predict demand of stationary spare parts. The quadratic exponential smoothing method is selected to predict the linear demand, and the feature synthesis method is proposed for forecasting the spare parts with periodic demand mode. On this basis, edge computing is employed to develop a cloud-edge collaborative computing architecture, to optimize the spare parts prediction algorithm and improve inventory management efficiency and pertinence.
Overview and status of supply chain inventory management
IoT technology is the combination of intelligent recognition technology, wireless sensor technology, ubiquitous computing technology, and network technology. The global IoT network is still in the stage of concept, demonstration, and test, many key technologies need to be further studied, and standardization norms need to be further developed. However, it has triggered the third wave of information industry development in the world after computers and the Internet, which is an impactful upgrade of the application of information technology to human production and life. Supply chain is a kind of complete and functional network chain consisting of suppliers, manufacturers, distributors, retailers and end users centering on the business center enterprise and formed through controlling feed-forward information flow and the feedback of material flow and information flow [7]. There are diversified research works about supply chain inventory management. Bornkamp (2019) emphasized the importance of supply chain in his research. The author believed that the renegotiation of the UK-EU relationship would most likely take several years, but European distributors had to assess their current inventory management to mitigate future disruptions. Moreover, with the political pattern continuing to change, the growing e-commerce market would bring trade growth, so managing availability and distribution of inventory was critical to reducing overall costs, improving cash flows and increasing flexibility in supply chain operations, in order to effectively serve the European market [8]. Aaha et al. [9] analyzed six professional education courses offered by THE Council of Supply Chain Management Professionals, including senior certified professional forecaster, certified production and inventory management professionals, certified supply management professionals, and supply chain professionals. They took personal interests and organizational interests as the two main standards, and took the professional education plan as an alternative [9]. Evidently, people have gradually realized the significance of supply chain, and delved into supply chain deeply and professionally. Fig 1 reveals the basic structure of the supply chain.
The supply chain not only aggerates the logistics information and funds of suppliers and users, but also forms its own value. In the distribution link of the supply chain, the appreciation in products has been achieved through packaging, processing, transportation, and delivery. Supply chain inventory management is the process of defining the overall goal of supply chain inventory management and reviewing the inventory strategy of enterprises on supply chain nodes. The supply chain inventory management aims to sustain the optimal overall supply chain inventory and reduce the total inventory to respond to changing market demands. The improvement of the cost of supply chain inventory and supply chain can enhance the rapid response of inventory to the market.
Introduction to concepts related to cloud-edge collaboration for logistics management
Cloud manufacturing includes cloud-edge collaboration technology, AI service technology, container-based platform service technology, digital twins service technology, data security, and other related technologies. It is a novel type of digital, intelligent, and smart networked manufacturing with Chinese characteristics. Fig 2 reveals the overall schematic of the system of cloud manufacturing technology.
The foundation of intelligent cloud manufacturing is a ubiquitous and human-centered network, which integrates digital technology such as information manufacturing technology and intelligent technology comprehensively [10]. The cloud manufacturing system enables users to obtain manufacturing resources and capabilities according to their own needs anytime and anywhere through the cloud-based manufacturing service platform, and intelligently perform various activities throughout the life cycle.
In the industrial field, the IoT proactively identifies and remotely controls all physical devices in the cloud manufacturing scenario of existing network infrastructure, and obtains content in the physical world (real space) in the information world (cyberspace). The data reflects the whole life cycle of the corresponding physical equipment, and realizes the digital twins [11,12].
Internet technology facilitates the active and independent analysis of industrial product manufacturing process, generates intelligent perception and active prediction of the outside world, and forms a closed-loop process of automatic repair and complete feedback. With the emergence of intelligent control, industrial IoT can optimize all aspects of industrial systems, including intelligent manufacturing and business systems, real-time monitoring, supply chain collaboration, value-added services, and other business needs. The wide application of industrial IoT technology makes the production process more active and intelligent, which can accurately predict and effectively solve the potential obstacles, to effectively increase corporate profits [13,14].
The continuous development of the mobile Internet has brought new convenience for people's life and production, as well as more needs and challenges, such as higher requirement of timeliness, security, and reliability. Hence, edge computing is needed to improve cloud computing ability. Many problems such as single-point faults may occur in industrial applications. In addition to the unified control of the cloud, the edge nodes have the computing ability to independently make decisions and solve problems, which can improve factory productivity, while avoiding equipment failure. In IoT scenarios, edge computing focuses on solving problems of lightweight data size closer to the user's by transferring computing operation [15]. Therefore, it cannot completely replace cloud computing, but assists cloud computing to improve work efficiency. With the deepening of industry research and academic research, cloud collaboration is widely used in numerous fields such as medical treatment, industry, and finance. Cloud-edge collaborative architecture can balance the load and reduce the hardware requirements of edge devices, making the peripheral equipment more convenient while maintaining the capacity [16,17].
Demand prediction of vulnerable spare parts in IoT supply chain environment
In the cloud manufacturing scenario, the amount of data sent by the terminal equipment deployed in each plant is different for various plant equipment and actual business needs. Therefore, it is necessary to design a scheme for the edge server equipped in different plants to effectively reduce the procurement funds of enterprises and avoid the waste of limited resources. Based on this consideration, a demand prediction method is proposed for vulnerable spare parts, and it is combined with the cloud-edge cooperative inventory management system to improve the efficiency and quality of inventory management.
Timely maintenance and supply of spare parts are two important components of the aftersales service system provided by large equipment manufacturers in the service network [18]. Among them, the efficiency of spare parts inventory management determines whether spare parts can reach the demand in time, which directly affects the market competitiveness of service systems and manufacturers. In the IoT era, many consumption data and consumption behaviors based on IoT provide sufficient data basis for market demand prediction. Shen et al.
(2020) extracted knowledge from user generated content and depicted the differences between IT service companies' use of social media and users' expectations based on daily interaction between suppliers with customers [19]. The data analysis method is also adopted here to forecast the spare parts demand.
The purpose of inventory management is to deal with various changes and uncertainties in spare part supply to ensure the normal operation of spare part supply. According to the function and direction of spare parts, they can be divided into two categories: maintenance spare parts and service spare parts.
The function of maintenance spare parts is to ensure the normal operation of production equipment, while the function of service spare parts is to ensure the after-sales service of products. Different types of spare parts have diverse inventory management purposes and management methods [20]. In summary, in the case of low total cost of spare parts inventory, it is very practical to study how to optimize the inventory management system according to the actual situation of enterprises to achieve a significant improvement in service level. The spare parts inventory management strategy includes spare parts classification, spare parts demand analysis, spare parts shortage management, spare parts inventory mode, and inventory strategy.
The common vulnerable parts of pump trucks in industrial production are taken as the research object here to predict their needs, including conveying cylinder, concrete piston, and cutting ring, usually with relatively large demands. Through the analysis of the sales volume of concrete piston in different regions, the demand is classified into the following three categories: periodic demand time series, demand time series with rising trend, and stable demand time series [21].
The prediction based on periodic demand time series is first discussed. Spare parts with periodic changes in demand patterns include random components and periodic components in the past demand time series [22,23]. The proposed prediction method calculates the cycle length according to the time series of spare parts demand in the past, calculates the demand data of original spare parts according to the cycle length, and divides each segment and performs polynomial fitting. The polynomial function of each cycle is integrated to obtain a new polynomial function to extract periodic ports and remove random factors, which is used as a prediction model to predict the demand of the next period.
Generally, the demand data of a part may have a constant cycle, but the cycle means that the demand data of the cycle interval have similar fluctuations rather than being identical to each other [24,25]. To detect the period of a time series, the most important thing is to solve the problem of accurately measuring the similarity of time series. For the similarity measurement of two time series, most studies adopt the Euclidean distance method. The smaller the distance measure is, the more similar the two time series data are. The Euclidean distance can be expressed as Eq (1).
dðT; SÞ ¼ 1 n ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffiffi In Eq (1), T is the target time series, S denotes the time series needed for similarity measurement, and n represents the length of two time series. Besides, t i or s i refers to a factor at a time in a time series.
Euclidean distance represents the proximity of distance between two time series, but does not reflect the dynamic trend. The similarity between the two data can also be proved by the fact that the overall trend of variability is consistent and uniformly correlated. Therefore, correlation coefficient can be used as another measure of similarity, which can be written as Eq (2).
In Eq (2), P TS refers to the correlation coefficient of time series. Meanwhile, d(T, S) stands for the Euclidean distance of two time series data, and f(T, S) is the similarity measure function.
Eq (3) indicates the detection of period length based on similarity.
In Eq (3), X stands for a given time series. Besides, D represents a fragment in a given time series, and C denotes the time length of the fragment.
In Eq (4), the value of a is 1, and the value of b is n/2. α signifies the threshold.
Once the duration length period of the time series is calculated, the whole time series can be divided into multiple duration periods according to the duration length period. The analytic expression of the function for each cycle is unknown, but the data points on each cycle are known. It is essential for the extraction of the periodic function from each cycle to analyze the known cycle so that the internal data points are matched with the function. Due to the different influence of external factors on spare parts demand in each period, the time period extracted by fitting function cannot represent the periodic trend of the whole time series [26]. Therefore, the fitting functions of each period are integrated to form a new adjustment function to remove the influence of random factors, which is used as the periodic equation of all periods (time series), as shown in Eq (5).
There are several methods to predict the continuous demand of spare parts for the nonperiodic demand time series, such as the exponential smoothing method and the weighted moving average method. The exponential smoothing method is an improvement of moving average method characterized by simple form, easy implementation, and high precision, which can accurately reflect the changes in demand data and is widely used in practice. Therefore, the exponential smoothing method is selected as the spare parts demand prediction method based on aperiodic demand time series here.
When the spare part data does not follow a linear trend, the demand model of exponential smoothing prediction is presented in Eq (6).
In Eq (6), a represents a smoothing constant, and m = tþ1 denotes the predicted value of the (t+1) period.
When the spare parts data does not meet the linear trend, the demand model for exponential smoothing prediction needs to be smoothed twice, as shown in Eqs (7) and (8), respectively.
Among Eqs (7) and (8), a denotes the smooth constant, and S t+1 stands for the smooth value of (t+1) period.
For the prediction of intermittent time series, the intermittent demand time of spare parts has two characteristics. (1) There is less demand. In other words, there is no demand during this period. (2) There is great volatility in demand value. These two characteristics cause a large prediction error of intermittent time series [27]. Furthermore, the time aggregation prediction method is used to predict the demand of intermittent time series.
Inventory management system based on cloud-edge collaboration
Core competitiveness is crucial to large equipment manufacturers, because efficient management of spare parts inventory can effectively reduce costs and improve service levels. The engineering machinery and equipment usually have a complex structure and various components and parts. However, the existing spare parts inventory management is still cumbersome and unsystematic, which determines inventory according to personal experience and plans demands according to inventory proportion, bringing great pressure to the production department and other related departments. The solution of traditional cloud computing architecture is to download the sensor data of various factory equipment, and use the data analysis technique of big data. Meanwhile, it transits the downloaded data to remote cloud servers through the data acquisition module, to improve work efficiency and competitiveness [28,29]. Here, the cloud-edge collaborative computing in industrial IoT is proposed to solve the rapid response problem of real-time control and data fast processing in large-scale manufacturing plants. Fig 5 provides the architecture of cloud-edge collaborative computing.
The deployment of industrial IoT in intelligent manufacturing environment mainly contains the equipment perception layer, data resource layer, service application layer, and operation and maintenance management layer, which work together to maintain all data links [30,31]. From the specific business point of view, the cloud components are mainly responsible for the formation of the model of the collected data, and the peripheral components are basically responsible for obtaining the model from the data dictionary, providing timely services for factory equipment in real time. Reducing the training time of models and networks can shorten the response time of the closed-loop system and improve the overall production quality of the plant equipment. OpenStpack and Starling X enable companies to build their own cloud-edge collaborative computing services using the most advanced open-source cloud computing platform and the latest distributed cloud computing platform, respectively.
The solution of traditional cloud computing architecture is to upload all kinds of sensor data from factory equipment, such as vibration, pressure, and temperature, to the cloud remote server through data acquisition module. Besides, it utilizes the popular big data analysis technology to establish the mathematical model of index data and factory equipment performance, to enhance the production quality, work efficiency, and market competitiveness of factory equipment. Taking the coal industry as an example, the mine is generally located in a remote location where it is difficult to implement network communication. Due to the characteristics of large scale, numerous varieties, low value density, and fast update and processing requirements of coal mine data, the traditional cloud computing architecture is inadequate, because it is easy to produce problems of single point faults and slow closed-loop response. Based on the above analysis, the cloud-edge collaborative computing architecture is selected for the industrial IoT to cope with the problems of fast real-time control response and fast data calculation in large manufacturing workshops. Fig 6 illustrates the workflow of cloud-edge collaborative computing architecture, where various data acquisition devices and user requests are collectively referred to as collectors. The smart endpoint simply pre-processes information from the collectors and sends it to the computing node in the edge server cluster [32]. Then, the I/O intensive virtual machine on the computing node receives the information and stores it in the database on the storage node.
The following is the specific processing of the edge server: 1) the intelligent terminal sends the collected data to the edge data storage module; 2) data processing module retrieves the corresponding data from the edge data storage module according to the user's request; 3) data processing module carries out lightweight big data analysis according to the model parameters provided by the data dictionary module. Besides, the edge data dictionary module is analyzed and synchronized; 4) the decision module outputs the processing results of the data processing module to the intelligent equipment and checks them accordingly.
The procedure of the remote centralized server is as follows: 1) the edge server and remote centralized data storage module synchronize incremental data; 2) data processing module retrieves data from the remote centralized data storage module according to user needs; 3) the data processing module conducts large-scale big data analysis according to the model parameters provided by the data dictionary module.
The analysis and synchronization of the remote data dictionary module are presented as follows: 1) edge server synchronizes incremental data with the remote centralized data storage module; 2) the data processing module retrieves data from remote centralized data storage module according to user needs; 3) the data processing module conducts large-scale big data analysis according to the model parameters provided by the data dictionary module. Meanwhile, analysis and synchronization are performed on the remote data dictionary module; 4) the remote data dictionary module synchronizes data processing with edge data dictionary module according to specific requirements.
Edge servers and remote centralized servers regularly analyze and use stored data, and the data dictionary is updated to ensure the correctness of the decision message.
Simulation and experimental design
Three time series prediction methods are provided for the demand prediction of vulnerable parts based on spare parts. The demand data of high-strength circular chains in the mining industry is used here for verification. The circular chain is also a spare part of construction machinery, and the experimental data comes from the network. In the simulation experiment, the genetic algorithm is introduced as a comparative algorithm to verify the performance of the inventory management system based on cloud-edge collaborative computing architecture. Table 1 indicates the task parameters under different configurations in this experiment.
Analysis of demand prediction results and the performance verification of inventory management system
Comparison results of the prediction method based on demand of vulnerable spare parts. After the prediction model is established, the predicted value of spare parts demand is calculated to be compared with the true value. The polynomial is established and fitted according to the period length. Fig 7 illustrates the relationship between fitting times and prediction errors. Fig 7 shows that the prediction error decreases first and then increases with the increase of fitting times. When the fitting time of the polynomial reaches 10, the prediction error begins to stabilize. After 13 times of fitting of the polynomial, the prediction error reaches a minimum of 11.7%, and then begins to increase. Based on this result, in the following simulation experiment, 13 times of fitting are used to obtain the fitting polynomial of each section when the polynomial regression model is used to fit the demand data of spare parts. The eigenvalues and the weighted fitting process of each cycle are shown in Fig 8. determination of polynomial degree. Through experimental analysis, the prediction accuracy is the highest when w = 0.1, w = 0.1, and w = 0.8. Fig 9A illustrates the mean value of the sum of eigenvalues and the weighted sum of eigenvalues shown in Fig 8A. Fig 9 provides the prediction results of spare parts demand based on the weighted synthesized eigenvalues.
According to Fig 9b, from a macro perspective, the prediction result based on the weighted sum is closer to the real value than the prediction based on the mean value of sum of the eigenvalues. From the perspective of error value, the highest prediction error based on the weighted eigenvalues is 34.9%, and the lowest is 2.2%. Through the comparison of error in Fig 9C, the average relative error based on the weighted fitting is lower than that based on the mean value of the sum of eigenvalues, the former is 11.7%, and the latter is 18.4%. Therefore, the prediction accuracy of the prediction model established by the weighted fitting method is higher. To sum up, the prediction method based on weighted fitting of eigenvalues has the smallest error and the best fitting effect in the demand prediction of machine spare parts.
Verification results of the perdition method based on vulnerable spare parts demand. The simulation experiment adopts the moving average period coefficient prediction of the prediction method based on weighted eigenvalues as a comparison with the true value. The specific results are presented in Fig 10. In Fig 10A, a n refers to the first set of eigenvalues, b n denotes the second set of eigenvalues, c n represents the eigenvalues after fitting, the value range of cycle length is 1~13, and the threshold is 10% of the mean value. Polynomial fitting is carried out for the first two data segments to obtain the periodic term of the data segment, which is used to predict the true value of the third cycle segment. When n = 10, the prediction error is the smallest, so the degree of the fitting polynomial is n = 10. The fitting polynomial function of each segment is obtained. From Fig 10, the average relative error between the actual value of spare parts demand and the predicted value is 9.4%. When the moving period coefficient method is used to predict the demand for spare parts, the average relative error between the predicted value and the actual value is 13.0%.
The proposed prediction method is also used to predict the demand of the circular chain, and the results are compared with those of the moving average period coefficient method, to further verify the advantages of this method. The comparison results are shown in Fig 11. From Fig 11, the average absolute error of the actual value and predicted value of spare parts demand based on the moving average period coefficient method is 286.8, and the average relative error is 12.8%. The average absolute error of the polynomial fitting model is 250.7, and the average relative error is 11.7%. Therefore, the proposed prediction mode has a better prediction effect.
PLOS ONE
The prediction results of exponential smoothing method and quadratic exponential smoothing method are shown in Fig 12. The simple smoothing index prediction method is used to investigate the spare parts demand data with the nonlinear trend. According to Fig 12A, the predicted value of spare parts demand by smoothing index prediction method is close to the actual value, and the average relative error is 18.0%. The quadratic smoothing index prediction method is aimed at the spare parts demand data with linear trend. From the results in Fig 12B, the predicted value of demand of the quadratic exponential smoothing method is close to the actual value, and the average relative error is 11.3%. In conclusion, the exponential smoothing method and quadratic exponential smoothing method both have high prediction accuracy in spare parts demand.
To sum up, the cycle length detection method based on similarity is adopted to calculate the cycle length. Then, the data is divided into several segments according to its cycle length, and polynomials are used to fit the data in the cycle segment. Moreover, the polynomials are synthesized to obtain a new polynomial function, which is used as the prediction model to predict the demand in the next cycle. The experimental results demonstrate that this prediction method can achieve high prediction accuracy.
Performance verification results of inventory management system based on cloud-edge collaborative computing. The algorithm of the inventory management system optimizes the resource allocation for virtual machines from the impact of virtual machines on the performance of physical machines and the impact of different configurations of virtual machines on task execution time. Table 2 machine performance algorithm has shorter processing time and higher efficiency than genetic algorithm. In terms of stability, the genetic algorithm fluctuates greatly, so the proposed algorithm has higher stability.
In conclusion, in the prediction of spare parts demand with strong periodicity, the prediction method based on weighted fitting of eigenvalues has the smallest error and the optimal fitting effect in the prediction of machine spare parts demand, and the lowest error after fitting is only 2.2%. For spare parts with non-periodic linear demand and spare parts with nonlinear demand, exponential smoothing method and quadratic exponential smoothing method are used for prediction respectively, and the prediction results are close to the actual value. The spare parts demand prediction method proposed here can well complete the prediction for three different types of time series of demand data of spare parts, and the relative error of prediction is maintained at about 10%. The prediction effect can meet the basic requirements of spare parts demand prediction, and the prediction accuracy is higher than that of periodic prediction method. Compared with genetic algorithm, the cloud-edge collaborative computing algorithm for inventory management system takes less processing time and has higher efficiency. In terms of stability, genetic algorithm fluctuates greatly, but the algorithm reported here is much more stable.
Conclusions
Efficient spare parts inventory management can reduce inventory costs, improve service level, and bring huge benefits to large equipment manufacturing enterprises. There are a variety of spare parts for large-scale equipment as well as many uncertain factors in the supply process. Therefore, it is essential to continuously update relevant technologies for higher efficiency of spare parts inventory management, to save inventory costs. Based on the supply chain background, the critical role of inventory management plan and spare parts demand relationship in improving the core competitiveness of enterprises. Secondly, according to different types of spare parts demand prediction data, different spare parts and demand prediction methods for vulnerable parts are proposed. In addition, the efficiency of inventory management is improved by predicting the demand for industrial vulnerable parts. For the three spare demand models of vulnerable parts, including periodic model, stationary model, and trend model, the corresponding demand forecasting methods are studied respectively. The simple exponential smoothing method is used to predict the spare parts with stable demands, while the quadratic exponential smoothing method is used to predict the demand for spare parts with linear trend. Meanwhile, the prediction method based on weighted fitting of eigenvalues is adopted to predict the periodical demand of machine spare parts. Finally, an inventory management system based on cloud-edge collaborative computing is proposed to reasonably allocate inventory resources and improve the utilization of inventory resources. The prediction method based on weighted fitting of eigenvalues proposed here has the smallest error and the best fitting effect in the demand prediction of machine spare parts, and the lowest error after fitting is only 2.2%. Exponential smoothing method and quadratic exponential smoothing method are used for spare parts with non-periodic linear demands and spare parts with nonlinear demands, respectively, and the prediction results are close to the actual values. In terms of completion time, the virtual machine performance algorithm reported here realizes shorter processing time and higher efficiency than genetic algorithm. In terms of stability, this research algorithm is much more stable than the genetic algorithm. Despite particular outcomes achieved in this work, due to the limitations of research level and some objective factors, there are still some deficiencies. On the one hand, there remains space for improvement in the relative error of the prediction method for vulnerable spare parts proposed here. It is expected to further improve the accuracy and efficiency of prediction by introducing the deep learning algorithm in future. On the other hand, there lacks the combination of the prediction method based on vulnerable spare parts and the inventory management system based on cloud-edge collaborative computing reported here. The follow-up work will make efforts to integrate spare parts demand forecasting and inventory resource management into one intelligent system. | 8,114.4 | 2021-11-03T00:00:00.000 | [
"Computer Science",
"Engineering",
"Business"
] |
Cross-modal semantic autoencoder with embedding consensus
Cross-modal retrieval has become a topic of popularity, since multi-data is heterogeneous and the similarities between different forms of information are worthy of attention. Traditional single-modal methods reconstruct the original information and lack of considering the semantic similarity between different data. In this work, a cross-modal semantic autoencoder with embedding consensus (CSAEC) is proposed, mapping the original data to a low-dimensional shared space to retain semantic information. Considering the similarity between the modalities, an automatic encoder is utilized to associate the feature projection to the semantic code vector. In addition, regularization and sparse constraints are applied to low-dimensional matrices to balance reconstruction errors. The high dimensional data is transformed into semantic code vector. Different models are constrained by parameters to achieve denoising. The experiments on four multi-modal data sets show that the query results are improved and effective cross-modal retrieval is achieved. Further, CSAEC can also be applied to fields related to computer and network such as deep and subspace learning. The model breaks through the obstacles in traditional methods, using deep learning methods innovatively to convert multi-modal data into abstract expression, which can get better accuracy and achieve better results in recognition.
Parameter settings. The spatial dimensions of WIKI, TVGraz, NUS-WIDE, and MIRFLICKR are set to 10, 20, 10, and 40, respectively. We constantly adjust parameters within the range of 0.001, 0.01, 0.1, 1, 10 to analyze the performance of CSAEC. For several other methods, we set the parameter values according to the corresponding data set. For the data set, we randomly divide it into parts, one of which is the test data, and the rest is the unlabeled pool for active selection. The random data partition is repeated for ten times and average results over them are reported as the final model evaluation.
Complexity analysis. We set n ≥ d . The complexity of eigenvalue decomposition is O(n 3 ) . When n is large, we can get the results with iterative algorithms to prove the precision of our proposed methods. The largest d eigenvalues may exit with different datasets. Obviously, the size of feature dimension influence the complexity. We just caculate with O(knd 3 ) and k is the number of iterations.
Mean average precision (MAP) results of different methods. Mean Average Precision (MAP) is
used to evaluate the validity of the retrieval results of different methods. R is the threshold for Precision-Recall (PR) Curves. Assuming that there are some positive examples in the datasets, we can get the corresponding values r. For each value of r, we can calculate the maximum precision when r > R . In order to verify the performance of CSAEC, two types of directional cross-pattern retrieval tasks were performed: image-text query and text-image query. If the labels of the two types of data points are the same, the information is considered to have the relevance.
The methods are compared on the WIKI dataset. It can be observed from Table 1 that the performance of the CSAEC method in this paper has improved significantly.
Average ranks by each algorithm provide avaluable comparison.Let r j i(m) denotes that the rank of jth of m algorithm applied to the ith dataset. Then the average rank of k algorithm can be expressed as R j m = 1 n i r j i(m) . Then establish a null assuming that all algorithms have strong similarities, which states that the ranks R j should be equaled. The Friedman test testifies whether the calculated average ranks should have significantly difference from the mean rank expected under the null hypothesis. When R = all , the Friedman statistic can be calculated as With four data sets and six algorithms, F F is distributed according to the F-distribution with (6 − 1) = 5 and (6 − 1) (4 − 1) = 15 degrees of freedom. The p-value calculated with F(5, 15) distribution has proved the null hypothesis can be rejected at a high level of significance. The reason may be that CSAEC uses the embedding matrix while preserving the original features and semantic information. Semantic information provides interactive information between modalities and information within each modality, while original feature information takes into account of the similarity between modalities.
As can be seen from Table 2, on TVGraz dataset, CSAEC also achieved the best results for the two types of retrieval tasks. Our method improves the performance of image query text tasks better than text query images. Compared with other methods, the query results are improved. Table 3 shows the MAP of each method on the NUS-WIDE dataset. The LGCFL and CSAEC methods perform better than CCA because both consider semantic information. The NUS-WIDE dataset is larger than the WIKI and MIRFLICKR datasets, so the semantic information has more interaction in NUS-WIDE, and similar information between different modal data can be found as much as possible.
On the MIRFLICKR dataset, it can be seen from Table 4 that the MAP value of this method is better than other methods, and the effect of JFSSL is second. The CSAEC method has the ability to retain both original .392 4 · 6 − 11.392 = 2.71 www.nature.com/scientificreports/ features and semantic information, and learns the feature code vector of the semantic tag space. This shows that CSAEC and JFSSL are effective for querying spatial information with labels.
Precision-Recall (PR) curves of different methods. It can be seen from Fig. 1 that for the image-text query task, the overall CSAEC query effect exceeds almost all other methods. On the MIRFLICKR dataset, the minimum accuracy of each method is higher. On the NUS-WIDE dataset, the performance advantage of CSAEC is more obvious. Overall, CSAEC improves the performance of image query text tasks. For text-image query tasks, CSAEC has higher recall rate than the other methods on the four benchmark data sets.
Parameter sensitivity. In Fig. 2, we analyze the impact of parameters. On the WIKI and NUS-WIDE datasets, the two parameter values are adjusted within the range of 0.001, 0.01, 0.1, 1, 10, and their changes are shown in Fig. 3. It can be seen that when the parameters change, the effect of CSAEC will be different, and its query performance is more sensitive than other methods. When the range is from 0.001 to 1, this method can get better results.
Loss analysis. Figure 3 shows the convergence loss curve of the method in this paper. We perform CSAEC over 10 iterations on all datasets. It can be seen that on WIKI and NUS-WIDE, as the number of iterations increases, the loss value continues to decrease. After fewer iterations, the loss has been reduced and stabilized, and the method is considered to be convergent in the end.
Discussion
The research on cross-modal retrieval technology has attracted much attention and is beginning to be put into practice. In addition, the semantic gap between the low-level features and high-level semantic features in the multi-modal dataset is a huge challenge. The bottleneck in accuracy and quality lies in the key factors. Researchers work on the construction of similarity constraints through category labels, but the methods are limited. Study the special correlation between multi-modal data is of great urgency. Semantic information is significant knowledge retained during querying. Different forms of data have different feature spaces, but they have the same semantic space. Data with the same semantics are related in various forms. Semantic information can be used not only to indicate the degree of association between multiple modalities, but also to indicate the connections within each modality.
In this work, an effective cross-modal retrieval method CSAEC is proposed. By embedding mapping consensus on multi-modal data, while retaining the original feature information and semantic information, a semantic code vector is obtained. The paired encoder-decoders are linearly symmetric, returning feature projections to the original data, minimizing reconstruction errors. Parameters are introduced in the objective function with regularization sparse constraints. Experiments show that the autoencoder effectively completes the query task and improves the retrieval performance.
Cross-modal retrieval technology involves basic knowledge related to mathematics, and statistics to meet the needs of the application. Also, CSAEC can be applied to fields related to computers and networks such as deep and subspace learning. Further, CSAEC will play a great role in the field of recognition and analysis. In the next step, characteristics of the human body, such as facial expression and body movement, can be used on the deep neural network model to perform simultaneous features on multiple modal learning. Datasets can be unified to the same feature space as semantic expression through multiple nonlinear transformations. CSAEC can restore more similarities between image and text information for feature extraction. The model takes into account of different modalities and the importance of tasks for machine learning. The model breaks through the obstacles in traditional methods, using deep learning methods innovatively to convert multi modal data into abstract expression, which can get better accuracy and achieve better results in recognition.
Methods
Related work. Cross-modal similarity learning has aroused great attention in the academic community.
However, the heterogeneity of data and the existence of semantic differences makes this problem challenging. At present, the two most common measurement methods are maximizing correlation and minimizing Euclidean distance 25 . The typical methods to maximize correlation are CCA 23 and improved methods, learning a latent space that maximizes the correlation between the projection features of the two modalities. Reference 26 used CCA to obtain the shared potential space of 2D and 3D facial images corresponding to people. PLS and BLM www.nature.com/scientificreports/ are methods to minimize Euclidean distance. Sharma and Jacobs 27 used PLS to achieve heterogeneous facial recognition in different poses, high-resolution and low-resolution facial images, and between photos and sketches. Bilinear models (BLM) are used for cross-media retrieval and heterogeneous face recognition 2 . An autoencoder is an unsupervised neural network model. It learns the hidden features of the input data, which is called encoding. Meanwhile, CSAEC reconstruct the original input data using the learned new features, which is called decoding. Autoencoders 28 are trained models for learning potential representations of a set of data. CSAEC uses training data sets to copy the input information to the output. Therefore, the underlying representation is a valid attribute. Some scholars have proposed deformation methods for autoencoders. Reference 15 correlated potential representations of two single-mode autoencoders. Kodirov et al. 16 learned the semantic code vectors of latent space. Lange et al. 29 combined the training of deep autoencoders (for learning www.nature.com/scientificreports/ compact feature spaces) with RL algorithms (for learning strategies). RL is short for Reinforcement Learning. Tara et al. 30 used the training set to apply the AE-BN mode. The traditional autoencoder simply seeked potential representations to reconstruct the original data, and the method conducted the similarity with semantic code vectors. Inspired of related work, we improve existing methods and constructs a set of cross-modal semantic autoencoder with embedding consensus (CSAEC). The process is shown in Fig. 4. The paired image-text data is uniformly mapped to a low-dimensional embedding space, the manifold structure is retained, and the original information is converted into corresponding semantic code vectors. The consensus matrix and semantic code matrix are continuously updated. Further, by learning the image and text projection matrices, the encoders are used to associate them with corresponding semantic codes, and the decoder is reprojected back to the highdimensional data. In addition, regularization and sparse constraints are performed on the decoder. Balanced parameters are used to reconstruct the original features. As a result, the method performs effectively on the retrieval of multi-modal information. Mapping consensus mainly deals with the problem of multi-mapping disagreements. Since every data point is different, according to the mapping process, the representation of the same data point can be mapped into the latent embedding space. In this occasion, mapping conflict may occur. The reason is that the data point is unique which leads to different mapping results. The aim of mapping consensus is preserving validity of mappings and avoid mapping conflict. Considering of a fixed object N) for each value of d, where U i is the definite representation of this point in latent embedding space and ϕ d is the latent embedding mapping for d-th dimension. Embedding consensus matrix realizes the unity of each pair of image and text information mapping results, and further learns the semantic code vector. Manifold dimension reduction preserves the local geometry of the original data points. To prevent the results from being affected by the noise data, the parameter γ d i is introduced. When sum up all of the d 1, 2, . . . , D) , the D d=1 γ d i can be transformed into diag(γ i ) . So we get is a low-dimensional embedding matrix, which retains the manifold structure of the original information. Let Figure 4. The process of CSAEC. We map the datasets to an embedding space, learn projections by multimodal semantic autoencoder and reconstruct original features. (V T) is the original data matrix, U i is a lowdimensional consensus vector of embedding consensus ϕ d , W is a low-dimensional embedding matrix, C is the corresponding semantic code. Two encoders P v , P t project image and text data into low-dimensional space A, and two decoders reproject A back to high-dimensional data. www.nature.com/scientificreports/ the data is transformed into the corresponding semantic code vector by embedding the consensus matrix. To eliminate the influence of noise, when the mapping result of data (v i , t i ) is abnormal, γ d i tends to 0. The corresponding features are extracted using the original image and text information. W i can be written as W i = WE i , where E i = (e T i , . . . , e T N+(i−1)D+1 , . . . , e T N+iD ) is the feature matrix.
Sum the N components of images and text in each dimension
Denote where H is the correlation matrix between the mapping points and the original data points, and D is the diagonal matrix. Using matrix C, image and text information can be converted into corresponding semantic codes.
the final expression is
The variables in the objective function are relatively complex, and each univariate is solved by using an iterative update method.
First, fix C, U and update .
Since � = (V T)ϕ T , the objective function can be transformed into a solution for a single variable where Find the partial derivatives of ϕ T Second, fix , U and update C.
The expression becomes
The solution of C can be referenced to 31 . Third, fix , C and update U The update process is transformed into a single variable U ϕ = diag(ϕ 1 , . . . , ϕ D ), C i = −e T D+1 I D+1 diag(γ i )(e D+1 I D+1 ) www.nature.com/scientificreports/ Find the partial derivative of U Cross-modal semantic autoencoder. By mapping the image and text to the embedding consensus space, CSAEC can contain enough raw data information. V ∈ R d v ×n , T ∈ R d t ×n denote the visual and textual feature matrices, respectively, where d v and d t are the visual and textual feature dimensionalities. The following is to learn the projection matrix P v ∈ R d×d v , P t ∈ R d×d t separately: the encoder connects the image and text projection with the semantic code vector C, and the decoder is restricted so that the code vector can reconstruct the original features of the image and text. The encoder and decoder are linearly symmetric. Two encoders P v , P t project image and text data into low-dimensional space A, and two decoders reproject A back to highdimensional data. The hidden layer contains both image and text information.
For the image data, the embedding form of the automatic encoder is used to represent the information of the original features. The image-text paired representation should be unified, since in the retrieval stage, when the query information is given, the query will be sorted according to the similarity. So, we get where A ∈ R d×n represents n groups of training texts in a d-dimensional hidden space. The additional reconstruction task imposes a new constraint in learning of the projection function so that the projection must preserve all the information contained in the original textual features. For image modality, we also adopt an autoencoder to let the embeddings contain information from original visual features. We hope the representations of image-text pairs in the hidden space to be uniform. This form is a binding linear autoencoder 18 and has only one hidden layer.
For text data, to make sure of the low-dimensional ability to restore the original information points, let For each data point v i (i = 1 , 2, …, N), it can be approximated as a linear combination of all the other samples. Based on the mapping consensus we have proposed, the datasets ϕ d : In this way, the feature matrices V ∈ R d×n , T ∈ R d×n , P v ∈ R d×d , P t ∈ R d×d . Then by imposing sparsity on the matrix A and the projection matrix P v to the process of reconstruction, the optimal sparse combination matrix A and projection matrix P v can be obtained by solving the problem where a i is the ith column vector of the matrix A. As in the manifold learning methods, P v V should satisfy the orthogonal constraint. Through the sparsity constraint, the information captured by A can be used to search the relevant features and eliminate the effect of noise features. The function for structure learning is formulated as According to the expressions above, a multi-modal autoencoder can be obtained. Also, we make sure that the hidden layer contains enough semantic information. The hidden representation of the data is associated with the semantic code vector C. Considering the similarity between different modalities, we use tag information to standardize the potential representation of the autoencoder. The reference 18 has minimized the function by summing up the low-dimensional information of visual and text datasets. This method 18 relaxed the constraints and rewrite the objective of multi-modal autoencoder. In this way, the results have been improved. In retrieval phase, when a query is given, documents are sorted according to their similarity to the query. To guarantee the projected images and texts containing both semantic information and original feature information, we propose an improved autoencoder. On this basis, a regularization sparse constraint on the low-dimensional matrix A is added to obtain the final objective function where β is the weight parameter for balancing the two types of data information, and is a parameter that determines importance of semantic code vector.
We also use alternating iterative updating methods to solve the objective function separately. First, fix A and update P v , P t The solutions of the projection matrix P v , P t are similar. Let 2C 11 U T + �C 21 + � T C 12 T P v = I, P T v VL A V T P v = I, P t T P t = I, P T t TL A T T P t = I, A ii = 0 s.t. P v T P v = I, P T v VL A V T P v = I, A ii = 0 (2) min P t �P t T − A� 2 F + βtr(P t T TL A T T P t ) s.t. P t T P t = I, P t T TL A T T P t = I, A ii = 0 + 2β(V T P v P T v V + V T P v P T v VA + T T P t P T t T + T T P t P T t TA) | 4,655 | 2021-10-13T00:00:00.000 | [
"Computer Science"
] |
Electrochemical study on nickel aluminum layered double hydroxides as high-performance electrode material for lithium-ion batteries based on sodium alginate binder
Nickel aluminum layered double hydroxide (NiAl LDH) with nitrate in its interlayer is investigated as a negative electrode material for lithium-ion batteries (LIBs). The effect of the potential range (i.e., 0.01–3.0 V and 0.4–3.0 V vs. Li+/Li) and of the binder on the performance of the material is investigated in 1 M LiPF6 in EC/DMC vs. Li. The NiAl LDH electrode based on sodium alginate (SA) binder shows a high initial discharge specific capacity of 2586 mAh g−1 at 0.05 A g−1 and good stability in the potential range of 0.01–3.0 V vs. Li+/Li, which is better than what obtained with a polyvinylidene difluoride (PVDF)-based electrode. The NiAl LDH electrode with SA binder shows, after 400 cycles at 0.5 A g−1, a cycling retention of 42.2% with a capacity of 697 mAh g−1 and at a high current density of 1.0 A g−1 shows a retention of 27.6% with a capacity of 388 mAh g−1 over 1400 cycles. In the same conditions, the PVDF-based electrode retains only 15.6% with a capacity of 182 mAh g−1 and 8.5% with a capacity of 121 mAh g−1, respectively. Ex situ X-ray photoelectron spectroscopy (XPS) and ex situ X-ray absorption spectroscopy (XAS) reveal a conversion reaction mechanism during Li+ insertion into the NiAl LDH material. X-ray diffraction (XRD) and XPS have been combined with the electrochemical study to understand the effect of different cutoff potentials on the Li-ion storage mechanism. The as-prepared NiAl-NO3−-LDH with the rhombohedral R-3 m space group is investigated as a negative electrode material for lithium-ion batteries (LIBs). The effect of the potential range (i.e., 0.01–3.0 V and 0.4–3.0 V vs. Li+/Li) and of the binder on the material’s performance is investigated in 1 M LiPF6 in EC/DMC vs. Li. Ex situ X-ray photoelectron spectroscopy (XPS) and ex situ X-ray absorption spectroscopy (XAS) reveal a conversion reaction mechanism during Li+ insertion into the NiAl LDH material. X-ray diffraction (XRD) and XPS have been combined with the electrochemical study to understand the effect of different cutoff potentials on the Li-ion storage mechanism. This work highlights the possibility of the direct application of NiAl LDH materials as negative electrodes for LIBs.
Introduction
Rechargeable lithium-ion batteries (LIBs) dominate the market for several decades due to their outstanding energy density, high working voltage, and long cycle life [1][2][3][4]. Graphitic carbon has long been employed as the most common intercalation-type negative electrode material. However, graphite is limited by its relatively low theoretical capacity (372 mAh g −1 ) [5]. The necessity of increasing the energy and power density has made it urgent to discover new types of anode materials with higher capacity than graphite, good rate performance, and cycling stability.
Layered metal hydroxides, like Ni(OH) 2 , have been considered as negative electrode materials for LIBs owing to their structure with large interlayer spacing that enables rapid transfer of lithium ions [6,7]. Li et al. reported promising performance of β-Ni(OH) 2 -reduced graphene oxide composites as an anode for LIBs [8]. Nevertheless, the sluggish rate capability and poor cycling stability of Ni(OH) 2 have hindered its application [9]. Compared with single metal hydroxides, layered double hydroxides (LDHs), as a class of two-dimensional anionic clays [10], show larger basal spacing owing to the pillaring of interlayer species, which can favor the intercalation of Li + . LDHs, which can be expressed as [M 2+ 1-x M 3+ x (OH) 2 ] x+ (A n− ) x/n ·mH 2 O, offer various possible valence states provided by mixed metal ions (M 2+ and This work is dedicated to the memory of Roberto Marassi, a brilliant mentor who instilled in his students the passion for electrochemistry and still inspiring the next generation of scientists. M 3+ ) in the host layer. Based on their lamellar structure that consists of positive charged brucite-like host layers and exchangeable charge-balancing interlayer anions (i.e., A n− such as NO 3 − , CO 3 2− , Cl − , SO 4 2− ) [11], LDHs possess adjustability of physical and chemical properties, which can be achieved by replacing the metal cations, tuning the molar ratio of metals, and altering the interlayer anions. Therefore, this type of layered materials find applications in many fields, including catalysis [12][13][14], biochemistry [15,16], wastewater remediation, and supercapacitors [17][18][19][20][21][22].
In the field of rechargeable batteries, LDHs are typically employed as precursors or templates for the synthesis of metal oxides, which are then used as electrode's active materials in LIBs. In fact, the application of LDH-based composites as electrode materials was reported only very recently. For instance, Shi et al. [23] presented the fabrication of CoNi LDH and graphene-wrapped CoNi LDH as negative electrodes for LIBs. The graphene-wrapped CoNi LDH composite electrode exhibits a higher reversible specific capacity of 1428.0 mAh g −1 at 0.05 A g −1 and excellent capacity retention of 75% after 10,000 cycles at 10 A g −1 in the potential range of 0.01-3.0 V vs. Li + /Li compared with the CoNi LDH electrode. Zhang et al. [24] synthesized a Co 3 V 2 O 8 @NiCo LDH material, which reveals a reversible specific capacity of 1329.4 mAh g −1 at 1 A g −1 after 500 cycles and good cycling performance (893.1 mAh g −1 at 5 A g −1 after 950 cycles). This bi-material shows improved performance compared to the pure Co 3 V 2 O 8 and to the pure NiCo LDHs in the potential range 0.01-3.0 V vs. Li + /Li. On the other hand, with the increasing need to cut the cobalt consumption that is raising sustainability and environmental concerns [25][26][27], it is crucial to explore and develop cobaltfree LDHs. With this aim, a NiFe-LDHs/reduced graphene oxide composite anode material has been synthesized by Zhang et al. [28]. This material shows an initial capacity of 602.8 mAh g −1 after 80 cycles at 500 mA g −1 in the potential range 0.01-3.0 V vs. Li + /Li. Although LDHs have been reported as novel electrode materials for lithium-ion batteries, research dedicated to understanding the Li-storage mechanism, which is fundamental for optimizing the performance, is limited. Besides, the role of the binder, which is a critical component in influencing the cycling ability and rate performance, is rarely investigated for this kind of compound [29].
In this work, we report, for the first time, the direct application of NiAl LDH with nitrates as its interlayer anion as a negative electrode material for LIBs. The NiAl LDH electrode delivers high specific capacity and shows excellent cycling stability with sodium alginate (SA) as the binder. Ex situ XAS analysis reveals that the NiAl LDH stores Li ions via a conversion-type mechanism during the discharge-charge processes.
Preparation of NiAl LDH
NiAl LDH was obtained by a facile and straightforward one-pot hydrothermal reaction. All reagents (aluminum nitrate nonahydrate (Al(NO 3 ) 3 ·9H 2 O), nickel nitrate hexahydrate (Ni(NO 3 ) 2 ·6H 2 O), and urea (CO(NH 2 ) 2 ) provided by Sigma-Aldrich) were of analytical purity and were directly used without further purification. By following the procedure in [30], a solution has been prepared by dissolving 30 mmol of Ni(NO 3 ) 2 ·6H 2 O, 45 mmol of urea, and 15 mmol of Al(NO 3 ) 3 ·9H 2 O (a Ni:Al ratio of 2:1) in 150 mL of deionized water. This solution was then transferred into a Teflon-lined stainless steel autoclave of 200-mL capacity at a temperature of 100 °C. After a 24 h of hydrothermal process, the product was filtered and washed with water and ethanol several times. The NiAl LDH was finally obtained by drying the product in an oven for one night at 70 °C.
Structural and physical characterization
The X-ray diffraction (XRD) patterns were collected using STOE STADI P X-ray powder diffractometers equipped with Mythen1K detectors with Mo Kα1 radiation (λ = 0.70932 Å). The diffraction pattern was analyzed by full-profile Rietveld refinements, using the software package WinPLOTR [31]. The instrument used for the field emission scanning electron microscope (FE-SEM) characterization was a ZEISS SUPRA 40 VP. FT-IR spectroscopy was performed using a Spectrum 65 FT-IR Spectrometer (PerkinElmer, Waltham, MA, USA) equipped with a KBr beam splitter and a DTGS detector by using an ATR accessory with a diamond crystal; the spectra were recorded from 4000 to 600 cm −1 . The X-ray photoelectron spectroscopy (XPS) was acquired using a Thermo Scientific K-alpha spectrometer. The samples were analyzed using a micro-focused, monochromated Al Kα X-ray source (1486.6 eV, 400-μm spot size). XPS spectra were recorded with a concentric hemispherical analyzer at the pass energy of 50 eV and fit with one or more Voigt profiles (binding energy uncertainty: ± 0.2 eV). Scofield sensitivity factors were applied for quantification [32] using the Advantage software package. On the electrode samples, all spectra were referenced to the CF 2 component originated from the polyvinylidene fluoride (PVDF) binder centered at 290.7 eV binding energy. Regarding the NiAl LDH-SA sample, the spectral calibration was done on the C 1 s peak (C-C, C-H) at 285.0 eV binding energy controlled by means of the photoelectron peaks of metallic Cu, Ag, and Au, respectively. X-ray absorption near-edge structure (XANES) and extended X-ray absorption fine structure (EXAFS) spectra for the Ni K-edge of the samples with the different state of charge were conducted on the P65 beamline, PETRA III, German Electron Synchrotron in Hamburg (DESY). XAS spectra were collected at Ni K-edge in transmission geometry with the continuous scan mode. The double-crystal fixed-exit monochromator was equipped with Si (111) crystals. Ex situ samples for XAS measurements were obtained by applying a constant potential to equilibrate the system after the desired potential is reached. The specific surface area of sample was determined by the Brunauer-Emmett-Teller (BET) method. The sample is heated for 4 h at 100 °C in a 500-μmHg vacuum to remove contamination on the surface. A known volume of gas is added to the sample chamber and the pressure is recorded at 77.4 K. The sample was analyzed with a multipoint measurement with a micromeritics ASAP 2020 Plus Physisorption apparatus.
Electrodes preparation and electrochemical tests
NiAl LDH electrodes have been prepared as follows: 70 wt% of the as-prepared NiAl LDH was mixed by stirring in a quartz mortar for 10 min with 20 wt% conductive carbon black (TIMCAL® Super C65) and 10 wt% of binder in the appropriate solvent. For comparison, polyvinylidene fluoride (PVDF, R6020/1001, Solvay) and sodium alginate (SA, Sigma-Aldrich) were chosen as the binders. The solvents used to dissolve PVDF and SA were N-methyl-2-pyrrolidone (NMP, GC 99.5%, Merck KGaA) and 9:1 water/isopropanol, respectively. The blended slurries were then coated on a 10-μm-thick copper foil current collector with a wet thickness of ~ 110 μm, followed by drying in an oven at 80 °C for 12 h. After that, the coated electrodes were cut into individual disks of 12-mm diameter (∼ 11-μm thickness) with ~ 0.7-mg mass loading of NiAl LDH. The electrodes with different binders are denoted as NiAl LDH-SA and NiAl LDH-PVDF, respectively. CR2032 coin cells were built in an argon-filled glovebox (MB200, MBraun GmbH) consisting of the as-prepared working electrode, a lithium foil counter electrode (15-mm diameter, Alfa Aesar), a glass fiber separator (Whatman glass microfiber filter, 675-μm thickness), and LP30 electrolyte (1 M LiPF 6 in ethylene carbonate/dimethyl carbonate in a weight ratio of 1:1, BASF). The electrochemical tests (galvanostatic charge-discharge (GCD) and cyclic voltammetry (CV)) were carried out on a multichannel potentiostat (VMP3, Bio-Logic). The electrochemical cells were kept in a binder climate chamber at 25 °C during the electrochemical experiments. Two potential ranges (0.01-3.0 V and 0.4-3.0 V vs. Li/Li + ) were employed for GCD and CV tests.
Results and discussion
The as-prepared NiAl LDH was characterized by XRD ( Fig. 1) and the crystal structure was determined by Rietveld refinement with the assistance of the FullProf software package. The XRD reflections of NiAl LDH can be indexed using a structure model of oxonium nickel oxide (ICSD 24,986). The broad reflections can be explained by phase separation with the same rhombohedral R-3 m space group, suggesting the different d-spacing between 6 and 8 Å. The FT-IR spectrum is shown in Fig. S1. The broad absorption band at 3410 cm −1 can be attributed to the O-H stretching vibrations of the hydroxyl groups and the presence of water molecules in the interlayer of LDH [22]. Specifically, the interlayered lattice water is hydrogen bonded to the transition metal slabs [33,34]. It is expected that during delithiation, the crystal water molecules remain in the interlayer space [35,36]. The weak peak at 1632 cm −1 is ascribed to the vibrations of the in-plane bending mode of water molecules. The sharp characteristic absorption band at 1348 cm −1 corresponds to Fig. 1 a Crystal structure of NiAl LDH and b Rietveld refinement of X-ray powder diffraction data of NiAl LDH polycrystalline material the stretching vibration mode of NO 3 − , which confirms the absorption of NO 3 − in the interlayer of NiAl LDH materials. The bands at 749 and 652 cm −1 are due to the stretching and bending modes of Al-OH and Ni-OH, respectively. XRD and FT IR spectra demonstrate the successful synthesis of NiAl LDH with NO 3 − in the interlayer. Based on the thermogravimetric curve (Fig.S2) and the feeding molar ratio of Ni(NO 3 ) 2 ·6H 2 O and Al(NO 3 ) 3 ·9H 2 O (a Ni:Al ratio of 2:1), the molecular formula of the as-prepared NiAl LDH is expected to be [Ni 1-x The morphology of NiAl LDH was characterized via FE-SEM, which is shown in Fig. 2. The homogeneously dispersed cluster of lamellae with 10-30 μm of diameter can be observed on the image at low magnification (Fig. 2a). The higher magnification image (Fig. 2c) shows that the NiAl LDH lamellae cluster has a flower-like structure arranged in a concentric form and it is highly nano crystallized. Besides, a thickness of the NiAl LDH lamellae ranging from 20 to 40 nm can be observed in Fig. 2d. Furthermore, N 2 isotherms of NiAl LDH have been performed to determine the surface area and pore size. The BET-specific surface area of NiAl LDH is 46.85 m 2 g −1 , and the average pore diameter is around 8.54 nm (Fig. S3).
The surface electronic state of NiAl LDH materials is illustrated by XPS, as shown in Fig. 3. Survey XPS spectrum reveals the presence of Nickel (Ni 2p, Ni 3p, or Ni 3 s), aluminum (Al 2 s or Al 2p), oxygen (O 1 s), nitrogen (N 1 s), and carbon (C 1 s). High-resolution XPS spectra of Ni 2p 3/2 , Al 2 s, O 1 s, N 1 s, and C1s are shown in Fig. 3b-f. The peak of Ni 2p 3/2 located at 856.3 eV with its shake-up satellites, as shown in Fig. 3b, is a signature of various Ni (+ II) species in NiAl LDHs [37,38]. The Al 2 s-Ni 3 s spectrum shows two peaks centered at 114.1 eV and 119.7 eV (Fig. 3c), which are signatures of Ni and Al chemical environments. The peak positions of these atoms are the signature of Ni 2+ and Al 3+ environments in the crystal structure of NiAl LDH material [39]. The O 1 s spectrum (Fig. 3d) shows signals coming from all oxygenated species at the surface. The prominent oxygen peak at 532 eV corresponds to a mix of oxygen from NiAl LDH structure and other oxygenated species from surface contamination environments (CO and COO groups) [40]. The XPS analysis demonstrates that the Ni 2+ and Al 3+ elements coexist in the product, which is in good accordance with the prediction. Concerning the C 1 s spectrum, three peaks are presented. The main peak on C 1 s spectrum at 285 eV corresponds to hydrocarbon contamination (285 eV). The peaks centered at 286.5 eV and 288.5 eV correspond, respectively, to mono-and bi-oxygenated carbon environments. Carbon species come from surface contamination of NiAl with the air environment. N 1 s spectra display one peak at 407.1 eV, corresponding to nitrates (NO 3 − ) present in the interlayer of NiAl LDHs material. The XPS analysis of the prepared NiAl LDH material determines the atoms' chemical environments present in the crystalline structure and it agrees with the FT-IR and XRD measurements made on the material.
The NiAl LDH composite electrodes based on the two binders (SA and PVDF) have been characterized via CV and GCD. Figure 4a displays the CV curves of NiAl LDH-SA electrode at 0.1 mV s −1 in the potential window of 0.01-3.0 V (vs. Li/Li + ). In the first cycle, the weak and [41]. The well-defined cathodic peak that appears at 0.17 V can be ascribed to the formation of the solid electrolyte interphase (SEI) films on the surface of NiAl LDH originated from the electrolyte decomposition. During the anodic sweeps, the peak at 1.03 V is due to the oxidation of LiH into LiOH [42]. The anodic peaks located at 1.48 and 2.26 V could be attributed to the decomposition of SEI, as well as the oxidation of metallic Ni 0 to Ni 2+ [43]. In addition, the presence of cathodic peaks at 0.73 V and 1.37 V after the first cycle indicates an irreversible structural or textural transformation, which occurs during the first lithiation [44]. It is worth noting that the CV sweep of the 3rd cycle can be well overlapped with that of the 2nd cycle, implying reversible electrochemical reactions in the process of lithiation/delithiation after the first cycle.
The galvanostatic charge-discharge performance of NiAl LDH electrode with SA binder at a specific current of 50 mA g −1 in the potential range of 0.01-3.0 V is shown in Fig. 4b. In the first lithiation curve, the long plateau at around 0.51-0.57 V is in agreement with the initial cathodic peak at approximately 0.57 V observed in the CV curve. Three sloped plateaus can be observed in the first delithiation profile, corresponding to the anodic peaks observed in the CV curve. The NiAl LDH-SA electrode delivers 2586 and 1578 mAh g −1 for the first lithiation and delithiation, respectively, indicating an initial Coulombic efficiency (ICE) of 61.3%. This low ICE is due to the irreversible decomposition of the electrolyte and the formation of the SEI layer on the surface of the active material. From the second discharge profile, two discharge plateaus at around 1.49 V and 0.85 V emerge, corresponding to the cathodic peaks related to the reduction of Ni 2+ and LiOH in the CV curves, respectively. Besides, three charge plateaus located at approximately 0.92 V, 1.46 V, and 2.22 V in the second charge profile well agree with the anodic peaks in the CV curves. Despite the capacity loss in the first cycles, a high reversible capacity of around 1500 mAh g −1 is achieved in the subsequent processes. Figure 4c and d show the electrochemical features of the NiAl LDH electrode based on the PVDF binder, which is the most widely used binder in commercial electrodes for LIBs. Similar to the results shown in Fig. 4a, in Fig. 4c, the cathodic peaks at 0.5 V and 0.22 V are due to the transformation of LiOH/LiH and Ni 2+ /Ni 0 , respectively. The cathodic peak at 0.22 V also includes the electrochemical reduction of the electrolyte with the formation of the SEI film. In the anodic sweeps, the peaks centered at 1.04, 1.45, and 2.28 V are well overlapped with that of the 1st cycle and can be assigned to the reversible conversion process of LiH/ LiOH and Ni 0 /Ni 2+ , respectively.
As shown in Fig. 4d, when the PVDF binder is utilized in the NiAl LDH electrode, the initial discharge and charge capacities of 2221 and 1577 mAh g −1 are achieved, corresponding to an initial Coulombic efficiency of 71.0%, which is higher than what obtained with the NiAl LDH-SA electrode. During the following charge process, the decomposition of the SEI film can be responsible for the rapid capacity decay of the electrode based on both binders, as reported in the literature [45]. Although the NiAl LDH-PVDF electrode delivers comparative capacity as the NiAl LDH-SA in the first and second cycles, it presents a dramatic decay at the third cycle, suggesting inferior charge-discharge stability.
The rate capability test was performed with currents ranging from 0.05 to 10.0 A g −1 and shown in Fig. 5a. The NiAl LDH-SA electrode delivers average discharge capacities of 1665, 1201, and 1051 mAh g −1 at 0.05, 0.1, and 0.2 A g −1 , respectively. When using PVDF as a binder, the average discharge capacities at the same currents are 1324, 827, and 668 mAh g −1 , respectively, significantly lower than those of the electrode with the SA binder. Furthermore, the NiAl LDH-SA electrode recovers to a high reversible capacity of 1109 mAh g −1 when the current reverses back to 0.05 A g −1 . This value is higher than the recovered capacity of 715 mAh g −1 obtained with the NiAl LDH-PVDF electrode, indicating that the NiAl LDH-SA electrode is more stable and can better withstand high currents. Figure 5b depicts the cyclic performance of the two electrodes. It is worth noting that after a significant capacity decay during the first 154 cycles, the capacities of the NiAl LDH-SA electrode increase gradually. After 400 cycles, the electrode can still retain a high reversible capacity of 697 mAh g −1 at the current of 0.5 A g −1 .
Both electrodes reach nearly 100% of Coulombic efficiency after the initial cycles, confirming the excellent reversibility of NiAl LDH and the absence of side reactions after the formation of the SEI. It is worth noting that this capacity fluctuation that appears at the 154 cycle can be influenced by [46].
The long-cycling performance of the NiAl LDH electrodes in the potential range of 0.01-3.0 V at the high specific current of 1.0 A g −1 has been further evaluated and shown in Fig. S4. A similar high initial discharge capacity (1405 mAh g −1 ) is achieved by both the electrode (independently from the binder). After undergoing a capacity decay for 170 cycles, the NiAl LDH-SA electrode starts a period of capacity increase. Stable long-term cycling has been achieved with a capacity of 388 mAh g −1 after 1400 cycles. On the other side, the NiAl LDH-PVDF electrode shows an irreversible capacity fading after 65 cycles, with poor longcycling performance.
To better understand the influence of binders and reactivity of NiAl LDH electrodes on the electrochemical response, we performed XPS analysis. C 1 s, F 1 s, O 1 s, and P 2p XPS spectra are presented in Fig. 6. O 1 s spectra collect signals coming from all oxygenated species present on the NiAl LDH-PVDF samples. Figure 6b and f reveals one peak centered at around 531.5 eV, which is characteristic of the mix of O = C and Li 2 CO 3 components in the battery system with DMC as the solvent. Three additional elements center at 530.3 eV (the combination of ROLi/LiOH), 533.6 eV (O-C and P-O environments), and 528.5 eV (attributed to O 2− anions from Li 2 O component) are presented in the spectra, respectively. Another peak that appears at 537.8 eV is assigned to Na KLL from sodium present in the SA binder, which is always present in O 1 s spectra of sodium-based samples. Figure 6c and g show the spectra of F 1 s, one component located at 685.0 eV confirming the presence of LiF (on all cycled electrode samples), and the other situated at around 687.1 eV indicating the presence of P-F component from LiPF 6 and CF 2 -CH 2 component from PVDF (only present on NiAl LDH-PVDF samples). Besides, P 2p spectra ( Fig. 6d and h) have to be fitted with 2p 3/2 -2p 1/2 doublets separated by 0.9 eV with a 2/1 intensity ratio due to spin-orbit coupling. The main doublet with P 2p 3/2 component located at 133.6 eV is the phosphates signal from the decomposition of LiPF 6 , and the other at around 137 eV corresponds to the phosphorus from LiPF 6 .
The C 1 s spectra (Fig. 6a and e) comprise one component (located at 283.3 eV) characteristic of the carbon black (CB). The three components at 286.2 eV (attributed to the CH 2 chemical environment in PVDF binder), 290.7 eV (CF 2 -CH 2 ), and 291.8 eV (CF 2 -CF 2 ) confirm the presence of the PVDF binder on the surface of the electrode [47]. The four other components at 285 eV (C-C/C-H), 286.8 eV (mix of C-O/ROLi) [48], 288.5 eV (C = O), and 290.1 eV (attributed to the Li 2 CO 3 chemical environment) are due to the degradation products of the electrolyte which are composing the SEI. These components remain on the electrodes during electrochemical measurements. For the NiAl LDH-PVDF electrode, the intensity of the components attributed to CB slightly increases after discharging to 0.22 V, indicating the thinning down of the SEI film. While for the NiAl LDH-SA electrode, the CB components decrease, suggesting a thicker up of the SEI. The change of C 1 s spectra after the full cycle reveals a thicker SEI formation on the NiAl LDH-SA electrode. This difference may be due to the gelation derived from a stronger interaction of PVDF binder and electrolyte, which hinders the further formation of SEI film [49].
XPS analysis of NiAl LDH after cycling confirms a similar SEI composition for the electrode with both binders. The species of Li 2 O (from NiAl LDH-SA samples), LiF, a mix of ROLi/LiOH, Li 2 CO 3 , and phosphates, which are decomposition products of electrolyte during cycling, are detected with XPS on all samples. Furthermore, compared with the NiAl LDH-PVDF electrode, a less amount of LiF and a higher amount of Oxygen (from oxidized species) on the NiAl LDH-SA electrode surface are found, which suggests a higher electrolyte degradation when SA is used as the binder. Fig. S5 shows the binder's influence on the concentration of the species that make up the SEI film. The use of PVDF as a binder on the electrode can promote the LiF formation, which is found in less amount on the NiAl LDH-SA electrode. In comparison, the SA binder facilitates lithium oxide formation such as Li 2 O, ROLi, and LiOH. Moreover, the quantity of these species remains stable during the first full cycle. The lower solubility of lithium oxide with respect to LiF in carbonate electrolytes brings about a stable SEI film [50]. As a result, the stable lithium oxide component in SEI film could be a dominant source of anode passivation. The thicker and stable SEI film detected when using SA as binder might be the reason for the higher irreversible capacity at the first cycle and for the better cycling stability as compared to the PVDF-based electrode.
According to recent reports [51][52][53][54], the potential discharge cutoff can influence the cycling stability. Indeed, a higher potential discharge cutoff can prevent the complete reduction of Ni 2+ to Ni 0 and can result in a different SEI. Figure 7 and Fig. S6 report the CV and GCPL curves recorded on the two electrodes in the potential range 0.4-3.0 vs. Li + /Li. The NiAl LDH-SA and NiAl LDH-PVDF electrodes show similar CV shapes during the first cycle (black curves in Fig. 7a and c). A weak peak at 1.23 V can be observed during lithiation, which is attributed to the reduction of Ni 2+ to Ni + and the simultaneous intercalation of Li + . This peak strengthens during the following cycles, suggesting the domination of the Ni + /Ni 2+ transformation. A distinct cathodic peak appears when discharging to 0.6 V, suggesting the reduction of the [42], and the other anodic peak at around 2.23 V can be correlated to the conversion of Ni into nickel hydroxide.
The weak anodic peak appearing at 1.43 V may be attributed to the decomposition of SEI film [55,56]. The reduction peak at around 1.23 V of the NiAl LDH-SA electrode shows almost no shift in the following cycles (Fig. 7a). In comparison, the corresponding reduction peak of the NiAl Fig. 7 Electrochemical performance of NiAl LDH with different binders in the potential range of 0.4-3.0 V. a CV curves at a scan rate of 0.1 mV s −1 for NiAl LDH-SA electrode; b GCD curves at a current density of 0.05 A g −1 for NiAl LDH-SA electrode; c CV curves at a scan rate of 0.1 mV s −1 for NiAl LDH-PVDF electrode; d GCD curves at a current density of 0.05 A g −1 for NiAl LDH-PVDF electrode; e rate capacity at different current densities of SA and PVDFbased electrodes; and f cycle performance and Coulombic efficiency at a current density of 0.5 A g −1 of NiAl LDH-SA and NiAl LDH-PVDF electrodes LDH-PVDF electrode (Fig. 7c), with cycling, shifts to more negative potentials, from 1.23 V to around 1.00 V, indicating an increase in electrode polarization [57,58].
The GCD performance of the NiAl LDH electrode based on SA and PVDF binders at 0.4-3.0 V was further investigated ( Fig. 7b and d). The initial discharge capacity of the NiAl LDH-SA electrode is 1465 mAh g −1 , which is higher than that of the NiAl LDH-PVDF electrode (1330 mAh g −1 ). The corresponding initial Coulombic efficiencies are 60.0% and 51.1%, respectively. Besides, the SA binder-based electrode shows more well-defined lithiation and delithiation plateaus in the subsequent sweep profiles. Figure 7e shows the rate test of the NiAl LDH electrode with the two different binders at current densities from 0.05-10.0 A g −1 . The average discharge capacities of the NiAl LDH-SA electrode are 957, 679, and 559 mAh g −1 at 0.05, 0.1, and 0.2 A g −1 , respectively, and a reversible capacity of 526 mAh g −1 is recovered when the current rate returns to 0.05 A g −1 after 45 cycles, corresponding to a 58% of retention at 0.2 A g −1 . As a comparison, the NiAl LDH-PVDF electrode provides an average capacity of 774, 474, and 356 mAh g −1 at 0.05, 0.1, and 0.2 A g −1 , respectively, along with a lower restored capacity at 0.05 A g −1 after 45 cycles (343 mAh g −1 ) and lower retention at 0.2 A g −1 (46%). Figure 7f shows the long-term cycling experiment on the two electrodes in the restricted potential range at 0.5 A g −1 .
The NiAl LDH-SA electrode delivers a higher initial discharge capacity (607 mAh g −1 ) and Coulombic efficiency (65.0%) than the NiAl LDH-PVDF electrode (417 mAh g −1 and 56.7%, respectively). However, in the potential window 0.4-3.0 V, in contrast to what was obtained in the extended potential region of 0.01-3.0 V, both electrodes suffer from poor cycling retention (< 10% after 400 cycles). These results demonstrate that the small cutoff potential of the NiAl LDH electrode can dramatically affect the delivered capacity and the cycling stability. This finding is the opposite of what was observed with another LDH composite electrode (CoFe LDH with nitrates in the interlayer) tested in 1 M NaCF 3 SO 3 /Diglyme on sodium-ion batteries, where the electrode at a potential window of 0.4-3.0 V vs. Na + /Na resulted in improved stability compared with the CoFe LDH electrode in a smaller cutoff potential (0.01 V vs. Na + /Na) at 1 A g −1 after 200 cycles [52].
Furthermore, CV at different scan rates of NiAl LDH electrodes at 0.4-3.0 V was performed (Fig. S7) to better understand the electrochemical reaction kinetics. With increasing the scan rate, the CV shape is retained and the peak positions ( Fig. S7a and b) gradually change, which suggests low resistance and mild polarization [59]. Bulk ion diffusion controlled and capacitive effect behaviors can be revealed from the equation describing the relationship of peak current (i) and scan rate (v) [60,61]: A b value of ~ 0.5 indicates a diffusion-controlled process, while a value ~ 1 indicates a capacitive surface-controlled effect. The parameter b can be expressed as the slope of the log(v)-log(i) plot. Regarding the SA binder-based electrode, the b-values of anodic peak 1 and peak 2 are 0.92 and 0.90, respectively (Fig. S7c). The b-values of the anodic peak 1 (0.84) and peak 2 (0.85) of the PVDF binder-based electrode are consistent with that of the SA-based electrode, revealing that the intercalation of Li-ion within the NiAl LDH electrode is a surface-dominated process, independently from the binder.
To understand the redox processes occurring on the NiAl LDH electrode during the 1st cycle, XAS measurements were performed on ex situ samples. Normalized Ni K-edge XANES spectra and Fourier transform (FT) of the recorded EXAFS spectra are displayed in Fig. 8. The ex situ samples consist of lithiated (0.4 V, 0.01 V) and delithiated (3.0 V after 0.4 V, 3.0 V after 0.01 V) NiAl LDH electrodes. The current used to bring the electrodes to the desired potential is i = av b Fig. 8 a Normalized XANES spectra at Ni K-edge and, b Fourier transform of the recorded EXAFS spectra collected on ex situ samples 0.05 A g −1 . The Ni K-edge (Fig. 8a) of the NiAl LDH shifts to lower energies after lithiation to 0.01 V, confirming the conversion reaction Ni 2+ → Ni 0 at this potential. According to FT EXAFS spectra, the pristine NiAl LDH shows two distinct peaks at around 1.4 and 2.7 Å, corresponding to the 1st and 2nd Ni coordination shells in the initial structure, accordingly. The Ni metal phase is obtained when the peak corresponding to the Ni-Ni band in the metal structure appears at around 2.1 Å on the FT EXAFS spectra (sample lithiated to 0.4 V). In agreement with the ex situ XRD data, this result shows that the appearance of the Ni metallic fraction is accompanied by the amorphization of the initial NiAl LDH structure (Fig. S8). Based on the ex situ XAS and XPS measurements, by stopping the electrochemical reduction to 0.4 V, the SEI is not completely formed and the conversion reaction is only started but not completed. This can reflect the worst stability of the electrodes cycled at the cutoff potential of 0.4 V. On the other side, when the sample is discharged to the lowest potential of 0.01 V, the diffraction reflections of another product of the conversion reaction of LiOH can be recognized. Besides, the pristine NiAl LDH material does not return to the initial oxidation state after one cycle and the Ni remains slightly reduced, explaining the fast capacity loss of NiAl LDH. The mechanism of the electrochemical reaction occurring during the first cycle can be proposed as two steps process: 1) Intercalation (corresponding to the potential range of 3.0 V-0.5 V) 2) Conversion (corresponding to the long plateau at around 0.5 V)
Conclusion
In summary, NiAl LDH material with NO 3 − as the interlayer anion was applied as a negative electrode on lithiumion batteries and the role of the binder (PVDF and SA) and of the potential cutoff was evaluated. The NiAl LDH electrode with SA binder shows a high capacity and more LiOH + 2Li + + 2e − ⇌ Li 2 O + LiH stable cycling ability than the electrode with PVDF binder. A higher amount of lithium oxide component at the surface of the NiAl LDH-SA electrode is detected by XPS. Since Li 2 O is less soluble in carbonate-based solvents than the LiF (detected as a major SEI component in the PDVF-based electrode), the resulting SEI should be more stable and explains the better cycling ability of the SA-based electrode. Furthermore, the NiAl LDH electrode at the discharge cutoff potential of 0.01 V can achieve a longer cycling life in comparison to the electrode at a discharge cutoff potential of 0.4 V owing to a complete conversion reaction and a complete SEI formation in the potential range of 0.01-3.0 V (vs. Li/Li + ). Ex situ XAS confirms that the NiAl LDH stores Li + via a conversion mechanism (Ni 2+ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/. | 8,682.8 | 2021-07-21T00:00:00.000 | [
"Materials Science"
] |
Picosecond transient absorption rise time for ultrafast tagging of the interaction of ionizing radiation with scintillating crystals in high energy physics experiments
Here we report the first results of a search of a signature for picosecond time stamps of the interaction between ionizing particles and transparent crystalline media. The induced absorption with sub-picosecond rise time observed in a cerium fluoride scintillation single crystal under UV excitation is directly associated with the ionization of Ce3+ atoms in CeF3 crystals, and the very fast occurrence thereof can be used to generate picosecond-precise time stamps corresponding to the interaction of ionizing particles with the crystal in high energy physics experiments.
Introduction
There is an upcoming demand for a new generation of detectors allowing high time resolution. To develop a new generation of ultrafast detectors, especially in experimental particle physics at high luminosity colliders, the processes permitting a measurement precision better than 10-20 ps for the interaction time between ionizing radiation and a detecting medium need to be identified. An important factor when considering the interaction of high-energy charged particles and γ-quanta with detectors is the size of the interaction region. The energy deposit in the detector medium is due to several processes, including ionization, bremsstrahlung, and pair creation along the particle's trajectory. Moreover, a relativistic particle moving with a speed v ∼ c in a transparent detecting medium whose refractive index is greater than unity produces Cherenkov radiation. In order to reduce the interaction region and have a compact experimental setup in high energy physics experiments, one has to use detectors with short radiation X 0 , and nuclear interaction R H lengths and small Moliere radius R M . Inorganic scintillation materials meet these requirements in many ionizing radiation detectors [1]. In the last forty years, the application of crystalline scintillation materials in ionizing radiation detectors in high energy physics has played a crucial role in the discovery of the properties of matter and promoted a continuous progress in the detecting techniques. From small detectors based on NaI(Tl), CsI(Na), BaF 2 , PbF 2 , and Bi 4 Ge 3 O 12 , the experimentalists came to the gigantic Electromagnetic Calorimeter of CMS Collaboration at LHC, consisting of 11 m 3 of PbWO 4 scintillation crystals. The high quality of the CMS PWO electromagnetic calorimeter and its good performance since the operation start in the LHC irradiation environment has allowed the discovery of a new boson [2]. Nevertheless, the physics of the scintillation development [1] imposes certain limitations on the scintillation materials application in future experiments, especially in high luminosity collider experiments, where a high time resolution of the detectors to mitigate the pileup will be required [3,4]. The authors of [5] considered some features of the possible emission and absorption of optical photons which appear at the early stages of the interaction process between ionizing radiation and crystalline material. They occur when the density of states has some peculiarities, such as a gap in the bottom of the conduction band. Figure 1 illustrates such distribution of the electron density in a crystalline medium of interest. Here, in addition to possible radiative transitions, such transient states may cause electronic absorption transitions if the crystal is illuminated by an external light source. Thus, in addition to hot intra-band radiative transitions, the absorption transitions can be observed as soon as the population of the lower levels -1 - of the conduction band is significant, which usually takes place in less than 10 −12 s [1]. Thus, optical absorption from the lowest excited state of the matrix can give a time stamp with a precision close to 10 −12 s, or even better. However, the observation of this absorption must be performed before free carriers are captured by the luminescent centers and/or shallow electron traps. The time scale of this process is comparable to the scintillation rise time of a typical scintillation crystals doped with Ce 3+ activators and does not exceed 100-300 ps [6]. The observation of such transient absorption and the choice of the proper material need further investigation. It should be mentioned that the fast rising transient absorption in the infrared range was observed at two-photon band gap excitation in CsI crystals [7]. It was associated with the transitions in a self-trapped exciton created after the excitation.
Samples, experimental technique and results
In the present work, we choose cerium fluoride, a crystal with short X 0 and R M , one of the candidates for detectors at LHC with high luminosity [8]. A CeF 3 single crystal has a density of electronic states in the bottom part of the conduction band, which ideally corresponds to the requirements described. The bottom of the conduction band is formed by d orbitals of Ce 3+ ions with some admixture of f and s states of Ce 3+ . The partially filled f orbital of Ce 3+ is about 3 to 4 eV higher than the top of the valence band formed by the filled p states of F − [9]. The authors of [10] measured the optical absorption from the 4f level of Ce 3+ ions in a CeF 3 crystal and identified gap between the 5d Ce 3+ electronic levels and other bands in the upper part of the conduction band. Five absorption bands in the range from 247 to 194 nm correspond to the absorption transitions from the 4f level of Ce 3+ to the five components of the 5d level split by the hexagonal crystalline field and vibronic interaction. There is also a transition near 172 nm, assigned to 4f-6s transition [11]. Thus, the bottom state of the conduction band corresponds to the d state of Ce 3+ -2 - 3 . Arrows indicate different absorption processes: 1laser absorption (263 nm), 2 -absorption from the lowest excited state (this transition is also involved in the up-conversion process), 3 -absorption by higher excited states populated after the recombination of Ce 4+ with electrons generated in the up-conversion processes, 4 -emission from the lowest excited state, also a part of the up-conversion process resulting in the ionization of Ce 3+ to Ce 4+ + e, 5 -fast non-radiative relaxation, 6 -delayed non-radiative relaxation.
with zero-phonon position near 280 nm. The energy of the allowed d (with strong admixture of p)-s transition from the lowest excited d state of Ce 3+ is expected to be in the range of 400-420 nm. Due to a wide forbidden zone the crystal itself has no optical transitions in the visible range. Figure 2 shows the possible optical transitions in a CeF 3 crystal.
A CeF 3 crystal sample of 3 mm thickness was excited by femtosecond laser pulses in the UV range, the estimated absorption coefficient of the 263 nm radiation was at the level of 5000 cm −1 . This excitation provides the excitation of Ce 3+ ions to the lowest 5d excited level.
The experiment was carried out with a pump-probe spectrometer based on a custom-made original femtosecond Ti:Al 2 O 3 pulsed oscillator and a regenerative amplifier, both operating at 10 Hz repetition rate [12]. The pulse duration and the energy of the Ti:Al 2 O 3 system after the amplifier were 140 fs and up to 0.5 mJ, respectively, tunable over the spectral range from 770 to 820 nm. The pulses of the fundamental frequency (ω) at the output of the amplifier (790 nm output wavelength was set for the present study) were divided into two parts in a 1:4 ratio ( figure 3). The beam with higher intensity was converted to the third harmonic (λ ≈ 263 nm, E up to 12 µJ) and used as the excitation pulse. Pulse energy was chosen to prevent sample surface damage by intense laser pulses. The smaller part of the fundamental frequency, after passing the delay line, was used as a white supercontinuum probe pulse generated by focusing it into a 1 cm long watercontaining cell. By using a semi-transparent mirror, the supercontinuum radiation (360-1500 nm) was subdivided into two pulses (reference and signal) of similar intensity, and then they were focused on the sample by means of the mirror optics. The reference pulse is required to eliminate an impact of shot-to-shot instability of supercontinuum. It passes the sample always before the -3 - excitation. Induced change of optical density is calculated by the formula: where E sg , E * sg and E ref are energies of signal pulses passed the sample before and after excitation and the reference pulse respectively.
The sample's surfaces were carefully polished to prevent the surface damage during the illumination with ultra-short pulses. The beam spot diameter was 0.5 mm. The spectra of both pulses were recorded for each laser shoot and processed by a system including a polychromator equipped with a CCD-camera. The absorption spectra of light from the white supercontinuum were measured in the spectral range from 390 to 700 nm.
Three wide bands were observed in the spectral range from 390 to 700 nm, as seen in figure 4. The narrow dip at 400 nm is due to scattering of the second harmonic of the fundamental frequency and does not relate to the measured sample. It is detected due to the fact that outer absorption filter in the 3rd harmonic generator (figure 3) does not absorb completely the second harmonic of the fundamental frequency. The first peak near 410 nm has a rise which coincides with the rise time of the excitation pulse and is shorter than 1 ps. Assuming that it corresponds to fast populating of the lowest d level of Ce 3+ ions, following the excitation pulse, we correlate this band to the absorption of the probe pulse from Ce 3+ d to Ce 3+ s states. Two other wide bands, peaked near 560 and 630 nm have a slightly higher level of optical density. Figure 5 shows the kinetics of the transient absorption in short (0-200 ps) and long (0-2000 ps) scales. Their intensity profile shows a further growth in the time period (0-400 ps) after excitation, whereas the short wavelength band reaches a constant value of optical absorption in the time scale of 0-200 ps. Approximation of the kinetics curves with sum of exponents showed that long wavelength bands have a rise time at the level of 200-300 ps, so one can conclude that the corresponding electronic levels population is still increasing after the direct excitation.
-4 - One possible explanation of this long wavelength absorption is the up-conversion processes resulting in the ionization of the cerium ions and the population of the upper 5d states. The concentration of excited cerium ions is rather high (about 5 × 10 17 cm −3 for the described experimental conditions), and the energy transfer between the excited Ce 3+ * ions may occur, leading to the ionization of the excited Ce 3+ * : Ce 3+ * + Ce 3+ * → Ce 3+ + Ce 4+ + e. This process is analogous to the concentration quenching of excitons in CdWO 4 [13] and is facilitated in CeF 3 , since the diffusion coefficient of excitations over the Ce 3+ subsystem is rather high due to the overlapping of the emission and excitation spectra in CeF 3 . The recombination of the electron with Ce 4+ has an almost instantaneous component due to the geminate recombination and a delayed component due to the diffusion of the electron to the ionized center. In LiYF 3 :Ce, this delay can be as long as a few nanoseconds [14]. After the recombination of electrons with ionized Ce 4+ , all 5d levels are populated, thus producing the delayed induced absorption in the long-wavelength domain. Also, we do not exclude that these rising long-wavelength components are connected to shallow traps. They may be due to the population/depopulation of very shallow electron capturing centers created by deformations due to vacancies in the nearest coordinating spheres. Such shallow levels may be created by the d-f states mixture with a predominant f density component. If their depth is less than 0.2 eV, the two observed bands ideally correspond to the electron transfer from their ground state to two upper Stark components of the 5d level of Ce 3+ ions. Approximation of the kinetics curves by sum of exponents showed that in a longer time range, all bands show a decrease of the optical density with a time constant near 2.5 ns, indicating a decrease in the population of the corresponding level. It correlates well with the time profile of the depopulation of the lowest Ce 3+ conduction -5 - band level related to the interconfiguration d-f luminescence transition. It is well known that the initial stage of the luminescence kinetics curve of CeF 3 crystals contains a short component with a decay constant at the level of 3-5 ns [15].
In order to exploit the observed ultrafast rising transient absorption and create precise time stamps related to the interaction of ionizing radiation with the crystal in forthcoming detectors, CeF 3 crystals in the detector have to be probed with 1 to 5 picosecond laser pulses with a wavelength of 400 nm. Wavelength of the probe pulse does not overlap scintillation of CeF 3 which peaked at 330 nm [1]. So scintillation and probe pulses can be discriminated by spectrum. Set of -6 -the short laser pulses with a total duration 500-1000 ps consisted of several hundreds 1-5 ps pulses and synchronized with collision of particles is required to scale time intervals between probe pulses at the level of several picoseconds. Frequency of the laser pulse sets should be similar to frequency of the particles collision, of about 40 MHz. Such laser sources can be constructed from commercially available optical components and lasers.
Conclusions
We observed the induced absorption with a sub-picosecond rise time in a cerium fluoride single crystal under UV excitation causing the transfer of electrons to cerium excited states. One of the detected bands with a maximum near 400 nm corresponds to the transient absorption from the lowest Ce 3+ excited state to the upper levels in the conduction band, whereas the two other bands with maxima near 560 and 630 nm most probably appear due to up-conversion processes or the action of shallow electron traps. These processes are directly associated with the ionization of Ce 3+ atoms of CeF 3 crystal and the very fast occurrence of this absorption can be used to generate picosecond precise time stamps of the interaction of ionizing particles with the crystal in high energy physics experiments. | 3,390.8 | 2014-07-11T00:00:00.000 | [
"Physics"
] |
Narrative Generation in Entertainment: Using Artificial Intelligence Planning
From the field of artificial intelligence (AI) there is a growing stream of technology capable of being embedded in software that will reshape the way we interact with our environment in our everyday lives. This ‘AI software’ is often used to tackle more mundane tasks that are otherwise dangerous or meticulous for a human to accomplish. One particular area, explored in this paper, is for AI software to assist in supporting the enjoyable aspects of the lives of humans. Entertainment is one of these aspects, and often includes storytelling in some form no matter what the type of media, including television, films, video games, etc. This paper aims to explore the ability of AI software to automate the story-creation and story-telling process. This is part of the field of Automatic Narrative Generator (ANG), which aims to produce intuitive interfaces to support people (without any previous programming experience) to use tools to generate stories, based on their ideas of the kind of characters, intentions, events and spaces they want to be in the story. The paper includes details of such AI software created by the author that can be downloaded and used by the reader for this purpose. Applications of this kind of technology include the automatic generation of story lines for ‘soap operas’.
Introduction
Artificial Intelligence (AI) refers to the intelligence demonstrated either by machines, or in this case, software, and incorporates functions such as reasoning, planning, learning, natural language communication, perception and robotics.Through advancements in artificial intelligence we have been able to accomplish many things, from performing tasks that would otherwise be dangerous for humans to take on, to automating mundane and tedious types of work.Through AI we will be able to tackle a vast assortment of more challenging and interesting problems in the future, such as more accurately predicting the weather, providing effective driverless transportation and eventually creating robotics so advanced that they will be able to rival the thought process and complexity of the human brain (Turner, n.d.).
A particular area of interest with room to grow is in finding new and interesting ways to use these concepts to provide a more automated process and intriguing experience in entertainment.With this in mind, this paper concerns the automatic narrative generation for storytelling.
The work behind this paper has an eventual goal of enabling software to autonomously generate dialog amongst characters using only a set of predefined attributes.For example, a user could create characters each with their own set of parameters including gender, age, experiences, personalities, ambitions, goals and different relationship variables between each other.The AI software would generate dialog between them intelligent enough to flow based on the subject of the conversation, with each character involved producing a meaningful response to what was said previously for a more natural simulation.
In order to bring this type of functionality to a software application, the idea must be brought back to the most basic aspects of storytelling.This is known as the fabula of a story, and consists of the most basic elements.These elements generally include actions, or a sequence of events that are carried out between characters and other objects in the story to create the story's skeleton.Therefore, this project aims to use various techniques in AI to allow a user with no previous programming or technical experience to shape a story based on their own ideas and story elements.
The Problem
Storytelling is traditionally produced manually by talented writers with both a natural ability, as well as skills developed during their education.These stories are for the most part developed using a writer's experiences, personality, morals, ambitions and perhaps most important, their creativity.Creative writing is therefore a process that takes quite some time to complete, and varies heavily based on the writer.
Many of these prerequisites required for creative story-telling can be artificially reproduced using predefined variables, allowing a piece of software to intelligently create narration between characters and tell a story.
Targeted Audience
The audience for this product may include companies in the entertainment industry, such as television programs or children's networks with cartoons.For example, this software could be used to take characters from television series and create unique scenarios between them to be used for episodes.Similarly, a film script could also be generated using predefined characters and situations between them.This technique would most likely be more successful with drama genres, as they focus heavily on character development and interaction rather than action sequences.
Stories in video games could also be uniquely generated using this concept.Game developers could use the same character techniques to develop stories and interactions between different characters in the game.This could provide a unique story and user experience for each user that plays the game, offering new levels of re-playability, an important aspect to consider when purchasing a game.
This idea could also be used to generate storytelling for short books or comics.Simple stories used in children's books or interactive comics and graphic novels used on smart phones and tablets could also be uniquely generated with this method.
Research Groups
The work of three particular research groups are worth highlighting.They are currently working within a similar field of artificial intelligence in entertainment while incorporating human psychology and interaction.These research groups are all currently active and have undertaken many projects in various aspects of content generation.
Teeside University Dr Julie Porteous leads a research group at Teeside University that looks into intelligent virtual environments (IVE).The group specifically deal with Human-Centred Multimedia and Human-Centred Interfaces, and are mostly involved in entertainment computing, with some work in health informatics.This group is most noted for their work on IRIS, or Integrating Research in Interactive Storytelling.This project was aimed at achieving breakthroughs in the understanding of interactive storytelling and the development of the necessary technologies involved.It was designed to make interactive storytelling technologies in regards to performance and scalability in order to support the production of real interactive narratives.It also aimed to make the new generation of these technologies more accessible to authors and other types of content creators of various disciplines.
The group is currently working on a project called MUSE, or Machine Understanding for Interactive Storytelling.This project will introduce a new method of navigating through and understanding information through three-dimensional interactive storytelling.This system can take in text-based input of natural language and process it into knowledge representing characters, their actions, plots and the world around the characters.These are then rendered as 3D worlds that the user may navigate through with interaction, re-enactments and gameplay.(IVE Lab, 2011).
Liquid Narrative Group
The Liquid Narrative Group in North Carolina State University works with procedural content generation allowing them to create content for games and other virtual environments.Similar to this project, they use models of narrative to build stories and tell them automatically.At the core of their work and research is Narrative Structure and Comprehension, which looks into computational models of narrative, its structure and learning how we build mental models of stories while we produce and comprehend them.
The group is also responsible for a project entitled Interaction in Automatically Generated Narratives, involving the creation of interactive experiences within a progressing story.It required the development of stories allowing user input to dynamically alter the actions that take place during the story line.This project was ended in 2005 (Liquid Narrative Group, 2014).
USC Institute for Creative Technologies
The final research group is Jonathan Gratch and his group from USC Institute for Creative Technologies.As a computer scientist and psychologist, his research consists of developing human-like software agents for virtual training environments.These methods are used to create psychological theories of human behaviour.He specifically investigates how algorithms can be used to control human behaviour in virtual environments.
One particular area of interest is emotion modelling, where Gratch looks into the role emotions play in the believability and immersion of a simulated story's world.In the Emotion Project, the group develops models that allow artificial characters to display an emotional response to events that occur within the story's world and respond with actions and behaviours that are consistent with that of humans in an emotional state.
In the Virtual Human Project, research in intelligent teaching, natural language generation and recognition, interactive narrative, emotional modelling and graphics and audio are all combined to provide a realistic and compelling training environment.The virtual humans can interact with the trainee and provide emotional responses to their actions (Gratch, 2014).
Existing Systems
Generator of Adaptive Dilemma-based Interactive Narratives (GADIN) The GADIN (Generator of Adaptive Dilemma-based Interactive Narratives) system, developed at the University of York, is one method currently used in interactive story generation, and is currently being used in games and television.Its purpose is to satisfy essential criteria for interactive narrative used by other systems that would otherwise be incapable of being addressed.
With this system implemented, the story creator is only required to provide basic information regarding the domain background, such as information on characters and their relationships, actions and problems.Instances are then created of these problems and story actions and a planner generates a sequence of actions that lead up to a problem for the characters involved (which in this case, can also be the user).This goes a step further by allowing the user to provide input on choosing their own actions, allowing the system to adapt future storylines according to past behaviour.The effectiveness of the stories generated in this way are evaluated using criteria such as interestingness, immersion and scalability (Barber, 2008).
Suspenser
Suspense is a very important aspect of storytelling by readers and listeners.While there has been extensive research into ways of automating narrative, the subject of adding depth to these stories, such as suspense to evoke cognitive and affective responses by readers is severely lacking.
Suspenser is a specific framework in development by the Liquid Narrative Group at North Carolina State University designed to help research and develop a system that can produce a narrative designed specifically to evoke suspense from the reader.Similar to GADIN, this system can take in a data structure containing a plan comprised of goals of the story's characters and actions that they can perform in pursuit of these goals.The system uses a plan-based model of comprehension to determine the best way to output the final content in the story to best manipulate the reader's level of suspense.This is done through adopting theories developed by cognitive psychologists.
Suspenser takes three elements as input before creating a story, the first being the fabula, or the raw material of a story.The second input is a point t in the story's plan corresponding to a particular point where the reader's suspense will be measured.The final is the desired length of the story, so that story elements, actions and suspense can be adjusted accordingly.With this information the system can determine the sjuzhet, which refers to the way the story is organised.More specifically in this case, it includes the content of the story up to the point t allowing the reader to infer a minimum number of complete plans for the character's goal.This is done in accordance with psychological research on suspense.(Cheong & Young).
Actor Conference (ACONF) Narrative plays an important role in understanding the events of our lives on a daily basis.The ability to generate this narrative automatically can have a huge impact on virtual reality systems designed for entertainment, training or education.This concept is complicated by two main problems: plot coherence and character believability.The coherence of the plot refers to the appearance that events in the story all lead towards some outcome or goal.Character believability refers to the appearance that the events in the story are driven by the attributes of the characters within the story.There are currently many systems capable of achieving only one of these goals, but the Actor Conference system (ACONF) presents a different approach to automatic narrative generation with the ability to generate stories with both problems addressed.
The ACONF system is specifically designed to exploit the advantages of both character-centric and author-centric techniques and ideas to achieve both problems of plot coherence and character believability.It uses a decompositional, partial-order planner to develop and assemble a sequence of actions that make up the story.These actions represent the way the characters will behave as part of this story.Using a planner for this allows for the identification of casual relationships between different actions, as well as providing an ordered sequence of operations as output that can be directly executed by agents (or characters) in this virtual world.(Riedl & Young, 2003).
AI Techniques underlying ANG
The main AI technique we will focus on is automated planning, as this process is what builds the story from its components.Automated planning or AI Planning is a part of the field of artificial intelligence that involves the automated solving of problems through the generation of action sequences or strategies.Plan generation is followed by plan execution by intelligent agents such as autonomous robots and vehicles.Planning solutions are often complex, and are constructed or discovered using various types of algorithms.A basic planning problem has a given start state, goal conditions and set of actions that may be carried out by the intelligent agent.A sequence of actions leading from the start state to the goal is then discovered.The aspects of the problem that are uncertain are the effects of the action, knowledge of the system state and a sequence of actions that may guarantee the achievement of the goal.
Similar to automatic narrative generation, interactive storytelling researchers have been using planning systems and has become the most common approach.Planning provides a well-rounded approach to this problem for many reasons.Narratives, which are also the basis for interactive storytelling, may be broken down into three levels.The lowest level is called the fabula, and is defined as 'a series of logically and chronologically related events' (Boutilier).Because planning consists of a series of actions that work towards a goal, it represents a good model for the fabula.
Planning Domain Definition Language (PDDL) is the language used to define and execute AI Planning, and is generally used as standard.It uses a combination of two files, including the 'domain', which holds all of the declarations needed.This includes variables of different types called predicates, as well as actions that can be carried out.The second file is called a 'problem file', which uses the variables and actions declared in the domain file along with a 'start' and 'end' state.When used with a planner, the problem file will declare the initial state of the problem, and the planner will use the domain declarations to make legal decisions governed by that file to achieve the end state (McDermott, et al., 1998) (see appendices 12-1 and 12-2 for this project's PDDL domain and problem files).The use of PDDL therefore dictates a host of other requirements to ensure the project can be successful.
User Requirements
This project is not governed by any specific user (individual or company), and therefore has user requirements specified by myself, as well as the recommendations of this project's supervisor.These requirements for the software prototype portion of this project are as follows:
Tools
Using Prolog logic programming language is essential to creating a prototype for this product.Prolog is a logic programing language that will allow me to declare relations with specific facts and rules, as well as create domains and problems that can be tested using search algorithms.As discussed earlier, PDDL will be the main method of testing problem files and programming characters with many different attributes.
For a possible user interface, Java will be required to ensure it can be as effective as possible.It will require looking into ways of having Java communicate with other types of necessary languages to allow the creation of characters and attributes in this way.Doing so will allow any user, even without a programming background, to be able to create characters for storytelling.
It will also be required to learn methods of connecting the planning engine to a shell to translate the output into natural language for demonstration purposes.
Software
Eclipse IDE will be used to program in Java for a user interface allowing the creation of characters.If time permits, an interface will be created that users can use to create characters, attributes and relationships that can be translated into PDDL using Java.Java with Swing should allow for a user-friendly end product that would be relatively straight-forward to understand for the average user.Java is also cross platform and can communicate with many other programming languages, keeping options open to unexpected requirements as progress is made through the project.
Planners will also be utilized to plan out steps from storytelling problem files and provide a possible output solution for a story.This will need to be translated into a script using Java.
Product Specification
Product Specification Interface The prototype will include an interactive interface to allow a user to create their own characters and other story elements with specific attributes and relationships to generate their own stories with the help of planners.This would be ideal, as it would prove that any number of characters and scenarios could be generated using this idea, and show that anyone, with or without a development background, can be capable of producing content in this way.
Data Definition
The software prototype will be capable of receiving user input in the form of story element objects and characters and their relationships between one another.These variables will be predefined to ensure that they have the appropriate effect on the characters and the story's content.This will output a planning problem in PDDL with various specific states and parameters used to define the particular story.Actions may also be defined to provide specific effects of particular events that may occur.
These PDDL files will be able to be parsed by various planners available across the web.An attempt to integrate one or more planners into this application will be made to try and keep everything in one area and application.Once a plan is created, the software will be able to parse the steps and adjust it into the form of a narrative with a clearly define sequence of events, with variables and solutions based on the user's initial input.
Once content is generated, the application will attempt to be hooked up to a shell capable of processing text into natural language.This shell will then be capable of reading out the dialog with different characters for demonstration purposes.
Services
The final product of this project will be a software prototype designed to prove the concept of automatic narrative generation based on a set of predefined attributes for characters, goals and stories.This application will be programmed using characters with variables including gender, age, experiences, personalities, ambitions, goals and different relationship variables between each other.Ultimately, stories generated in this way will be able to be read out loud by natural language processing software.
Development Method
This project will be developed using a software prototyping methodology.A prototype in this case refers to a piece of software that is in its early stages designed to test a concept or process that can be learned from.This is incomplete from a final piece of software as it only focuses on achieving specific goals within a future larger project.The process of software prototyping therefore involves creating prototypes built to simulate core aspects of the final application that is in development.
This type of software methodology is useful when assessing the requirements that will make the final product successful.It also allows developers to test the feasibility of the product without having to construct the entire system, thus saving time and money.
This process can be broken down into four phases: 1. Identify Initial Requirements.
Also referred to as a prototype plan, this phase is used to determine the basic objectives of the prototype including the necessary input and output.From here the prototype's functionality may also be defined.
Development.
During this phase the first prototype is developed as an executable piece of software containing the user interfaces and possibly very limited functionality.
Evaluation.
In the evaluation phase, everyone involved with determining the final product, including customers and end-users, review the prototype and provide feedback for changes and additional functionalities to ensure the development is going in the right direction.4. Enhancement.
Based on the feedback received the software specifications and prototype can be improved.This means that steps 2 through 4 may be repeated as necessary.
Prototyping may also be broken down into several different types, including throwaway, evolutionary, incremental, operational and extreme prototyping.This project will focus on using throwaway prototyping.
Throwaway prototyping involves creating a model of the core functionality of the system at an early stage in development.Once all necessary preliminary requirements are gathered and understood, a simplified working version of the application is created to visually demonstrate the capabilities of the system.
This type of prototyping can be completed quickly and is useful if some aspects of the requirements specification is not fully understood.This means that these requirements can be explored and identified in depth and tested very quickly before any heavy development takes place.
Generally, once a working model is demonstrated and agreed upon, it is essentially discarded to begin formal development of the system.As the final goal is to create a prototype, the development process will terminate after the prototyping stage.
Prototyping does have its disadvantages however.Its insufficient analysis can sometimes cause developers to lose focus on the final solution by overlooking superior solutions that may be better to maintain.Developers may also spend too much time and money developing a prototype, or become attached to it making it difficult to throw away.In this case sometimes developers would try to alter prototypes for use as a final product when it does not provide an acceptable underlying architecture.If moving onto a final product, these problems would have to be taken into consideration.However, in this case, throwaway prototyping is perfect for ensuring that the core goals of the product are understood, and can prove the concept visually and audibly to users of the system (Beaudouin-Lafon).
Use-Case Diagram
Figure 1 refers to a Use-Case diagram, which outlines all of the various actions that can be carried out by the user, and how they are achieved using both the software itself and an additional planner.A more in depth analysis of how every aspect of the system works together is described in the Implementation section of this document.
Implementation
After research into the feasibility of producing a product on this scale, even as a prototype, some compromises had to be made.Rather than creating an application with the ability to create characters with different personality traits, research lead me in a different direction with a heavy focus on what defines a story.This implementation therefore revolves around the ability for anyone without programming to build a short story using the programmed interface.These stories are created using a user interface to add story elements, from which a PDDL problem file is created.Together with a predefined domain, a planner is used to find a solution that can be converted into a narrative-like structure.
Structure
Stories are comprised of 8 main elements, including characters, places, things (objects), information (knowledge), goals, actions, a protagonist and an antagonist.
The Story class has therefore been designed to hold this type of information, with array lists used to hold groups of characters, places, things and information.
The Character class has been designed to hold extra information, including a name, a current location (place), a list of friends (characters), a list of things in their current possession, a list of objects they like, a list of objects they would prefer not to get rid of, and a list of information that they may have.All of these attributes can be added or removed by the user when adding characters to a story.
The Thing class has a similar design, allowing each individual item to have a name and an initial location.The Place and Info classes simply contain a field for an appropriate name.The Goal class contains three Strings to be used for parsing, which include the first party involved (a Character or Place), the second party (Character, Thing, Info or Place depending on the first party), and a conjunction String such as 'has', 'at', 'knows', etc.
User Interface The user interface begins with the main window for the ANGgui class, which is also the central location for creating and controlling every aspect of a user's story.This can be seen in Figure 2 as a full story with all elements filled in and generated.To begin, a user may use the 'My Stories' panel to add or remove a story, while giving it a name.The panel contains a list of all individual stories created by the user.Selecting a story will update all of that story's information in all other panels to the right of the story.
Once a story is created, the user may add any other item of their choosing.It is recommended to start with places, since both characters and things can have an initial location.Pressing the add button in the Places panel with display a simple window asking for the name of the place.Once added, the added places will appear in the list, and be added to the selected story.These places can be removed from the story and list by hitting the remove button.The Information panel works in an identical way.
Using the add function in the Things panel will display a window requiring a name and a current location.If the item created is located at a particular place that was previously created, it can be selected here.If the item is meant to be in the possession of a particular character, the location should be set to 'none', as this function is controlled in the Character section.Using the add character function shown in Figure 3 will display a new window with all of the character options.From here the user can add the character's name and initial location based on the places added previously.There are also options to use previously added material to include a list of that particular character's friends, items in their current possession, items they like, items they will not give up, and pieces of information they currently know.These items can be added or removed with the interface's buttons and combo boxes.Selected characters from the list may also be edited and deleted as necessary.
The Options panel contains an area to set the protagonist and antagonist of the current story.The protagonist is who will be used when the planner is looking for a solution, and the protagonist has been programmed to be the current thief in this particular story domain.
The actions area is used to modify four specific available actions within a story, including examining things, examining places, talking about information and imitating dinosaurs (scaring other characters).Selecting one option in the list and pressing the edit button will bring up a special window for that action containing combo boxes with different relevant story elements that can be used in combination to add an effect to that action.
Below the actions area is an option to add goals to this particular story, which determine the steps used by the planner to achieve them.This is important, as this determines how certain characters behave and interact with objects, places and other characters.Adding a goal will bring up a window asking to choose a character or a thing.If a character is chosen, the option to know, have, be at, etc. can then be chosen.Based on that selection, the appropriate set of objects added to the story will be available in the final area.For example, if a character is chosen with the word 'at' selected, then the final combo box will be populated with the list of places added to the story.This would subsequently create a goal requiring that at some point, the selected character must end up at the selected location.Likewise, if a Thing is selected, a location is the only goal it may have.
Functions and Planner
Once all elements of a story have been created, the user has the option to hit the 'Generate' button, where a number of functions take place.
The first of these functions is done in the CreateFile class, where all of the story information is parsed and outputted to a PDDL problem file using the appropriate syntax.This includes object declarations and types, as well as the entire initial state, including where characters and things are, which character currently has what object, what each character knows, etc.This file also contains the goals of the story, and is located in a new directory created in the project directory to house files associated with that particular story.
Once created, a classed called PlannerFunctions uses the build in the Process class to run a specific command through the terminal without opening it.This command is to locate and run the 'FF' planner in the project directory.This command is also programmed to locate the pre-defined story domain, and the newly created problem file to be run through the planner.In order to gather further information about the domain and problem files, the planner is also run in a specific configuration to output additional information about every object created, every action with their effects, and the entire initial state.If the designed story has a solution and parses correctly, the planner will find it and output all of this information, including a list of steps to achieve the contained goals.
This planner output is captured by the function and separated into three sections, including actions, objects/initial state, and steps.These three sections are then sent back to the main window and distributed to their corresponding text areas in the output panel at the bottom of the main window.
When a list of steps are generated, they are also sent to a method that uses particular situations to parse this data into more of a narrative-like format.This story is then displayed in the final output window called 'Narrative'.Once completed, any number of changes can be made to that story and re-generated to view different results.
Story Domain and Problem
The application has the ability to add and modify character actions from the actions section.This set of actions are used to make up the story Domain once the story is generated.The story domain used for illustration purposes in this document and during testing is based on the version created and used in interactive storytelling testing by Leandro Motta Barros and Soraia Raupp Musse from the University of the Sinos Valley (Barros & Musse, 2007).Small alterations were made to have it parse successfully with the FF Planner (Joerg) and this application.
The domain file is designed with specific objects, including all of the story elements available in the ANG interface to create a story.It also includes a set of default actions, including 'GoTo', 'Take', 'TalkAbout', 'ImitateDinosaur', 'ExaminePlace', 'ExamineThing', 'GivePresent', and 'AssumeTheft'.Each action can involve different object types, and can have modified preconditions and effects based on the way the user sets up the actions in the application.
For example, when using 'GoTo', a character will move from their current location to another location.'Take' involves a character taking an item from a place or a character.'TalkAbout' involves two characters talking about a particular piece of information in order to learn another piece of information.'ImitateDinosaur' is used when a character must take an object from another, but the owner of the object does not want to give it up.The other character imitates a dinosaur to scare them into giving it to them, resulting in that character no longer being friendly towards them.'ExaminePlace' is used for a character to look for items at the particular location.'ExamineThing' is used to gain a particular piece of information from an object.'GivePresent' is used for one character to give a gift to another.If the other character likes the particular object given then that character becomes friendly towards the other.Finally, 'AssumeTheft' is used to put an item at a particular location.
This application may be used to create a story of any type with any objects provided by the user, since all actions and story elements may interact with each other in the way that the user chooses.Currently, ANG does not support the addition of new actions.
The story problem used for this story domain consists of the declared objects, and the initial states of those objects.At the end contains the goals for the story, which result in the planner attempting to achieve those goals to provide a sequence of events for that story.Changing the types of goals will modify the outcome of every story in an attempt to achieve all of these goals.
Work in Progress
In the interest of time, the ability to save created stories to a file that can later be retrieved to avoid having to re-create stories was omitted.As this is a prototype to prove a particular concept, this function was considered less important than getting core functionalities working correctly.This could be done by creating a simple file when saving a story that contains lines of Strings including all of a story's information.A heading for each set of lines, such as Character or Place, would be useful in ensuring the information is structured in an organised way for easy retrieval.These files could then be retrieved at launch and their stories re-created accordingly.
Future Enhancements
This application is designed only to create a story's fabula, or sequence of events that take place.It cannot create dialog between characters.Future enhancements could use AI learning techniques to determine how a particular character might speak to another, or go into more depth about what action they may decide to take.This would require substantially more research for a working solution.
With further enhancements, the ability to attach the outputted narrative to a shell capable of reading it out loud with natural language processing could also be developed to provide further functionality for demonstration purposes.
Product
This entire project has an ultimate goal of allowing the ability to generate dialogue between characters with predefined attributes.After extensive research into this particular area of artificial intelligence, and into the structure of story-telling in general, it was decided that before this type of dialogue can be created, the story's most basic level, the fabula, must be constructed.This consists of low level actions and events that happen within a story.In that regard, the developed ANG prototype successfully achieves this goal, and allows anyone with no programming experience to use an interface to develop and generate these stories on his or her own.
The product includes certain predefined actions that may exist in any story, such as characters moving from one place to another, or taking objects from certain places.Additional actions, such as speaking to one another and examining objects and places can potentially include many different effects on characters and objects.By including the ability to manipulate these effects in the more complex story actions, there is virtually no limit to the complexity of the objects being created by the user to determine a full story.
While there are many way to improve the overall rigidity of the application, such as additional error handling and a more fluid narrative output, the product is a prototype that successfully delivers on its required specifications and ability to prove a concept.
Conclusion
Artificial Intelligence is rapidly shaping the future of many important aspects of everyday life.One particular area with growing interest in future advancements is entertainment, including television, film, gaming and general interactive and noninteractive means of story-telling.
Directors, producers, designers and developers are often looking for new techniques for engaging audiences into the world that they have created.A sense of immersion is key, and even with the best visual effects can be lost without interesting and unpredictable stories and characters within them.
The focus of this research and project is to analyse the feasibility of automating the most fundamental and lowest level component of a story; the fabula, which is comprised of a sequence of events that occur within a story.Through research it has been determined that in order to consider building an application complex enough to develop dialog between characters with predefined attributes regarding their personality, a story fabula must first be created.As these events are effect simple steps taken to achieve an ending, artificial intelligence planning has proven to be an effective way of providing results.
Automated story-telling through the use of artificial intelligence planning has proven to be an effective means of creating stories with varying results based on the intended goals.By altering these goals with new additions, the story results can change depending on those goals, and the type of search algorithms used within the planner.
The story example and its variations that can be developed using this product are very simple as a result of them being based on only the story's sequence of events.At present, this would be most useful as a teaching tool, allowing younger audiences to develop their own stories using characters.This would be useful for the development of their creativity and reading skills, and provide a more interactive and enjoyable way of learning these skills.
Future development into this idea has massive implications on what will be possible in the near-future.With these techniques, various aspects of scripts can already be pre-written, and computer-controlled characters in games can give the illusion of free will based on the way a user interacts with them.The limits of the human imagination are already being challenged with modern advancements into researching these topics, including going as far as scientists theorising that everyday life could possibly be nothing more than a simulation (Kinder, 2013).It is an exciting, if not somewhat terrifying thought when considering what may be possible within this lifetime.
Figure 1 .
Figure 1.Use Case diagram outlining the potential actions that may be carried out by the user and the software.
Figure 2 .
Figure 2. The main user interface after all story elements have been inserted, and a story has been generated.
Figure 3 .
Figure 3.The 'New Character' interface with available objects from the story as parameters for the new character. | 8,835.4 | 2015-01-18T00:00:00.000 | [
"Computer Science"
] |
The Role of Grain Size on Neutron Irradiation Response of Nanocrystalline Copper
The role of grain size on the developed microstructure and mechanical properties of neutron irradiated nanocrystalline copper was investigated by comparing the radiation response of material to the conventional micrograined counterpart. Nanocrystalline (nc) and micrograined (MG) copper samples were subjected to a range of neutron exposure levels from 0.0034 to 2 dpa. At all damage levels, the response of MG-copper was governed by radiation hardening manifested by an increase in strength with accompanying ductility loss. Conversely, the response of nc-copper to neutron irradiation exhibited a dependence on the damage level. At low damage levels, grain growth was the primary response, with radiation hardening and embrittlement becoming the dominant responses with increasing damage levels. Annealing experiments revealed that grain growth in nc-copper is composed of both thermally-activated and irradiation-induced components. Tensile tests revealed minimal change in the source hardening component of the yield stress in MG-copper, while the source hardening component was found to decrease with increasing radiation exposure in nc-copper.
Introduction
The continuously increasing energy demand, combined with a noticeable depletion in traditional energy resources all over the world, has revived interest in developing advanced nuclear power systems-both fission and fusion based [1]. Proposed designs for the next generation of fission nuclear power reactors (Gen-IV) require both the fuel and structural materials to serve in more extreme operating conditions than current light water reactor designs. These conditions are imposed in order to satisfy stringent requirements such as longer life cycle, higher efficacy of energy conversion, and safety during normal and accidental conditions [2][3][4]. Similarly, the plasma-facing materials in fusion reactors will encounter a harsh radiation environment in order to achieve a higher level of durability and higher quality plasma [5,6]. Accordingly, the search for fuel and structural materials with high radiation resistance has become an inevitable challenge for the nuclear industry [7,8]. The well-known deterioration of mechanical, thermal and physical properties of materials in radiation environments at macroscopic scale is attributed to the accumulation of radiation induced point defects which leads to the formation of microscopic scale defect structures such as dislocations and voids [9]. Thus, the ability of a material to eliminate irradiation-induced point defects determines its radiation tolerance [10]. Nanocrystalline (nc) materials are polycrystals with a grain size <100 nm characterized by a large volume fraction of interfaces and triple junctions [11]. Because grain boundaries act as sinks for irradiation-induced point defects, it was hypothesized that nc materials would possess enhanced radiation resistance compared to conventional micrograined (MG) materials [12,13]. This is based on the premise that both the thermal stability and mechanical integrity of the nc materials will be maintained under irradiation [14]. The miniscule grain size of nc materials provides an excess of short diffusion paths for irradiation-induced point defects to migrate and annihilate at grain boundaries. Many studies have confirmed the enhanced radiation resistance of nc metals and alloys under a range of irradiation conditions in terms of radiation type, exposure level, and temperature. El-Atwani et al. [15] characterized the radiation response of nc and ultrafine grained tungsten in an in-situ 2 keV He ion irradiation conducted at 950˝C. A lower bubble density was observed in nc tungsten (grain size < 60 nm) compared to ultrafine-grained tungsten (grain size 100-500 nm). Kilmametov et al. [16] showed that a fully dense nc Ti-50.6at.%Ni alloy with a grain size of 23-31 nm had higher resistance to irradiation-induced amorphization compared to its MG counterpart following 1.5 MeV Ar + ion irradiation at room temperature. The influence of grain size on the density of defect clusters was investigated by Rose et al. [17], who observed a proportional decrease in defect density with decreasing grain size in nc ZrO 2 and Pd. Furthermore, researchers have also reported enhanced radiation resistance characteristics in various ultra-fine grained steel alloys following neutron and ion irradiations when compared to their MG counterparts [18][19][20][21][22].
In contrast, other studies in literature have shown evidence of thermal and structural instability of nc materials under irradiation. Kaoumi et al. [23] conducted an in-situ ion irradiation study on nc Zr, Pt, Cu, and Au to determine how the microstructure evolves under irradiation. Irradiation-induced grain growth was observed in all samples in the investigated temperature range of 20-773 K. Similarly, Nita et al. [24] reported an increase in grain size of Cu-0.5Al 2 O 3 from 178 to 493 nm, accompanied with formation of stacking faults and dislocations, after a 590 MeV proton irradiation to 0.91 dpa. Irradiation-induced grain growth in nc transition metals was also reported by Brogesen et al. [25], who irradiated thin films of nc Ni, Co, Cr, V, and Ti with 600 keV Xe ions at liquid nitrogen temperature to eliminate any potential occurrence of thermally-activated grain growth. Karpe et al. [26] characterized the developed microstructure of Ar+ and Xe+ irradiated Fe and Zr-Fe thin films with a grain size of 70-120 nm, and observed an increase in grain size at all exposure levels.
Thus, there is disagreement in the literature on the radiation resistance of nc metals and alloys, and the hypothesis of enhanced radiation resistance for this class of materials remains questionable to-date. Accordingly, a firm conclusion on the potential of nc materials as reactor materials necessitates further research to elucidate the behavior of these materials in radiation environments. Copper, the element of interest in this study, plays a major rule in several nuclear applications due to its appealing thermal, mechanical, and physical properties. In the International Thermonuclear Experimental Reactor (ITER), the divertor components of the reactor are protected from the generated thermal energy of the plasma by the first wall which consists of a stainless steel shield bonded to a heat sink made of a copper-based alloy [27]. Additionally, the superconductor materials in fusion reactors are contained in a matrix of pure copper, which temporarily carries the electric current whenever the superconductors fail to do so [28]. Furthermore, copper is used in the fabrication of the canisters required for long term storage and isolation of spent nuclear fuel [29].
In this work, the influence of grain size on the response of nc-copper to fast neutron irradiation is investigated by exposing samples of nc-copper along with its MG counterparts to different damage levels. The scientific and technological importance of this work originates from the profound role of copper in nuclear applications as well as the obvious scarcity of neutron irradiation data in the literature of nc-copper in particular, and nc materials in general.
Materials and Samples
The nc-copper investigated in this work was synthesized via the electrodeposition technique by the 3M Corporation, St. Paul, MN, USA, while the MG-copper samples were legacy materials from the Nuclear Material Laboratory at North Carolina State University. Energy dispersion spectroscopy (EDS) on a Hitachi S-3200 scanning electron microscope (SEM, Hitachi High Technologies in America, Clarksburg, WV, USA) was utilized to determine the copper purity of the MG and nc-copper samples, and they both were found to be 99.999%. In order to evaluate the microstructure and mechanical properties pre-and post-irradiation, sample of different geometries were prepared and irradiated: (i) 3 mm discs for microstructure characterization via transmission electron microscopy (TEM); (ii) 2 mm gauge length miniature tensile samples for tensile tests; and (iii) 3 mmˆ5.3 mm plates for hardness measurements as well as non-destructive microstructure characterization techniques such as X-ray diffraction (XRD), atomic force microscopy (AFM), optical microscopy (OM), and SEM.
Irradiation Experiments
In this work, two irradiation facilities were utilized to expose the copper samples to a range of damage levels. The PULSTAR (This name refers to the ability of the reactor to produce short pulses of intense radiations) reactor in the Department of Nuclear Engineering at North Carolina State University was used for the low-dose irradiation and the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL) for high dose exposures.
Low Dose Irradiation at PULSTAR
PULSTAR is a 1MWth open pool reactor fueled with low enrichment UO 2 in Zircaloy cladding with light water serving as a moderator and reflector. Samples of nc and MG-copper were sealed in evacuated quartz tubes and loaded in an aluminum canister (Figure 1a) inserted into a cadmium wrapped aluminum column in order to eliminate absorption of thermal neutrons by the samples. This minimizes the irradiation-induced activity via (n, γ) reactions, reducing the cooling time required for safe handling of the irradiated samples. Due to the inherent structure and other reactivity considerations of the PULSTAR core, the column containing the samples was not allowed to be irradiated near the core, where high neutron flux is achievable. Rather, the samples were irradiated in a vertical irradiation tube (West Rotating Exposure Port, WREP) at the core boundary (see Figure 1b), which limited the exposure level achievable in a reasonable time frame. High purity (99.999%) Ni foils were irradiated at the same vertical position as the aluminum canister and measurements of the induced activity in the foils were utilized to estimate the integrated fast neutron flux (E > 1.98 MeV) at the irradiation position; the flux is found to be~2ˆ10 12 n cm 2 sec . The copper samples were irradiated in the PULSTAR for 200 h at full power and the corresponding damage level in the samples was estimated to be~3.4ˆ10´3 dpa. Based on as-run experiments in PULSTAR, the maximum ambient temperature experienced by the copper samples during the irradiation was 55˝C.
High Dose Irradiation at ATR
Two capsules holding samples of both nc and MG-copper were irradiated in the center position of the East Flux Trap (EFT) at position E-7 in the ATR core ( Figure 2) [30]. Within each capsule, there was a test train assembly consisting of vertically stacked aluminum blocks designed to accommodate the different sample geometries. A thin aluminum disc was tack welded to the open end of each block to hold the samples, and the sample holder assemblies were strung together using thick aluminum wires, as depicted in Figure 3. Each test train was then sealed in a stainless steel capsule to prevent contact with the coolant water. The irradiation test assembly, as illustrated in Figure 2b, is comprised of the experiment basket, sleeve, and capsule assemblies. The assemblies contain the test trains (aluminum blocks and samples). The experiment basket of the test assembly is an aluminum tube that was designed to interface the capsule assembly with the EFT position E-7 in the ATR. The two capsules were irradiated concurrently for three ATR reactor cycles (144A, 144B, and 145A) to accumulate~1 dpa at damage rate of~7.52ˆ10´7 dpa/s. At the end of the first three cycles one capsule was withdrawn from the reactor core and the other capsule was irradiated for additional three cycles (145B, 146A, and 146B) to accumulate a total of~2 dpa of damage. The irradiation temperature of the copper samples in the capsules was calculated using the finite-element-based code Abaqus [31] in conjunction with Monte Carlo N-Particle (MCNP) code [32]. MCNP was utilized to provide the heat generation rate in each part of the capsule, which was then input into the Abaqus model. According to the calculated temperature distribution profiles shown in Figure 4, the irradiation temperature of the copper samples ranged from 70˝C to less than 100˝C.
Microstructure Characterization and Mechanical Testing
Several microstructural characterization techniques were utilized in order to investigate the microstructure of the copper samples pre-and post-irradiation. OM was utilized to determine the grain size distribution (GSD) and the average grain size of MG-copper using a chemical etchant consisting of 25% NH 4 OH + 25%H 2 O + 50% H 2 O 2 . The limited resolution power of OM prevented its applicability for characterization of the nc-copper samples. X-ray diffraction (XRD) patterns were recorded from MG-and nc-copper, pre-and post-irradiation, by a Rigaku smart lab diffractometer using CuK α radiation. The peak broadening observed in the nc-copper diffraction pattern enabled estimation of the average grain size, using both the Scherrer formula [33,34] and the Williamson-Hall plot method [35]. Because XRD analysis provides only the average grain size, other characterization techniques were employed to establish the grain size distribution (GSD) of nc-copper. Atomic Force Microscopy (AFM) (Veeco-D3000, Veeco, Plainview, NY, USA) was utilized to determine both the average grain size and the grain size distribution of nc-copper. Its high resolution power allows counting of tens to hundreds of nano-grains over a scanning area of only a few micrometers. Microstructural characterization and analysis of defect structures were done using TEM. Typical 3 mm TEM discs were punched from both MG-and nc-copper, thinned down mechanically to a thickness of~80 µm, and subjected to electrochemical thinning in an electrolyte solution of 10% nitric acid + 90% methanol maintained at 18˝C to create a thin electron transparent area on the foil. SEM was utilized to analyze the grain size of nc-copper.
Tensile testing and microhardness indentations were utilized to assess the influence of neutron irradiation on the mechanical properties of both MG-and nc-copper. All microhardness measurements reported in this work are based on a Vickers hardness setup using the Buehler OmniMet ® microhardness testing system (Buehler, Lake Bluff, IL, USA). Although microhardness measurements directly reveal a material's hardness, it lacks information on other essential mechanical characteristics, such as ductility and toughness of the material, which can be determined through tensile testing. The limited availability of nc-copper material necessitated the utilization of sub-size tensile samples to assess the mechanical behavior. Although tensile samples of MG-copper could have been machined according to the American Society for Testing and Materials (ASTM) standard for tensile testing, it was decided instead to use sub-size tensile samples to avoid any potential effect related to sample geometry or dimensions when comparing to the nc-copper results. Tensile testing of the sub-size tensile samples was conducted with a miniature tensile tester (Figure 5a,b) that was built specifically for this purpose in the Nuclear Materials Laboratory at North Carolina State University (Typical tensile grips were used for tensile testing of irradiated samples at INL using Instron 5967 dual column testing system (Instron, Grove City, OH, USA). All tensile tests were conducted at room temperature at a constant strain rate of 10´5 s´1. Figure 6a is an optical micrograph of as-received MG-copper used to determine the GSD of the material. The average grain size of MG-copper was found to be~38˘12 µm. Figure 6b shows TEM microstructure of the as-received MG-copper depicting no major defects. The average grain size of nc-copper was determined through several methods, including XRD, AFM, and TEM image analyses. Figure 7 depicts the XRD patterns of nc-and MG-copper, where major reflection peaks are identified. The difference in peak broadening between nc-and MG-copper is primarily due to crystallite size-induced broadening. Due to the grain size difference between nc-and MG-copper, there is substantial broadening induced in nc-copper, but grain size has almost no effect in the diffraction pattern of MG-copper. Analyses of the XRD pattern of nc-copper based on both the Scherrer formula and the Williamson-Hall plot method indicated average grain sizes of 17 and 44 nm, respectively. The GSD of the as-received nc-copper was established from an AFM micrograph taken over a 1 µmˆ1 µm scanning area (Figure 8a, b). It can be seen that the grain size of nc-copper varied between 10 and 100 nm, and the corresponding average grain size of the material was found to bẽ 48˘16 nm. The bright field TEM image of the as-received nc-copper, along with the corresponding diffraction pattern, is shown in Figure 9a where the near-to-complete diffraction rings are characteristic of a nanocrystalline material with randomly-oriented grains. Due to the large number of grains in each specific crystallographic plane, nc-copper forms rings rather than individual diffraction spots as observed in MG-copper. The GSD distribution of nc-copper based on TEM characterization is shown in Figure 9b and the corresponding average grain size of the material was found to be~28˘11 nm. Finally, the average grain size of as-received nc-copper was defined to be~34.4 nm from averaging all the values obtained by XRD, AFM, and TEM techniques.
Microstructure and Mechanical Properties of As-Received Materials
Hardness measurements were made over an 8 mmˆ6 mm area of the as-received materials to ensure homogeneity of the microstructure. The average microhardness of MG-and nc-copper was found to be~0.6˘0.02 GPa and 2.5˘0.05 GPa, respectively. Further evaluation of the mechanical properties of the as-received materials was achieved through tensile testing of sub-size tensile samples of MG and nc-copper. The resultant engineering stress-strain curves shown in Figure 10 were utilized to determine the engineering yield stress (S y ), the ultimate tensile strength (UTS), uniform strain (e u ), total engineering strain (e t ), and the strain hardening exponent (n) of both materials as listed in Table 1. From the data in Table 1, MG-copper possesses much higher ductility and toughness (from the uniform and total strain values) compared to nc-copper. From yield and UTS values, nc-copper has a higher overall strength compared to MG-copper as expected from grain refinement. The difference in strength and ductility between nc-copper and its micrograined counterpart is germane to the difference in grain size reported in the preceding subsection. The material's strength evolves with an increasing density of pinning points, such as grain boundaries, which are capable of hampering the mobility and propagation of imperfections (dislocations). Thus, it is plausible to ascribe the observed high strength and poor ductility of nc-copper to the increased grain boundary density upon grain refinement, and the ability of those grain boundaries to act as pinning points. This effect is commonly referred to as the Hall-Petch grain boundary strengthening mechanism [36] accordingly: where σ y is the yield stress; σ i is the friction stress; K y is strengthening coefficient (a material constant); and d is the grain size. According to Equation (1), the yield stress of a material increases with decreasing the average grain size which explains the observed high yield stress of nc-copper (grain size~34 nm) compared to that of its MG counterpart (grain size~38 µm).
Mechanical Properties and Microstructure of Irradiated MG-Copper
Microhardness measurements and tensile testing of irradiated MG-copper were conducted following the same procedures as for the as-received material. Microhardness measurements of irradiated MG-copper, listed in Table 2, reveal a steep increase in hardness due to 0.0034 dpa of damage. This sudden change in hardness is followed by a saturation, which may start below or at about 1 dpa. Saturation in hardness of neutron irradiated oxygen-free high-purity copper was observed by Singh et al. [37] to occur between 0.1 and 0.2 dpa, consistent with the behavior observed in this study. Figure 11 shows the engineering stress-strain curves of MG-copper, and the average mechanical properties of the material are listed in Table 1 from which the evolution of the mechanical behavior of MG-copper with damage level can be summarized by an increase in strength (both yield and ultimate) accompanied by a loss in ductility (both uniform and total elongation), jointly referred to as irradiation hardening and embrittlement. Close scrutiny of the mechanical properties of irradiated MG-copper indicates a more profound increase in yield stress compared to the UTS. Moreover, the difference between yield and ultimate strength was found to diminish with increasing damage level implying decreased work hardening (n) with increased neutron dose. Of interest is the yield drop phenomenon clearly observed in MG-copper at 2 dpa ( Figure 11). Using OM, the average grain size of irradiated MG-copper was found to be 39˘7, 37˘11, and 49˘14 µm after 0.0034, 1, and 2 dpa, respectively. Thus, it is possible to state that no grain growth occurred in irradiated MG-copper. This is consistent with the fact that thermally-induced grain growth occurs in MG-copper only at relatively elevated temperatures (> 600˝C) [38] while the maximum irradiation temperature experienced by MG-copper samples was less than 100˝C ( Figure 4). Characterization of the irradiation-induced defect structures in MG-copper using TEM revealed dislocation loops and networks in the grain interior at 0.0034 dpa ( Figure 12). As the damage level increased to 1 dpa, dislocation loops and networks were observed in both the grain interior (Figure 13a) as well as at grain boundaries (Figure 13b). Inspection of the TEM micrographs (Figures 12 and 13) indicates a higher dislocation density in the more heavily damaged sample. A twin structure was also observed (Figure 13c), a feature that became more common with increasing damage level. At 2 dpa, TEM characterization of MG-copper revealed formation of relatively higher dislocation density (Figure 14a) along with abundant twin structures in grains with curved boundaries as depicted by the dotted curve in Figure 14b. At this point, the relationship between yield strength and the developed microstructure in irradiated MG-copper can be elaborated by considering the yield stress in irradiated material as [39]: where σ i is the friction hardening; and σ s is the source hardening. Source hardening is commonly found in irradiated FCC metals (e.g., copper) where radiation-induced defects are present close to Frank-Read (F-R) sources. This increases the stress required for F-R operation and consequently contributes not only to increasing the yield stress but also yield point phenomena of irradiated materials. Friction hardening is the stress experienced by the mobile dislocations encountering irradiation-induced obstacles such as precipitates, voids, or other dislocations during their glide/slip. In the context of radiation hardening, the friction hardening component is usually decomposed into two components as follows: where σ SR is short range friction hardening; and σ LR is long range friction hardening [39]. The classification here depends on the type of obstacles responsible for inhibiting dislocation motion. σ SR arises from dislocation pinning by irradiation-induced defects such as voids and precipitates. As no voids or precipitates were observed in irradiated MG-copper, σ SR can be set to zero in Equation (3). σ LR arises from the repulsive force experienced by a mobile dislocation due to long range stress fields of forest dislocations. The contribution of long range friction hardening to the yield stress of irradiated material is given by [40]: where α is a constant, G is the shear modulus, b is the Burgers vector, and ρ d is the dislocation density. Equation (4) indicates that the long range component of friction hardening, and consequently the overall yield stress, is proportional to the dislocation density in the irradiated material. Thus, it is plausible to ascribe the continuous increase in yield stress of irradiated MG-copper with exposure level to the observed increase in radiation induced defects (dislocations in particular). An approach to decompose the yield stress in irradiated MG-copper into source hardening and friction hardening components will be discussed in Section 3.4 of this article.
Mechanical Properties of Irradiated nc-Copper
Microhardness indentation and tensile testing were conducted on nc-copper after irradiation following the same procedures applied to MG-copper. According to the microhardness measurements listed in Table 2, irradiated nc-copper exhibits a steep decrease in hardness following 0.0034 dpa irradiation. The decrease in hardness seems to saturate either below or at 1 dpa. After that, only a minor increase in hardness was observed between 1 and 2 dpa. This is in contrast to irradiated MG-copper where hardness was found to increase at all damage levels achieved in this study. Representative engineering stress-strain curves of irradiated nc-copper are shown in Figure 15 and the average mechanical properties of irradiated nc-copper based on the analysis of stress-strain curves of two samples are included in Table 1. We note that nc-copper exhibited a substantial decrease in yield stress and UTS, accompanied by an increase in total elongation, following 0.0034 dpa irradiation. This decrease in both yield stress and UTS post irradiation is referred to as irradiation-induced softening [41,42]. Yield stress and UTS further decreased after 1 dpa irradiation, although less dramatically. This was accompanied by a substantial decrease in both uniform and total elongation to below even its pre-irradiation values. Finally, nc-copper exhibited typical radiation hardening and embrittlement at 2 dpa, manifested by an increase in both yield stress and UTS accompanied by a loss of ductility. Thus, based on the analyses of hardness measurements and mechanical properties of irradiated nc-copper the following two observations are made: (i) radiation softening was noted in the material up to 1 dpa; and (ii) nc-copper exhibited common radiation hardening at 2 dpa. This differed from the radiation response of MG-copper, where radiation hardening was observed at all damage levels. Figure 16 shows XRD patterns of irradiated nc-copper compared to the as-received material. The observed decrease in peak broadening (in terms of FWHM) at 0.0034 dpa implies an increase in the grain size at this damage level; the average grain size of nc-copper at 0.0034 was found to bẽ 70 nm. The reduction in peak broadening continued through 1 and 2 dpa, where peak broadening was below the limit to determine the average grain size with XRD. The variation in peak intensity in XRD patterns from one damage level to another indicates a change in the relative grain population in a particular crystallographic direction. Thus, the 0.0034 dpa irradiation resulted in a rearrangement of grain orientation such that the peak of highest intensity changed from (111) to (200). For both 1 and 2 dpa samples, the (111) peak exhibited the highest intensity, similar to the case of as-received material. Finally, the presence of only the four major diffraction peaks of copper at all damage levels indicates no second phase formation in the irradiated material. Figure 17 shows an AFM image of nc-copper following 0.0034 dpa and the average grain size of the material was found to increase from 48˘16 to 65˘10 nm after irradiation. Therefore, the results from both XRD analysis and AFM suggest that nc-copper underwent grain growth during 0.0034 dpa irradiation. TEM characterization revealed several microstructural features in nc-copper at 0.0034 dpa. Figure 18a shows the presence of twins and dislocation structures in the irradiated material. These defect structures were not observed in the as-received material, implying that some grains have grown enough to accommodate these defects. However, this sample still contains nanosized grains, as observed in other regions, as indicated by near to complete diffraction rings in Figure 18b. The GSD of nc-copper at this damage level was established by combining the distributions from both AFM and TEM characterization ( Figure 19) and the corresponding average grain size was found to be 86˘38 nm. The relatively wide range of grain size observed in the material suggests the occurrence of non-uniform grain growth at this exposure level. Formation of twin and dislocation structures are noted in nc-copper irradiated to 1 dpa (Figure 20a). In addition, nc-copper with this damage level exhibited formation of twin structures at faceted grain boundaries as depicted by dashed lines in Figure 20b. At 2 dpa, twin and dislocation structures were the most pronounced microstructural features ( Figure 21). Interestingly, we note twin structures at curved grain boundaries, similar to that observed in MG-copper at the same damage level (depicted by the dotted curve in Figure 21b). It is worthwhile to mention that nanograins were not observed in any TEM foil from nc-copper after either the 1 or 2 dpa irradiations. However, attempts to determine grain size of nc-copper at 1 and 2 dpa using OM were not successful, as the grain sizes were still too small to be resolved with this technique. Thus, the average grain size and GSD of irradiated nc-copper at 1 and 2 dpa were determined using SEM (Figure 22) to be~0.8˘0.6µm and 0.75˘0.5 µm at 1 and 2 dpa, respectively. Thus, irradiated nc-copper exhibited an increase in the average grain size with exposure levels starting at 0.0034 dpa. After that, saturation in grain growth occurred at around 1 dpa, as depicted by the dotted lines in Figure 23. At this point, it is possible to elaborate the structure-property relationship in irradiated nc-copper by considering the Hall-Petch relationship (Equation (1)) and the friction and source hardening components comprising the yield stress (Equation (2)):
Microstructural Characterization of Irradiated nc-Copper
The substantial decrease in the yield stress of nc-copper from~557 to 371 MPa at 0.0034 dpa is attributed to the increase in grain size from~34 to 86 nm. As grain growth persisted, the average grain size increased to the submicron level (~800 nm) at 1 dpa, resulting in a further decrease in the yield stress. At this damage level, grains in irradiated nc-copper have grown enough to accommodate complex forms of defects, such as dislocations and twins. At higher doses the yield stress decreased moderately from~371 to 357 MPa, accompanied by loss of ductility reflecting the grain size saturation. Accordingly, it is plausible to state that the mechanical behavior of nc-copper at high doses was not solely controlled by grain growth. After 1 dpa, no further grain growth was observed in irradiated nc-copper, so the mechanical behavior of the material was governed by common radiation hardening and embrittlement. This was manifested by an increase in yield stress from~357 to 388 MPa, accompanied by ductility loss.
Grain Growth in Irradiated nc-Copper
The overall mechanical behavior and microstructural evolution in irradiated nc-copper reveals that grain growth has a detrimental effect on the overall radiation response of the material, even at very low exposure levels. This necessitates investigating how grain growth in nc-copper originated under the irradiation conditions in this study. Thermally-activated grain growth has been observed in nc metals and alloys well below the temperature required to trigger grain growth in its MG counterparts [43]. Thereby, a series of annealing experiments followed by hardness and grain size measurements were conducted on as-received nc-copper to assess its thermal stability. Samples of as-received nc-copper were isothermally annealed in vacuum for three hours at temperature ranges from~300 to 750 K. Subsequently, grain size measurements were conducted and the average grain size was determined. Figure The variation of grain size with annealing temperature reveals a sudden increase in the grain growth rate at~520 K. The change in grain growth rate implies that thermally-activated grain growth in nc-copper is controlled by two distinct mechanisms. Kinetics of isothermal grain growth is typically described by the following rate equation: where D is the grain size at time t; K 0 is a pre-exponential constant; n is the grain growth exponent; Q is the activation energy for a specific grain growth mechanism; R is the gas constant; and T is the annealing temperature. Thus, the activation energy for thermally-activated grain growth in as-received nc-copper was determined by plotting grain size versus inverse temperature for several grain growth exponent (n) values (see Figure 25). With n set to 5, the activation energy for grain growth of as-received nc-copper at temperatures above~520 K was found to be~55 kcal/mole. This is in reasonable agreement with the 46.8 kcal/mole reported for the activation energy of lattice diffusion in MG-copper [44]. At temperatures below 520 K, the activation energy of nc-copper was found to be 22 k kcal/mole for n = 5. This is in agreement with the activation energy for grain boundary diffusion reported in some nc metals and alloys [45]. In addition to isothermal annealing experiments, Differential Scanning Calorimetry (DSC) was used to determine the temperature at which thermally activated grain growth is triggered in nc-copper. Details about the theory behind this technique can be found elsewhere [46]. DSC Q2000 series from TA Instruments (Wood Dale, IL, USA) was used to anneal a 3 mm disk of the as-received nc-copper at a rate of 10 K/min and Figure 26 shows the heat flow through the sample as a function of measured sample temperature. According to the DSC scan, the onset of thermal instability in as-received nc-copper occurs at approximately 450 K (~170˝C) and the full peak is observed at 520 K (~235˝C) (Figure 26). Recalling the isothermal annealing data of as-received nc-copper, grain growth via grain boundary diffusion was dominant at~450 K which coincides with the onset of thermal instability observed with the DSC scan. Furthermore, the transition from grain boundary diffusion to lattice diffusion at~520 K (247˝C) is in good agreement with the formation of the full peak thermal instability at~235˝C in the DSC scan. Thus, it is concluded that temperature regime in which thermally-activated grain growth is controlled by grain boundary diffusion represents the onset to thermal instability in nc-copper. Thus, the observed grain growth in irradiated nc-copper can be separated into thermal and irradiation effects. Additional isothermal annealing experiments were conducted on samples of nc-copper at 328 K (55˝C) for 200 hrs and at 373 K (100˝C) for up to 740 hrs. This mimics the highest irradiation temperature and time in the PULSTAR and ATR, respectively.
The hardness of the annealed samples with annealing time at the two temperatures is included in Figure 27a,b. The as-received nc-copper annealed at 55˝C for 200 hrs exhibited a noticeable reduction in hardness. This implies an increase in grain size in light of the Hall-Petch equation. However, thermally-activated grain growth induced by annealing cannot be the sole cause of the observed decrease in hardness of nc-copper after the 0.0034 dpa irradiation. Thus, it is possible to split the grain growth in irradiated nc-copper at 0.0034 dpa into two components: (i) thermally-activated component (indicated by dotted arrows in Figure 27a); and (ii) irradiation-induced component (indicated by solid arrows in Figure 27a). Clearly, the irradiation-induced component of grain growth is more dominant at this damage level. Figure 27b indicates that grain growth in nc-copper irradiated to 1 and 2 dpa can be attributed to both a thermally-activated component and an irradiation-induced component, as in the 0.0034 dpa case. Both components contribute approximately equally to the overall grain growth at these exposure levels. While the mechanisms controlling thermally-induced grain growth in nc-copper were able to be identified, determination of the underlying mechanism for irradiation-induced grain growth in in-pile experiments is not possible due to limited data. Studies investigating the mechanisms of radiation-induced grain growth are primarily based on computational and simulation efforts [47]; however, there is no well-defined explanation of the mechanisms for this process to-date. Alternatively, the researchers are considering in-situ ion irradiation experiments combined with real-time TEM to understand the driving force and mechanisms underlying radiation-induced grain growth in nc-copper.
Radiation Hardening in Polycrystalline Copper
Radiation hardening was observed in both MG-copper and nc-copper. In MG-copper, radiation hardening was observed at all damage levels while it became dominant in nc-copper only at higher damage levels. In this section, radiation hardening in irradiated MG and nc-copper will be decomposed into source and friction components in light of Equation (5). The variation in yield stress of irradiated MG and nc-copper with grain size is plotted at each damage level in Figure 28. This allows investigating the influence of grain size and exposure level simultaneously on the mechanical behavior of polycrystalline materials. From Figure 28, the variation in the yield stress of irradiated polycrystalline copper can be thought to follow a general Hall-Petch behavior, albeit only two data points (representing MG and nc-copper) are present. A close scrutiny of the data reveals that the slope of Hall-Petch line inconsistently changes from one exposure level to the other. Friction and source components were examined by fitting the two data points at each damage level to a straight line. The slope of that straight line corresponds to the unpinning stress K y and the source hardening can then be calculated as K y ? d for both MG and nc-copper. Consequently, the friction hardening is calculated as the difference between yield and source hardening stresses. This approach relies on the premise that the straight lines are representative of the effect of radiation exposure on the yield stress and the unpinning stress, K y . The variation in friction hardening of polycrystalline copper is shown in Figure 29a. Clearly, irradiated polycrystalline copper exhibited an increase in friction hardening with increasing exposure level, as depicted by the dashed curve (Figure 29a). This is attributed to the observed increase in dislocation density in both MG and nc-copper with increasing exposure level. The source hardening component of the yield stress versus damage level is shown in Figure 29b and nc-copper exhibits a continuous decrease in source hardening with damage level. This is ascribed to the observed increase in grain size from 34.4 nm pre-irradiation to about 1 µm after irradiation to 1 and 2 dpa. Conversely, the source hardening of irradiated MG-copper does not vary significantly with damage, implying that source hardening has only a minimal contribution to the yielding of irradiated MG-copper.
Summary and Conclusions
The impact of grain size on the response of MG and nc-copper to fast neutron irradiation was assessed by evaluating the mechanical behavior and microstructural evolution in the material pre and post-irradiation. The following remarks were drawn: ‚ MG-copper exhibited typical radiation hardening and embrittlement at all damage levels achieved in this work.
‚ At low exposure levels, nc-copper experienced grain growth, and radiation softening was noted by a dramatic decrease in strength accompanied by increased ductility.
‚
The increase in grain size of irradiated nc-copper allowed formation of complex defect forms, such as twins and dislocations, at higher damage levels.
‚ Radiation hardening became dominant in irradiated nc-copper after grain growth saturation at higher exposure levels.
‚ Analysis of isothermal annealing and hardness measurements revealed that grain growth in nc-copper is composed of both thermally-activated and irradiation-induced components.
‚
The yield stress data of irradiated MG and nc-copper were analyzed on the basis of the Hall-Petch relationship.
‚ Friction hardening was found to increase with an increasing damage level in polycrystalline copper.
‚ Source hardening in irradiated nc-copper was found to decrease with an increasing damage level, while it has minimal contribution to the yield stress of irradiated MG-copper. | 8,750 | 2016-03-01T00:00:00.000 | [
"Materials Science"
] |
Thinking with Rosa: assent in philosophy of the Islamic world
ABSTRACT In Thinking with Assent: Renewing a Traditional Account of Knowledge and Belief, Maria Rosa Antognazza offers a historical narrative of pre-modern epistemology. She argues that until very recently, philosophers generally held that “knowing and believing are distinct in kind in the strong sense that they are mutually exclusive mental states”. This paper tests, and ultimately confirms, that account by applying it two thinkers of the Islamic world, al-Fārābī (d.950 CE) and Ibn Sīnā (Avicenna, d.1037 CE). It is shown that both of them used the term ‘assent (taṣdīq)’ as an umbrella term covering two very different states, knowledge and belief. In the case of Ibn Sīnā, this contrast is ultimately tied to his sharp distinction between immaterial intellective thinking and embodied thinking that uses a physical organ.
'renewing' promised in the title.The second section of the book develops and defends the traditional account in philosophical terms; the third section applies the insights of the first two to problems in religious epistemology.
My aim here will be much more modest.I simply want to complement (and compliment) the first section of Rosa's book by looking at a non-European tradition, namely philosophy in the Islamic world.Actually, what I will offer is less ambitious even than that, since I will have space only to treat two of the most famous of the falāsifa, that is, thinkers directly inspired by the Greek tradition: al-Fārābī (d.950 CE) and Ibn Sīnā (Avicenna, d.1037 CE).They will provide us with a test case for Rosa's claim that the 'traditional account' was indeed pervasive in philosophy up until early modernity.My brief investigation will ultimately ratify that claim, especially in the case of Ibn Sīnāa significant bit of confirmation, since Ibn Sīnā's stature and influence in the Islamic world was unparalleled.It is especially worthwhile to take up this particular case study, because al-Fārābī introduced, and Ibn Sīnā carried on, the practice of making the notion of 'assent' central to Arabic epistemology, just as it is central to Rosa's book.She uses 'assent' as the covering term for both knowledge and belief, and as we'll see, this too corresponds to what we find in the Islamic tradition.
Before we get to that, though, let's dwell for a moment on Rosa's reconstruction of the traditional account.As already mentioned, this account makes knowledge not a type of beliefe.g.justified true belief, or justified true belief with the addition of something furtherbut rather a sui generis mental state.Knowledge is grasping the "presence" of an object of knowledge, a "natural, primitive, effective contact between cognizer and cognized" (Antognazza, Thinking, §I.5).It can be, and in the tradition often was, compared to seeing.Rosa thus quotes Thomas Aquinas' remark, "The reason why the same thing cannot simultaneously and in the same respect be known and believed, is that what is known is seen whereas what is believed is not seen".1By contrast, 'belief' is reserved for less direct modes of cognition.Thus, says Rosa, "knowing is not 'the best kind of believing'; nor is believing to be understood derivatively from knowing as 'knowledge minus something'" (Antognazza, Thinking, Introduction).One implication of this is that knowledge is binary -'on/off' so to speakwhereas belief comes in degrees, the phenomenon modern-day epistemologists refer to as levels of 'credence'.
The upshot is that, whereas modern-day epistemology has most often (though not without exception) embraced a picture like this: the traditional account adopts the following analysis: Notice that in this second diagram, assent without knowledge is not designated as 'mere belief'. 2 Far from denigrating the epistemic status of belief, Rosa adopts the slogan 'knowledge first, but give belief its due'.She is convinced, and convinced that many pre-modern and early modern philosophers were convinced, that belief plays a crucially important role in our lives.It need not be epistemically deficient.Often, belief is in fact the best mental state we can reach.Rosa remarks that "a crucial part of our successful cognition is constituted by justified belief, which tracks truth if and when knowing is out of one's cognitive reach" (Antognazza, Thinking, §12.1.1).
To give an example that is prominent in her book, for many religious thinkers our grasp of God consists mostly or entirely of beliefs, not knowledge.In this life we do not enjoy God's 'presence' as we will in the beatific vision eagerly anticipated by such Christian thinkers as Aquinas.Another example might be 'knowledge by testimony', which on the traditional account is not in fact knowledge, unless testimony somehow puts us in contact with the objects at stake (Antognazza, Thinking, §5.3.2).Still, through testimony, we can get into a very favourable relation to the truth of a given propositionwe might be overwhelmingly 'justified' and even rightly take ourselves to be 'certain' that the proposition is true.It seems wrong, or at least misleading, to describe such cases as instances of 'mere' belief.In fact, as Rosa points out, the contrast between beliefs based on extremely reliable testimony and knowledge typically makes no practical difference (Antognazza, Thinking, §5.3.2).But they are still beliefs, and not knowledge, because they are assents that do not involve direct grasping.These considerations should forestall any suspicion that the above diagrams are in fact equivalent, with belief in general being re-named 'assent', and then subdivided into knowledgeable beliefs and mere beliefs.Instead, one may capture the contrast between highly certain, wellfounded beliefs and mere beliefs by integrating that contrast into the second diagram: Depending on one's epistemological predilictions, one could invoke 'warrant', or whatever is represented by the '+' in 'JTB+' models of knowledge, to spell out what a 'well-founded' belief is.But no addition will change belief into knowledge, because beliefs are a fundamentally different cognitive phenomenon from knowledge.It would be like trying to turn a cat into a bird by adding enough feathers.Now let us turn to the Islamic tradition, beginning with the author who, as already mentioned, was the first to make 'assent' a key term in Arabic epistemology.The word I am translating here as 'assent' is taṣ dīq, which literally means 'deeming true'.This is the standard translation in the secondary literature, though other renderings have been offered, including 'belief'; we'll come back to this.Taṣ dīq forms a pair along with taṣ awwur, which is usually translated as 'conception' (for the contrast see Butterworth, "À propos"; Lameer, Conception and Belief; Maróth, "Taṣ awwur and Taṣ dīq"; Wolfson, "Taṣ awwur and Taṣ dīq").This word comes from the verbal root used for 'forming', as of a picture or representation; thus the Greek word eidos, 'form', was translated into Arabic as ṣ ūra, which has the same root.
Conception is grasping the meaning of a term, phrase, or proposition, but without commitment to truth.Most often, the object of conception is a single term, in which case the question of truth would not arise anyway.By contrast assent is committing to the truth of a proposition.To put it in other, equally sketchy, terms, conception is to understand what something is or what something means, while assent is to think that something is the case.Here is a more technical explanation from al-Fārābī: Knowledge (maʿrifa) is of two types: conception (taṣ awwur) and assent (taṣ dīq).Each of these may be either complete or deficient … Complete assent is the same as certainty (huwa al-yaqīn), while complete conception is conceptualizing something in a way that encapsulates the thing's essence.Namely that the thing be conceptualized in terms of what is signified by its definition.Now, we can build on these two [ideas] to explain what precisely we mean by complete assent.We say that assent in general is when a human judges (yaʿtaqidu) that something is the case and accordingly judges that the existence of that thing outside the mind is in accord with what is held in the mind.Truth, then, is when the thing outside the mind in fact is in accord with what is held in the mind … and certainty is when we judge about that to which the assent applies, that it cannot at all be otherwise than we judge it to be. 3 (Burhān, 19-20) There's a lot to unpack here, principally the relationship of conception and assent to knowledge, and the concept of complete and incomplete conception and assent.
Starting with the former, it is immediately striking that al-Fārābī makes conception and assent types of knowledge.This already is a hint that Rosa's 'traditional account' may be playing a role.Obviously if beliefs fall under this scheme at all, they should appear on the assent side, since believing something is taking it to be true.The fact that some cases of knowledge are 'conceptions' thus shows that whatever knowledge is, in general, it is not just a species of belief.On the other hand, when al-Fārābī introduces conception and assent as types of knowledge, he does not mean to suggest that every case of assent is knowledge.This will be clear as we go on, but it is in any case quite intuitive: people frequently assent without knowing, as when they assent to false propositions.Rather, al-Fārābī's point is that all conceptions, and some assents, count as knowledge: The reader is advised not to put any weight on the phrase 'epistemic states', which is not taken from al-Fārābī; I have introduced it as a purposefully vague umbrella term that can cover the very different phenomena of conception and assent.The reader is, by contrast, advised to put plenty of weight on the lack of any contrast between knowledge and non-knowledge within the category of conception.This is because to conceptualize something is to know it, at least partially.Ibn Sīnā will later say that something "may be unknown by way of conception" (Ibn Sīnā, Ishārāt, 41), but by this he does not mean having a false conception.Rather, it means simply lacking any conception of that thing.Hence he adds that in such a case, "its meaning is not conceived until one learns [other] concepts" (Ibn Sīnā,Ishārāt,41).
The possibility of merely partial conception takes us to the second point about completeness and its lack.By 'complete conception' al-Fārābī means being in possession of a real definition, as when one conceptualizes human as rational mortal animal.An 'incomplete conception' would be either a partial definition, as when one conceptualizes human only as animal, or a merely nominal definition (one that mentions accidental features), as when one conceptualizes human as animal capable of laughing.The fact that conceptualization, whether complete or partial, counts as knowledge is another sign that al-Fārābī is working within Rosa's 'traditional account'.It looks very much like 'knowing by way of conception' is just grasping something, not unlike seeing.If there is no guarantee that the grasp will be complete, then that too is like seeing, since you can see something only in part.Again, all these points can apply to something more complex than a single term.If one is merely entertaining a proposition, that would involve conceptualizing it ('knowing what it means'), without assenting to it or to its negation.Though I do not know of a text on the issue, I assume that one could also 'incompletely' conceptualize a whole proposition, for instance if one were entertaining a proposition about humans and had only a nominal definition of human, or understood only one of the terms in the proposition.
What about complete and incomplete assent?In the passage quoted above, al-Fārābī tells us that assent in general is judging things outside the mind to be as they are in the mind.Here he is endorsing the sort of 'correspondence theory' of truth often ascribed to Aristotle; interestingly, he does so in terms of the matching of thoughts to the world, not the matching of statements to the world, as suggested by Aristotle at Metaphysics 1011b ("saying of what is that it is, and of what is not that it is not, is true"; see further Crivelli, Aristotle).This could be explained by saying that al-Fārābī is adjusting the definition of truth in light of Aristotle's doctrine in On Interpretation that words represent thoughts, and thoughts represent things (I take inspiration for this suggestion from David, "Correspondence Theory").In any case, believing that one's mental representation corresponds to the way things are is not enough for 'complete' assent.Also required is that the correspondence in fact obtains.In other words, that to which one assents must be true.Finally, it must be true in such a way that things could not be otherwise.
For further light on this last constraint, we should turn to another work by al-Fārābī, which has been well studied by Deborah Black, called On the Conditions of Certainty (Sharāʾit al-yaqīn, edited in the same volume as Burhān; see Black, "Knowledge and Certitude").He names no fewer than six conditions that need to be satisfied to reach complete or "absolute (ʿalā l-iṭlāq)" certainty, which partially overlap with what we have just seen (Sharāʾit 98): (1) It must be "judged of something that it is such-and-such".
(2) It must "suitably occur (yuwāfiqu) that it [sc.the judgement] corresponds to, rather than opposing, the existence of the thing outside [the mind]".(3) It must be "known that it corresponds".(4) It must be impossible that it not correspond.
(5) There is no time at which it does not correspond.( 6) All this happens essentially, not accidentally.
Is al-Fārābī effectively saying here that certain knowledge is a kind of belief, one that has a set of additional features?If so, he would be departing from the 'traditional account'.To answer this question we need to take a closer look at conditions 1-3, which Black calls respectively the belief, truth, and knowledge conditions. 4 On her reading, the first condition may be paraphrased as 'S believes that P' ("Knowledge and Certitude", 16), which seems fair enough.But as she points out, al-Fārābī is not using the Arabic term that would normally correspond to the Greek doxa, meaning 'mere belief'; that would be z ann, on which more shortly.Instead, he uses the rather generic term iʿtiqād, which I have rendered 'judgement'.He immediately goes on to offer two alternatives, raʾy ('opinion') and ijmāʿ ('consensus'), as alternative vocabulary to make the idea clear to his reader.But it is not only the verb for 'believing' that is important here, it is what al-Fārābī says about the content of the belief.Namely that one is judging 'of something that it is such-and-such', in other words, judging a predicate to hold of a subject.The fact that al-Fārābī is only interested in predicative judgements is hardly a surprise, since this is the type of proposition at stake in Aristotle's logic.We might then prefer to paraphrase the first condition as requiring that 'S judges that A holds true of B'.In other words, condition 1 requires that there is taṣ dīq, 'assent'.
As for condition 2, it simply affirms Aristotle's correspondence theory of truth.As Black suggests, we can take this to be an 'externalist' constraint on absolute certainty; to judge that animal holds true of human with absolute certainty requires that animal does in fact hold true of human in the external world.By contrast, condition 3 identifies an 'internalist' constraint on the belief.What al-Fārābī says about this condition is interesting: it is intended to rule out cases in which the person making the judgement "is not aware that what is judged [to be the case] corresponds; rather, as far as he is concerned (ʿindahu) it might be (ʿasā) that it does not correspond".As al-Fārābī goes on to say, the person in possession of absolute certainty knows that their judgement is 4 My thanks to Fedor Benevich and Abdurrahman Mihirig for helpful discussion of the passage.
not a "mere belief (z ann, again, the corresponding term to Greek doxa)", that is, a belief that might be either true or false.So what is being excluded here is belief of a very specific kind: beliefs in which the person with the belief is explicitly aware that the belief may not correspond to the way things are.
The kind of assent that interests al-Fārābī here is very different, as we can see from conditions 4, 5, and 6, which effectively spell out the meaning of "could not be otherwise" in the passage quoted earlier.These conditions are inspired by Aristotle's constraints on knowledge in the strict and proper sense (that is, demonstrative 'understanding', the translation urged by Burnyeat, "Aristotle"), which also require that such knowledge concern itself with eternal, necessary, and essential truths.So there is no sign here that al-Fārābī imagines us to be in possession of a mere belief (z ann) that could be turned into certain knowledge (ʿilm yaqīnī ) by adding something further, like justification.Rather īʿtiqād ('judgement'), which I take to be tantamount to ṭ aṣ dīq ('assent'), is an umbrella term covering cases of certain knowledge and avowedly uncertain belief.Black too denies that al-Fārābī's account can be assimilated to the JTB model, and as if she had already read Rosa's book almost twenty years ago, comments that for al-Fārābī "knowledge, like vision, requires direct epistemic contact with the object known at the time when it is occurring.And it is that direct relation to the object of one's belief that must be present to guarantee certitude" ("Knowledge and Certitude", 16, 22). 5 In keeping with all this, conditions 4-6 suggest that for al-Fārābī, as for Aristotle, knowledge in the strict and proper sense deals with a highly restricted range of propositions.For, according to these conditions, knowledge in the proper sense has to be about eternal, necessary, essential truths.Other cases of assent simply cannot be instances of "absolutely certain" knowledge, regardless how ideal our epistemic state may be.These would include assents to propositions about contingent and transient matters, like 'Socrates is in the marketplace'.Al-Fārābī does allow that one could have "incomplete certainty" about such propositions, and thus know them to be true in a less demanding sense.Just as one can have incomplete knowledge on the side of conception, for instance through a nominal definition, one can have incomplete knowledge on the side of assent.This 5 I have a slight disagreement with Black on the question of whether what is known is propositional.She notes, as I did, that condition 1 seems to be envisioning propositional judgements, indeed predications.She is worried though that his talk of 'awareness' suggests something more like 'acquaintance' with an object, which would be in tension with the propositional account.But I would take this to be simply a sign of al-Fārābī's underlying assumption that Aristotelian syllogistic is entirely compatible with a 'knowledge-as-seeing' account: what one 'sees' when grasping human is, for instance, that animal is predicated of human.This is not to deny that there are difficulties in bringing together the epistemology of the Posterior Analytics with that of the De Anima, only that al-Fārābī was surely convinced that it is possible.
would be non-scientific assent, in other words, assent that does not satisfy the demands of the Posterior Analytics.
The upshot is that, though some scholars do translate taṣ dīq as 'belief' (as in the very title of Lameer, Conception and Belief), it would be truer to al-Fārābī's presentation to fill out our earlier chart as follows: As Rosa liked to say, though, "call them as you like".We should not insist dogmatically on the terminology, as long as the philosophical point is clear.Namely that for al-Fārābī, assent is (as Rosa's historical account would predict) an umbrella term for two fundamentally different and mutually exclusive epistemic states.
While one could further explore al-Fārābī's views on the difference between these two states, at this point it will be more fruitful to turn to the more elaborate epistemology and philosophy of mind offered by Ibn Sīnā.He takes over from al-Fārābī the contrast between taṣ dīq and taṣ awwur, agreeing that these are the two varieties of knowledge (e.g. at Ibn Sīnā, Madkhal 30; Burhān 51).So far I have been translating two Arabic words indifferently as 'knowledge': ʿilm and maʿrifa.They have no systematically different connotations, though ʿilm was the usual rendering of the Greek epistêmê and thus had more tendency to connote the concept of properly scientific knowledge.Sometimes maʿrifa means something more like understanding or awareness.But in his masterwork the Healing, within the section corresponding to the Posterior Analytics, Ibn Sīnā stipulates an unusual meaning for the two words, using ʿilm for the universal knowledge that qualifies as fully demonstrative and maʿrifa for the subsuming of a particular under such universal knowledge (Ibn Sīnā, Burhān 283; Adamson, "On Knowledge", 283).To use an example already given by Aristotle, it would count as ʿilm to know that all triangles have internal angles equal to two right angles, and maʿrifa to know that this particular triangle has internal angles equal to two right angles.While this terminological stipulation is artificial and unusual, even in Ibn Sīnā's own writings never mind Arabic philosophical works more generally, it does show him making the same sort of contrast we have seen in al-Fārābī.ʿIlm corresponds to al-Fārābī's 'absolutely' or completely certain knowledge, and maʿrifa to a kind of incompletely certain knowledge.
For Ibn Sīnā universal, necessary, and essential knowledge is the best form of assent, and is a kind of certain assent (Ibn Sīnā, Burhān 51; Black, "Certitude, Justification", 122; Strobino, Avicenna's Theory, 41). 6He pursues the topic of certainty in a way that will look familiar from what we saw in al-Fārābī: (1) Certain assent comes with a second-order judgement (iʿtiqād) that it must be true.(2) A first kind of uncertain assent comes with no second-order judgement one way or the other as to whether it must be true.(3) A second kind of uncertain assent comes with a second-order judgement that the assent could be false.
The first kind of assent is used in demonstrative science, the second in dialectic and sophistical arguments, and the third in rhetoric.So this classification assent types gives us a rationale for the range of argument forms considered in Aristotle's Organon. 7et's consider the three types in turn, starting with the assent that must be true (cf.al-Fārābī's condition 4).In the spirit of Aristotle's constraint that demonstrative knowledge is of eternal truths (cf.al-Fārābī's condition 5), Ibn Sīnā elsewhere specifies that in the first kind of assent, the proposition cannot cease to be true (lā yumkinu zawāluhu, at Burhān 256; see Strobino,Avicenna's Theory,44).Notice that this is stronger than saying that the proposition does not cease to be true.Ibn Sīnā thinks that some propositions are always true, but only contingently so, for example the proposition 'the universe exists'.The universe is eternally made to exist by an extrinsic cause, namely God, and it is only this causal relation that renders the proposition true, when in itself it might have been false.Assent to this proposition is therefore of the third type: if one forms a secondorder judgement about it, this should be that it need not be true.What underwrites the modal features of these sorts of assent is Ibn Sīnā's essentialism, according to which each essence has certain features that are conceptually "inseparable" from it (Ibn Sīnā, Ishārāt 46; see further Benevich, Essentialität; Benevich, "Avicennan Essentialism"; Strobino, "Per Se, Inseparability").Animal and rational are inseparable from human, whereas existent is not inseparable from the universe.Thus Ibn Sīnā would also agree with al-Fārābī's condition 6 (essentiality).Indeed this is the really important condition, of which conditions 4 (necessity) and 5 (eternity) turn out to be mere corollaries.
As for Ibn Sīnā's second kind of assent, the kind used in dialectic and sophistry, its status relies on the merely psychological fact that the person making the assent has not considered its modal status.Ibn Sīnā says explicitly that if the person were to do so, they would judge either that the assent must be true, turning it into an assent of the first type; or that, while true, it could be false, turning it into an assent of the third type.To flesh out what is going on here, we might imagine a dialectical disputation where a premise is introduced on the grounds that it is 'widely accepted' (endoxon in Aristotle's Greek, mashhūr in Ibn Sīnā's Arabic), e.g.'pleasure is good'.If the premise is granted (musallam), then the argument can proceed without either party stopping to consider whether pleasure must be good (which is tantamount to the question of whether goodness is essential to pleasure), or might after all fail to be good.This is entirely consistent with the character of Aristotelian dialectic, where arguments are pursued without being grounded in scientific first principles and thereby established as essential and necessary truths.Instead, premises are simply conceded according to certain rules.Something similar happens in sophistical argumentation, except that the victim of the sophistry is being caught unawares, tricked into granting something they should not, instead of deliberately conceding a premise for argument's sake as in dialectic.
Finally, the third kind of assent occurs when one is fully aware that one's assent could be false.This might look strange at first glance: 'I assent to P, while cheerfully admitting that P might not be true'.But in fact most of our assenting is of this kind, since it covers all propositions about the accidental features of things.After all, an accidental feature is precisely a feature that could be separated from its bearer.Significantly, and in contrast to what we found in al-Fārābī, Ibn Sīnā does use the word z ann for this kind of assent, that is, the word that corresponds to the Greek doxa.So we can update our chart to reflect Ibn Sīnā's vocabulary, as follows: Where knowledge grasps propositions that are necessary, because they are essential, and belief grasps propositions that are at best contingently true (and at worse, just false).
One might wonder why scientific knowledge as applied to particulars (maʿrifa) falls under knowledge and not belief, since particulars are contingent.For instance, why would 'Socrates is rational' not qualify as an assent of the third type, just as much as 'Socrates is in the marketplace'?After all, it is not eternally the case that Socrates is rational, given that Socrates is not eternal.The full answer to this question is a bit complicated, but it boils down to the following.When one has scientific knowledge that human is (essentially) rational, one thereby knows that every particular human is (essentially) rational.So a necessary and eternally true proposition is still in play here, and it is what is known in the strict and proper sense; this knowledge is then deployed in the case of Socrates simply by recognizing that Socrates is a member of the kind human, that is, a particular falling under the relevant universal, and applying to him an attribute that is inseparable from human.
This connects to a notorious, and for our purposes highly relevant, discussion in Ibn Sīnā, concerning the question whether God knows particulars.As I have argued elsewhere (Adamson, "On Knowledge"), Ibn Sīnā's provocative response to this questionnamely that God only knows them 'in a universal way'is intimately related with what we've just seen.As a purely intellective being, God only grasps universals as such, but He can also know about particulars insofar as they are subsumed under those universals.It is only through His remote causal influence over them that He knows them to fall under just these universals.For example, because He is the ultimate cause of Socrates' existence, by knowing Himself God can know indirectly that Socrates is an existing human.Furthermore, He has the universal knowledge that human is rational, so He can know that Socrates is rational.Humans are in a different situation.We are possessed of sense-perception, which gives us direct access to particulars as such.So we can grasp Socrates as a particular human, and then straightforwardly apply to him any attributes that belong to the universals under which he falls.(Recent studies of this much-discussed topic include Nusseibeh, "Avicenna"; Lim, "God's Knowledge"; Kaukua, "Future Contingency"; Zadyousefi, "Adamson, Avicenna".) The reason I am going into this in such detail is that it is precisely in view of this faultline that Ibn Sīnā embraces the traditional account as laid out by Rosa.For Ibn Sīnā, knowledge in the strict and proper sense is assent of the first type, either in the form of demonstrative propositions (this sort of knowledge he calls ʿilm) or in cases where we apply such propositions to particular cases (this sort of knowledge he calls maʿrifa).Other kinds of assent concern themselves with non-scientific propositions, with contingent matters.Like Rosa, Ibn Sīnā wants to 'give belief its due', precisely because the vast majority of assents we make in everyday life fall into the latter category: Socrates is in the marketplace, that giraffe over there is tall, this cake would be delicious to eat.Ibn Sīnā even allows that we can have certainty about such assents and that we make them 'by necessity', not in the sense that the propositions at stake are in themselves necessary, but in the sense that we cannot help assenting.This would apply, for instance, to assenting on the grounds of sufficiently strong testimony, as well as assents that are based on sensation.
So if I see Socrates in the marketplace, or get powerful testimony that he is there at the moment, I may take myself to be certain that he is in the marketplace.I would make a second-order judgement that I cannot be wrong about this, even though I could never have scientific understanding of his being in the marketplace, since this is not necessary, essential, eternal, or universal, and in this sense lack 'knowledge' of it.A further situation arises when I simply grant something for the sake of argument, as in a dialectical encounter,8 or have a belief while lacking certainty.As Black notes, Ibn Sīnā will sometimes use the word z ann ('mere belief') for that latter category.Taking all this into account, we can now give our chart of epistemic states its final form: One reason to insist on a fundamental divide between knowledge and belief in Ibn Sīnā's theory of assent is that this divide is reflected in his philosophy of mind.According to him, we have only one psychological power that is not exercised through a bodily organ: intellect (ʿaql).Here we have a metaphysical basis for Ibn Sīnā's adherence to the traditional account, one he shares in common with many, if not most, other pre-modern thinkers who accept this account.Knowledge and belief are mutually exclusive mental states not just for the epistemic reasons we've been treating so far, but also because they are states of very different psychological powers.Beliefs about particulars always involve the brain, which is the seat of such powers as sensation, memory, and the imaginationall the powers that deal with particulars.A particularly important brain-centred power for our purposes is 'thinking (fikr)', which is the power that would be engaged when we are forming beliefs about particulars.While one function of 'thinking' is to prepare the way for the full-blown knowledge achieved by the intellect (Gutas, "Intuition and Thinking"), that kind of knowledge cannot be realized in the brain or in any other bodily organ.This is because it is universal in character, and Ibn Sīnā thinks that reception of a universal intelligible in a bodily organ would 'particularize' it, rendering it no longer suitable for scientific understanding.Indeed, this is the basis of Ibn Sīnā's proof for the immateriality of the rational soul.Because the human intellect grasps universal 'intelligibles (maʿqūlāt)', it must operate without using an organ, so it can survive and continue to enjoy knowledge after the death of the body (Alpina, Subject, Chapter 5; Adamson, "From Known").
Ibn Sīnā is aware that he is departing from Aristotle in some respects.He presents his own proof of the soul's immaterial activity as an improvement on Aristotle's, and criticizes Aristotle for suggesting that the soul is only the 'form of the body', since this would apply to the soul only insofar as it is the source of the body's perfection, not insofar as it has an intellective function in its own right.Still, Ibn Sīnā is basing himself throughout on originally Aristotelian premises, above all the rigorous demands placed on knowledge in the Posterior Analytics (see further Strobino, "Avicenna's Use"), which are also a driving consideration in al-Fārābī's epistemology.
In light of which, it is no surprise that what Rosa says about Aristotle in Thinking with Assent turns out to be directly relevant to these thinkers of the Islamic world.She writes: Judgement or taking-to-be-true (hupolēpsis) is, in turn, the generic cognitive mode under which, in De Anima, Aristotle groups more specific cognitive modes, including epistēmē and doxa … Epistēmē is not a matter of having justified true beliefs.If anything, as a grasp of demonstrations or as an explanatory capacity, epistēmē is closer to understanding than to justified true belief.
(Thinking with Assent, Section 1.2.ii) As Joep Lameer has shown (Lameer,Conception and Belief,(18)(19)(20)(21)(22)(23), in coining the term taṣ dīq al-Fārābī was thinking of a passage at Posterior Analytics 1.1, with the original Greek term being hupolambanein.9This might encourage us to modify Rosa's remark about hupolēpsis by inserting Arabic terminology in place of Greek, which would yield the following result: Judgement or taking-to-be-true (taṣ dīq) is a generic cognitive mode under which fall more specific cognitive modes, including ʿilm and z ann … ʿIlm is not a matter of having justified true beliefs.If anything, as a grasp of demonstrations it is closer to understanding than to justified true belief.
Which captures remarkably well what we have found in Ibn Sīnā.
Having said that, there is not a perfect fit between Rosa's own epistemology and that of Ibn Sīnā.As we saw, for Rosa sensation and perhaps testimony can give rise to knowledge in the strict and proper sense.So Rosa has a much more liberal understanding of 'knowledge' than the strict, intellective account of knowledge in Ibn Sīnā.She would apply the term to any primitive grasping of something that is 'present' to an epistemic subject.If we consult the final chart given above, this means that Ibn Sīnā's 'certain beliefs', or at least some of them, would count as knowledge in Rosa's terms.The difference between them is the result of significant commitments on Ibn Sīnā's part that Rosa does not share: in epistemology, his essentialism and its attendant modal theory, and his highly restrictive approach to 'knowledge' in the strict and proper sense, which is an inheritance from Aristotle; in philosophy of mind, the stark contrast he draws between an immaterially realized power for grasping universal intelligibles, and an embodied power that deals with particulars.None of this means, though, that Ibn Sīnā fails to exemplify Rosa's historical account.It is one thing to claim that knowledge and belief are mutually exclusive epistemic states, and another to say what belongs on which side of the ledger and why. 10s it happens, subsequent thinkers of the Islamic world challenged Ibn Sīnā's epistemology, and devised alternatives that come closer to the position adopted by Rosa.Two developments were especially relevant.First, a number of thinkersespecially Abū l-Barakāt al-Baghdādī (d. in the 1160s) and Fakhr al-Dīn al-Rāzīrejected wholesale the Aristotelian and Avicennan psychological theory according to which separate 'faculties' or 'powers' (Gk.dunameis, Ar. quwan) are the subjects of cognition.Rather, there is just the single 'self' that performs all cognition (Kaukua, "Self, Agent, Soul"; Tiryaki, "From Faculties"; Adamson and Benevich, "Fakhr al-Dīn").Against Ibn Sīnā's position, Fakhr al-Dīn al-Rāzī invoked one of the phenomena we have discussed above: the subsuming of a particular under a universal judgement.In such a case, Ibn Sīnā would say that brain-seated powers like sensation and memory, which grasp particulars, are working in tandem with the immaterial intellect, which grasps universals.Thus if we perceive that Socrates is a human, we can deploy the intellective knowledge (ʿilm) that human is rational to derive the knowledge (maʿrifa) that Socrates is rational.Fakhr al-Dīn argues that the real 'judger' or 'knower' for this latter proposition must just be a soul or self that can grasp both particulars and universals.How else would the two kinds of cognition be brought together?In one fell swoop, this plausible move undercuts Ibn Sīnā's bifurcated epistemology.
In a second development, some philosophersespecially those in the 'Illuminationist' (ishrāqī) tradition inaugurated by Suhrawardī (d.1191)stressed the analogy between directly knowing something and seeing something.In terms that are, again, strikingly similar to those used by Rosa, these Illuminationists spoke of a form of cognition they called 'knowledge by presence (h ud ūr)' (Kaukua, "Suhrawardī's Knowledge").They tend to define this sort of cognition negatively, as the absence of any impediment to grasping the cognized object.Just as we can see something so long as nothing is blocking our view, so the human mind will know its object as long as that object is 'present' to it and nothing prevents the cognition (e.g.interference from the body).This is also how we are aware of the contents of our own minds (including our beliefs) and our own bodies (Eichner, "Knowledge"), which are simply present to us.It is in the same way that God does after all know particulars as such, since nothing is hidden from Him, or ever could be (Benevich, "God's Knowledge").This second challenge to Ibn Sīnā even more obviously shifts the discussion in Rosa's direction.Now the phenomenology of unhindered sensation is being used as a paradigm of knowledge, doing away with the Avicennan stricture that knowledge in the proper sense has to do with universal, intelligible objects of cognition.
One could certainly extend this investigation by adding other thinkers of the Islamic world to Rosa's historical panorama.Figures like Fakhr al-Dīn al-Rāzī were critiquing Ibn Sīnā in light of epistemological ideas taken from the tradition of Islamic theology (kalām), which defined knowledge as a 'relation' between knower and known rather than the forming of a mental representation that corresponds to the way things are.This already suggests that the kalām theory maintained a mutually exclusive opposition between knowledge and belief, since knowledge is a sui generis relation on this account; but more discussion would be needed to establish that.On the Aristotelian side of the story, Ibn Rushd (Averroes, d. 1198) shared with Ibn Sīnā a wholehearted commitment to the epistemic strictures of the Posterior Analytics and its association of knowledge with universal, necessary truths.Indeed, this underlay Ibn Rushd's notorious claim that there is only one potential intellect for all of humankind.An intellect belonging to just one human would be embodied and thus particularized, therefore unable to perform the requisite kind of cognition.Furthermore, there can be only one act of grasping a given intelligible.As Stephen Ogden puts it, "the best way to explain how we can all think the same thing is that there is only one and the same thing that is thoughtin one intellect" (Ogden,Averroes on Intellect,109).One could hardly ask for a more dramatic instance of Rosa's historical thesis that knowledge and belief are mutually exclusive mental states: here knowledge as described in the Posterior Analytics is a cognitive state in an immaterial, universal intellect, while other kinds of thinking are realized in the brains of individual humans (Taylor, "Remarks").As these examples show, the widereaching and ambitious claim made in the first part of Rosa's Thinking About Assent is not just a historical thesis about the many philosophers she discusses, but also a hypothesis that can be tested on other philosophers whom she does not discuss.If the foregoing analysis is correct, then it is a hypothesis that is even more powerful and illuminating than she supposed.
Disclosure statement
No potential conflict of interest was reported by the author(s). | 9,083.2 | 2024-05-03T00:00:00.000 | [
"Philosophy"
] |
Heart Rate Measurement Based on 3D Central Difference Convolution with Attention Mechanism
Remote photoplethysmography (rPPG) is a video-based non-contact heart rate measurement technology. It is a fact that most existing rPPG methods fail to deal with the spatiotemporal features of the video, which is significant for the extraction of the rPPG signal. In this paper, we propose a 3D central difference convolutional network (CDCA-rPPGNet) to measure heart rate, with an attention mechanism to combine spatial and temporal features. First, we crop and stitch the region of interest together through facial landmarks. Next, the high-quality regions of interest are fed to CDCA-rPPGNet based on a central difference convolution, which can enhance the spatiotemporal representation and capture rich relevant time contexts by collecting time difference information. In addition, we integrate the attention module into the neural network, aiming to strengthen the ability of the neural network to extract video channels and spatial features, so as to obtain more accurate rPPG signals. In summary, the three main contributions of this paper are as follows: (1) the proposed network base on central difference convolution could better capture the subtle color changes to recover the rPPG signals; (2) the proposed ROI extraction method provides high-quality input to the network; (3) the attention module is used to strengthen the ability of the network to extract features. Extensive experiments are conducted on two public datasets—the PURE dataset and the UBFC-rPPG dataset. In terms of the experiment results, our proposed method achieves 0.46 MAE (bpm), 0.90 RMSE (bpm) and 0.99 R value of Pearson’s correlation coefficient on the PURE dataset, and 0.60 MAE (bpm), 1.38 RMSE (bpm) and 0.99 R value of Pearson’s correlation coefficient on the UBFC dataset, which proves the effectiveness of our proposed approach.
Introduction
Heart rate is a vital indicator of health monitoring. Heart rate measurement is essential for health management, disease diagnosis and clinical research. Traditional contact heart rate measurement methods, including electrocardiograms, require specific equipment such as ECG technology. The surface electrode is in direct contact with the patient's body surface, which brings inconvenience and discomfort to the patient, psychologically. In addition, ECG equipment is expensive, complicated to install, inconvenient to carry, and not suitable for real-time mobile heart rate monitoring. Remote photoplethysmography (rPPG) is a non-contact method to capture the periodic changes in skin color caused by the heartbeat through sensors such as cameras. The process of the method is as follows: (1) use the camera to capture the skin area (especially the face skin area) video; (2) analyze the periodic color changes in the skin area due to the blood flow pulsation caused by the heartbeat; (3) recover the corresponding rPPG signal and measure physiological indicators. The subtle color changes of skin in the video directly reflect changes in the rPPG signals, in other words, the deep learning models can capture the temporal variation of skin color to recover the rPPG signals. Today, severely affected by the COVID-19 pandemic, traditional heart rate measurement methods have greater safety risks. Close contact may cause infection, so that the study of non-contact rPPG signals measurement has attracted more attention [1][2][3].
With the continuous application of image video in the field of computer vision, AI has many applications in the field of healthcare such as HR measurement and blood pressure measurement [4], many non-contact heart rate measurement methods based on deep learning technology have begun to appear. Hsu et al. [5] proposed a method that used time-frequency representation to predict heart rate. The first step of their method was to detect the key points of the face and crop the region of interest, and then used the CHROM method to estimate the rPPG signals, and finally the representations were fed to VGG15 to estimate the heart rate. Špetlík et al. [6] proposed an end-to-end heart rate prediction model that included the extraction of the rPPG signals from the video sequence and the output of the predicted heart rate based on the rPPG signals received from the first stage. Niu et al. [7] aggregated the RGB signals in multiple regions of interest and converted them into spatial-temporal map representations, and then the spatial-temporal map representations were used to predict heart rate. Since the 2D convolution neural network only considers the spatial information of the video frame, many researchers began to use 3D convolution neural network to gain temporal information, which is significant for the rPPG signals recovery. Yu et al. [8] proposed PhysNet based on the spatiotemporal convolutional network, which can reconstruct precise rPPG signals from facial videos, and the final output of the model is the predicted rPPG signal. Tsou et al. [9] proposed Siamese-rPPG based on a Siamese 3D convolution network. Since different facial regions should reflect the same rPPG information, so they should be combined to improve the overall robustness for rPPG signals extraction. Lokendra et al. [10] proposed a novel denoising-rPPG network based on TCN architecture, which can model long sequences effectively. Moreover, Action Units (AUs) were used to denoise temporal signals by providing relevant information about the facial expression.
In order to extract more accurate rPPG signals, the attention mechanism has been widely used in the rPPG signals recovery [11,12]. Hu et al. [13] proposed a temporal attention mechanism for the extraction of the rPPG signals. The attention module strengthened the interaction capability of the previous and next frame information in the time dimension, which prevented abnormal changes in the temporal domain. Chen and McDuff [14] proposed an attention-based convolutional neural network to predict heart rate. The network combined an appearance model with a motion model; the attention mechanism was designed to direct the motion model to learn information more efficiently, the input of the motion model was normalized frame difference.
In summary, the flow of the existing non-contact heart rate measurement methods mainly includes three steps: ROI selection, rPPG signal extraction, and heart rate measurement. The ROI selection is the first step to obtain the rPPG signal, which directly effects the quality of the rPPG signal [15]. There are some disadvantages in the existing ROI selection methods. A small number of skin pixels will lead to large quantized uncertainty [16]. Additionally, the down-sampling process of the skin pixels is found to deteriorate the quality of the rPPG. To learn spatiotemporal features effectively, we analyze the forehead and cheek independently, considering the fact that the absolute size of the forehead and cheek is larger than other facial regions and those regions contain rich rPPG information [17], which makes it easy for network to learn spatiotemporal features. In terms of the rPPG signal extraction, due to the fact that the conventional 3D convolutional neural network cannot extract spatiotemporal features effectively, since it is susceptible to irrelevant factors such as lighting changes, we proposed a central difference convolutional network (CDCA-rPPGNet) with an attention mechanism to obtain more accurate rPPG signal from the output of the ROI selection process. Figure 1 shows an overview of the method used to predict the heart rate. Our contributions are summarized as below:
1.
We design the network based on central difference convolution to obtain rich time difference information for the extraction of the rPPG signals;
2.
We propose a more reliable ROI extraction method. Face detection is used to extract the forehead and cheek, then we splice them as the input of the model; 3.
The 3D-CBAM attention mechanism is designed to direct our network to learn information more efficiently and focus on more important features; 4.
Experiments based on PURE [18] and UBFC [19] datasets demonstrate the robustness and effectiveness of our network.
ROI Selection
All pixels of the face, except for the non-skin regions that contain no rPPG information, contribute to the rPPG signal. Most all heart rate estimation methods require the ROI selection. If we select the face region as the input of our model, the predicted rPPG signals will be interfered by the non-skin regions such as eyes and bread. Regarding only the cheek or forehead as input will ignore the other region that contains high-quality rPPG information, which will lead to a decrease in the robustness of the signal. In order to maximize the ratio of skin region, we splice the forehead and cheek as the input of our model. Face detection is used to extract the face region, precise facial landmarks are used to define the specific coordinates of the cropped regions. We use OpenFace [20] to get facial landmarks. For one thing, it offers high accuracy in face recognition; for another, it is easily integrated into today's mobile devices, which means it does not require high computing power. To define the ROI, ten points of the 68 facial landmarks are used, the motivation is that we want to get high-quality ROI with the simplest operation, the selected ten landmarks can obtain all pixels of the cheeks as possible and make the forehead avoid the influence of hair. As shown in Figure 2, eight points of them are used to define the cheek. The other two points are used to define the forehead. The method of extracting cheek refers to [21]. In Equations (1) and (2), the coordinates of the ten points are applied to accurately define the cheek and forehead. The cheek and forehead are down-sampled to 64 × 96 pixels and 32 × 96 pixels respectively.
where X * and Y * denote the x and y coordinates of the top-left vertex respectively. W * is the width of ROI, H * is the height of ROI. As the ROI is extracted in this way, we maximize facial pixels as the input of our network, which can weaken the impact of background and head movements as much as possible.
Central Difference Convolution
The process of extracting the rPPG signal is to obtain the temporal variation of skin color. In order to extract the spatiotemporal features more effectively, Yu et al. [22] first applied central difference convolution for the task of gesture recognition, which is beneficial for the rPPG signal recovery by better capturing time difference information [23]. Central difference convolution is developed based on conventional 3D convolution and is the basic unit of our network for heart rate measurement. Two steps are included in the operation process of traditional 3D convolution: (1) sampling the local receptive field C on the input feature map X; (2) aggregation of sampled values via weighted summation. Compared with conventional 3D convolution, temporal central difference convolution (3DCDC-T) enhances the spatiotemporal representation through considering temporal central differences. It captures numerous temporal contexts, which is suitable for heart rate measurement. The sample local receptive field C is divided into two types of regions: (1) the current time step R ; (2) the adjacent time steps R . Temporal central difference convolution also contains two steps similar to conventional 3D convolution; the output of 3DCDC-T could be calculated by the following Equation (3).
where p 0 represents the current position on both input and output feature maps while p n denotes the position in the local receptive field C. Hyperparameter θ ∈ [0, 1] tradeoffs the contribution between intensity-level and gradient-level information. 3DCDC-T is adopted in CDCA-rPPGNet for rPPG signal extraction.
3D Convolutional Block Attention Module
CBAM [24] attention mechanism is a lightweight and effective attention module that can be directly applied to convolutional neural networks. For feature maps generated by convolutional neural networks, CBAM calculates two dimensions of attention weights: channel and spatial, and then the corresponding elements of the attention map and the feature map are multiplied for adaptive feature refinement. We extended CBAM from 2D to 3D, its structure is shown in Figure 3. The channel attention module focuses on the feature channels that are decisive for the extraction of rPPG signals. As shown in Figure 4, the diagram of channel attention, the feature map F 3D is processed by channel attention module to 1D channel attention map M C 3D , which is multiplied by M C 3D to get F 3D .
The output F 3D can be obtained by the following formula: where σ represents the sigmoid function, AvgPool3D and MaxPool3D represent the averagepooling and maximum-pooling operations. MLP represents the multi-layer perceptron, the weights W 1 and W 0 are shared for both inputs. The symbol ⊗ represents the elementwise multiplication. Diagram of spatial attention is shown in Figure 5, the feature F 3D is processed by the spatial attention module, spatial attention module focuses on which pixels in the RGB image sequence have a greater contribution to the extraction of rPPG signal.
Hence, the output feature map F 3D can be calculated by: where σ represents the sigmoid function and f 7×7×7 denotes a 3D convolution layer with the filter size of 7 × 7 × 7. AvgPool3D and MaxPool3D represent the average-pooling and maximum-pooling operations. The symbol ⊗ represents the element-wise multiplication.
Network Architecture
To efficiently predict the rPPG signal, we propose an efficient network. An overview of CDCA-rPPGNet is presented in Figure 6. The first convolution layer aims to learn multiple combinations of color for more effective rPPG information. CDC_CBAM_BLOCK consists of two 3DCDC-T and 3D-CBAM, which is adopted to extract the rPPG information in the spatiotemporal domain. It helps to learn more effective temporal contexts and is less disturbed by non-skin pixels. The last layer aims to aggregate channels for final rPPG signals. AvgPool and AdpativeAvgPool are used to reduce the feature map size, which can weaken the impact of facial motion. The structure of CDCA-rPPGNet, including approximately 0.66 M parameters, is described in Table 1.
Loss Function
Our proposed network architecture is developed to recover rPPG signals with similarity in trend and to accurately estimate pulse peak time positions that match with ground truth rPPG signals. A suitable loss function needs to be designed to guide our networks. The frequently used loss functions are inappropriate for rPPG signals, since both rPPG signals (from facial video) and PPG signals (from contact measurement) reflect the blood volume changes, but their exact values are not the same. We only care about the trend of signals but ignore the specific value, so Negative Pearson Correlation is used as the loss function. Pearson Correlation indicates the linear similarity between rPPG signals and PPG signals, which can guide our networks to maximize the trend similarity. It is formulated as: where x is the predicted rPPG signals, y donates the ground truth rPPG signals, and T is the length of the rPPG signals.x andȳ denote the average values of two signals respectively. The Pearson Correlation coefficient, ranged from −1 to +1, indicates the similarity between two signals. The correlation of −1 represents a negative correlation between two signals, the value 0 represents no linear correlation. The value +1 represents a positive correlation between two signals. Our goal is that the predicted rPPG signals should be strongly correlated with the ground truth rPPG signals.
Results
To train and evaluate our network efficiently, experiments based on PURE and UBFC datasets were conducted. We used three performance metrics for heart rate measurement: mean absolute error (MAE), root mean squared error (RMSE), Pearson's correlation coefficient (R).
PURE:
The dataset contains ten subjects, every subject contains six different activities (steady, talking, slow head translation, fast head translation, small head rotation, medium head rotation). It is a fact that talking and head movements will cause large light variation, which makes it difficult to recover rPPG signals. In a total of 60 videos, every video is about 1 min and all videos were recorded by the industrial camera at 30 fps with 640 × 480 pixels spatial resolution. The ground truth PPG signals were captured with a finger pulse oximeter pulox CMS50E with a sampling rate of 60 Hz.
UBFC-rPPG: the dataset includes 42 videos of 42 subjects; each subject has one video, every video is about one minute. In the video recording process, in order to make the subject's heart rate change, the subject is asked to play a game that can trigger the heart rate change. The video was recorded by Logitech C920 HD Pro at 30 fps with a spatial resolution of 640 × 480 pixels. A finger pulse oximeter CMS50E was used to capture the ground truth rPPG signals with a 60 Hz sampling rate. Since UBFC-rPPG dataset is really small, we performed data augmentation on the sample, we flipped each sample left and right, which doubles the number of samples.
The examples of the two datasets are shown in Figure 7. For the PURE dataset, the training set contains six subjects (36 videos of six subjects) and the testing set contains the other four subjects (24 videos of four subjects). For the UBFC dataset, the training set contains 26 subjects (26 videos of 26 subjects) and the test set contains 16 subjects (16 videos of 16 subjects). Since the UBFC-rPPG dataset is extremely small, we performed data augmentation on the sample by flipping each sample left and right, which doubles the sample size.
Evalution Metrics
At present, three performance metrics are used for heart rate measurement: mean absolute error (MAE), root mean squared error (RMSE), Pearson's correlation coefficient (R).
1.
Mean absolute error (MAE) MAE is the average value of the absolute deviations of all estimated HR and the ground truth HR. It can be expressed as: 2. Root square mean error (RMSE) RMSE is the average value of the standard deviations of all estimated HR and the ground truth HR. It can be calculated by the following formula: where HR predict and HR gt denote the estimated HR and the ground truth HR respectively, and N is the number of heart rate samples. Cov(x, y) denotes the covariance of x and y.
Parameters Setting
For our experiments, due to the different frequencies of the ground truth rPPG signals and the video frame sampling frequency, we should normalize the rPPG signal first then subsample it to the video frame. The input of the model is the continuous ROI images x ∈ R 128×96×96×3 , ROI images were generated by the method mentioned in Section 2.1. To increase the sample size, we sample in steps of eight frames in the two datasets. The predicted rPPG signal was filtered by a sixth-order Butterworth bandpass filter, which was applied with a frequency between 0.7 to 2.5 Hz. The HR is estimated by the power spectral density (PSD) analysis from the filtered signals. We used the windows size of 10 s and step size of 2 s to calculate HR.
We used the Adam optimizer to train our model, the batch size is eight, the learning rate is set to 0.0002, the model is trained with 30 epochs. ReLU activation is used in each convolutional layer. All network components are implemented by Pytorch framework and trained with Quadro P6000.
Ablation Study
To evaluate the effectiveness of our model for non-contact heart rate measurement, we conduct experiments on the PURE dataset and the UBFC dataset. We perform the following ablation study: (1) replace 3DCDC-T with conventional 3D convolution and remove the attention module; (2) only remove the attention module. Some traditional methods and deep learning methods are used for comparison, the result shows that the proposed method outperforms other methods.
The experimental results of the PURE dataset are shown in Table 2. The results obtained by deep learning methods are generally better than traditional methods, and our proposed method achieved the best result. Existing deep learning models cannot capture rich temporal contexts well. In our model, we use 3DCDC-T to reduce the influence of noise. Besides, 3D-CBAM is used to help our network to learn more important features, which can improve the effectiveness of our method. The decrease of MAE and RMSE indicates that 3DCDC-T and the attention module are effective for recovering the rPPG signals, and the best result is achieved by combining them together.
We also evaluate the proposed method on UBFC dataset, and the results are shown in Table 3. Our proposed method achieves 0.60 MAE (bpm), 1.38 RMSE (bpm) and 0.99 R value of Pearson's correlation coefficient. Compared with existing deep learning methods, our proposed method outperforms the other deep learning baseline. Same as the result obtained on PURE dataset, 3DCDC-T and 3D-CBAM attention module are also helpful for the extraction of the rPPG signals on UBFC dataset.
In addition, as shown in Figure 8, we analyze the influence of 3DCDC-T and the attention module via the Bland-Altman plot. The estimated HR range is within the ground truth HR. It can be analyzed through the Bland-Altman that the HR distribution is more consistent with the addition of 3DCDC-T and the attention module. In conclusion, the plots visually indicate that our proposed method is more effective and robust. To show the results of our model more intuitively, we analyze the linear similarity of two signals via the scatter plot. As shown in Figure 9, these scatter plots indicate that the linear correlation of the HR predicted by our model and the ground truth HR is very strong. Figure 10 visualizes some examples of the estimated rPPG signals and corresponding power spectrum. It can be seen intuitively that the predicted signal and the ground truth rPPG signals almost have the same trend, which proves that our model is effective for remote heart rate measurement.
Discussion
We proposed a central difference convolution network with an attention mechanism to recover rPPG signals from facial video. We evaluated the proposed method on two public datasets. The experimental results indicate that our proposed method was more accurate than the previous method. The proposed method mainly included two steps: ROI selection, rPPG signals prediction using our model.
First of all, the ROI selection is very important to the recovery of the rPPG signals, because the region of the face in the video is originally small, which means that the skin pixels can be used to predict rPPG signals are really scarce. If the video is compressed, the quality of the skin pixels will be worse, which will make it difficult to estimate the rPPG signals. In theory, all skin pixels of the face have an effect on the extraction of signals, but the existing methods are difficult to use skin pixels efficiently. To solve this problem, we take the two ROIs that have the largest area and the most rPPG information as the input of our model. The cheek and forehead are affected by the background environment and light to varying degrees, but they both reflect the rPPG information. Using both of them can reduce the impact of the background on our signal extraction, which makes the rPPG signals we extract more robust. At the same time, we minimize the learning difficulty of our network, which can make the network more focused on learning useful information to recover the rPPG signals.
The next step is the construction of the neural network. The rPPG signals are essentially a kind of time series, the signals change with time. 3DCDC-T has the ability to better obtain the differences in time context, which is useful for the extraction of the rPPG signal, so we use 3DCDC-T to extract features. Changes in the rPPG signals are reflected from the subtle changes in the color of skin; they are relatively shallow features. Unlike video classification or action recognition, our task does not require a deep network to extract [33]. Therefore, we proposed a lightweight network. The attention mechanism is used to learn more important features for signals recovery. Some pixels on the cheek or forehead, such as bangs, do not contribute exactly to the rPPG signals recovery. So the attention mechanism can be added, which can guide our model to learn regions and channel features that are more essential.
In conclusion, the ROI selection is as important as the signals extraction method. Two steps of our method have an impact on heart rate prediction. Our proposed method showed an MAE of 0.46 bpm in the PURE dataset by combining two stages. We also achieved an MAE of 0.60 bpm in the UBFC dataset. Although our proposed method performed well in experiments, there are still some limitations. First of all, our preprocessing process requires accurate face detection and landmarks, which will not work normally if the subject's face is partially occluded or the subject is in motion. In addition, a deep learning based method requires a large number of training samples, which is a challenge for remote heart rate measurement. Since heart rate distribution is between 40 bpm and 150 bpm for most samples, our method fails to achieve good accuracy for predicting abnormal heart rate values. Although there are still some limitations, our proposed method has the potential to make a contribution to the practical use of assisted living, which is suitable for our measurement scenarios.
Conclusions
Remote HR measurement plays an important role in the field of healthcare. Due to the COVID-19 pandemic, remote HR measurement may be widely used in disease diagnosis and real-time heart rate monitoring. However, the process of most existing rPPG methods is too complicated to be applied to real scenarios. In this paper, we proposed a central differential convolutional network with an attention mechanism for more robust rPPG signal measurement from facial video. The preprocessing part uses face key point detection to segment and splice the regions of interest of the face. Compared with the conventional 3D convolution, the improved 3DCDC-T can estimate the rPPG signal more accurately by enhancing the spatiotemporal representation with an abundant temporal background. The attention mechanism can guide the network to learn more critical feature channels and spatial features for the rPPG signal recovery. On the one hand, our network only includes approximately 0.66 M parameters, which means we can easily deploy the model on the mobile device and, on the other hand, experimental results on two public datasets-PURE and UBFC-rPPG-demonstrate the effectiveness of our proposed method. Our model achieves an MAE of 0.46 bpm and an MAE of 0.60 bpm on the PURE dataset and the UBFC dataset, respectively, which is superior to other current methods. In the future, we will be looking to improve the robustness of the model in low-constraint environments, such as head movements, and reduce the impact of unbalanced HR distribution.
Conflicts of Interest:
The authors declare no conflict of interest. | 6,116.6 | 2022-01-01T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
Investigating UAV-Based Applications in Indoor–Outdoor Sports Stadiums and Open-Air Gatherings for Different Interference Conditions beyond 5G Networks
With the onset of 5G technology, the number of users is increasing drastically. These increased numbers of users demand better service on the network. This study examines the millimeter wave bands working frequencies. Working in the millimeter wave band has the disadvantage of interference. This study aims to analyze the impact of different interference conditions on unmanned aerial vehicle use scenarios, such as open-air gatherings and indoor-outdoor sports stadiums. Performance analysis was carried out in terms of received power and path loss readings.
Introduction
As technology is developing on a large scale, the 5th generation (5G) is the most advanced technology that can enable wireless communication between humans, sensors, and machines. This rapid evolution upgraded life in instant communication, quick interaction, and good quality of life. The major kits such as millimeter waves and heterogeneous networks lead a straight path in the research of 5G. When the first generation (1G) was introduced in 1979, it was only analog telecommunications, and then an upgradation with text messages came with the name second generation (2G). Now it is time for the fifth generation (5G) with improved data capacity. These 5G and beyond networks are required in urban, rural, and suburban areas. To fulfill the requirement of a good quality network, unmanned aerial vehicles (UAVs) have been used, which temporarily provide a network in regions such as indoor sports stadiums, outdoor sports stadiums, and open area gatherings [1,2].
According to channel measurement results, similar large-and small-scale parameters (SSP) must be obtained for two users with close intervals. It should change accordingly when the user changes or smoothly moves to a different terminal over time. Another effect is called the Doppler shift. It is caused by moving or changing the receiver and
• Channel Parameters
Channel parameters contain many parameters such as the following scenarios: UMi (urban micro to find the readings in sub-urban areas), UMa (urban macro to find the readings in urban areas), RMa (rural macro to find the readings in rural areas), and InH (indoor to find the readings in indoor or closed areas such as gathering places). In channel parameters, different scenarios are frequencies from 0.5 to 100 GHz. The frequencies considered are 28 GHz, 38 GHz, 60 GHz, and 72 GHz with human interference and rain. The variation in temperature, humidity, distance range (DR), type of environment, and rain rate in mm/h have been considered in channel parameters.
• Antenna Parameters
Antenna parameters contain the parameters for which it can control the antenna locations and the count of antennas required for optimal signals. For example, the number of counts of a transmitter (TX) and receiver (RX) antenna for better measurements of optimal waves and connections can vary.
•
Human blockage parameters Human blockage parameters have been considered due to power loss in the signal due to human interference in the channel. This human interference has been considered practical and has not been considered in an ideal case.
• Spatial Consistency Parameters
Spatial consistency parameters can also be considered. This is the software where the perfect measurements to manage and control the millimeter waves and 5G bandwidth can be found.
Background Survey
This paper discusses the usage of UAVs in indoor-outdoor sports stadiums and openair gatherings for millimeter wave frequencies of 5G and beyond communication networks. This paper also considers interference factors such as rain, buildings, vegetation, vehicles, and humans. A rigorous background survey has been conducted, and the existing literature has been summarized in Table 1. [1] 28 GHz, 38 GHz [2] 28 GHz, 86 GHZ [6] 38 GHz [7] 28 GHz, 60 GHz, 73 GHz [12] 28 GHz, 73 GHz [13] 28 GHz, 73 GHz, 140 GHz [14] 73 GHz [15] 28 GHz, 140 GHz [16] 28 GHz, 38 GHz, 60 GHz [Proposed] 28 GHz, 38 GHz, 60 GHz, 72 GHz represents the analysis of the mentioned cases in the cited references, while represents that the mentioned cases are not analyzed in the cited references. In [1], the author outlines the rationale for new millimeter wave cellular systems, methodology, and measurement gear, as well as a range of measurement data demonstrating that 28 and 38 GHz frequencies can be employed when using steerable directional antennas at base stations and mobile devices. In [2], the author examines the channel models used in 5G radio systems. The broad framework for channel models and the key differences between millimeter wave and microwave channel models are also discussed. In [3], the author investigates different channel models created for millimeter wave communication systems using the NYUSIM channel simulator. The created channels were analyzed for carrier frequencies of 28/73 GHz, MIMO antenna configurations from 2 × 2 to 64 × 64, and LOS/NLOS parameters. Based on stochastic geometry, the author develops an analytical model for downlink exposure in massive multiple-input multiple-output (MIMO) antenna networks for 5G. Then, the author analyzes different deployment scenarios of massive MIMO (e.g., cell-free, IoT, etc.). It can also benefit from realistic data representing the transmission gain after deploying massive 5G MIMO antennas into the 5G network [4]. In [5], the author evaluates the performance of the digital beam steering (DBS) precoder in millimeter waves multi-user multiple-input multiple-output (MIMO) systems. Using NYUSIM, realistic statistical features are calculated in 3D. In [6], the author examines how high temperatures, intense humidity, foliage, and more considerable raindrop size impact wireless communication in tropical regions using NYUSIM simulations. In [7], the author proposed a general approach to calculating the per-cell spectral efficiency of millimeter wave multicell single-stream systems. For 5G communications, the author explores the use of SSCM in unlicensed V bands (specifically 60 GHz) while considering both LOS and NLOS conditions. The NYUSIM channel simulator represents the channel characteristics of the 5G backhaul scenario [8]. In [9], the author discusses the use of UAVs in indoor and outdoor sports stadiums, open-air millimeter waves frequencies, and extreme interference factors such as rain, buildings, vegetation, vehicles, and people. Several weather factors are discussed in [10] regarding signal intensity in various settings and circumstances. Based on the NYUSIM simulator, predictions of the channel's performance are made. Using four frequencies, 30 GHz, 40 GHz, 60 GHz, and 80 GHz, the author evaluated the effectiveness of the channel and chose the best frequency for a tropical setting where rain attenuates between the transmitter and reception antenna. The author presents an analysis of the O2I penetration loss of millimeter waves channels at 28, 38, 60, and 73 GHz operating frequencies for different scenarios: Low loss/high loss and TX/RX antenna HPBW azimuth/elevation 10 • /15 • . The type of building (standard glass, wood, IRR glass, and concrete) and antenna properties affect channel characteristic O2I penetration loss [11]. In [12], the author compares three 5G channel models, i.e., QuaDRiGa, NYUSIM, and MG5G, from the perspectives of modeling methodologies, parameter settings, and channel simulations. He concludes that NYUSIM gives better results than other channel models and is also more suitable for the RMa scenario.
In [13], the author demonstrated that these new modeling capabilities reproduce realistic data when implemented in a Monte Carlo manner with NYUSIM 2.0, making it a useful measurement-based channel simulator for designing and evaluating fifth generation and beyond millimeter wave communication systems. In [14], the author created a two-level beamforming architecture for uniform linear arrays that takes advantage of the creation of spatial lobes. Simulations with the channel simulator NYUSIM were used to study the effect of subarray spacing on the spectral efficiency. The findings can be used to create antenna array topologies for 5G wireless systems. Several weather factors are discussed in [15] regarding signal intensity in various settings and circumstances. Based on the NYUSIM simulator, predictions of the channel's performance are made. Using four frequencies, 30 GHz, 40 GHz, 60 GHz, and 80 GHz, the author evaluated the effectiveness of the channel and chose the best frequency for a tropical setting where rain attenuates between the transmitter and reception antenna. An evaluation of multi-user massive multiple-input multiple-output (MIMO) systems is presented in this paper. The author examines a downlink single-cell scenario that uses linear precoding for zero-forcing (ZF) and conjugate beamforming (CB). A statistical 5G propagation channel was used for this evaluation, developed by NYUSIM [16]. The author performed on [4] simulated spatial channel modeling features for 73 GHz millimeter wave band using NYUSIM. The spatial consistency channel model for moving users and the channel model for static users without consideration of spatial consistency are compared with different channel parameters for LOS and non-LOS (NLOS) environments. Based on stochastic geometry, the author develops an analytical model for downlink exposure in massive multiple-input multipleoutput (MIMO) antenna networks for 5G. Then, the author analyzes different deployment scenarios of massive MIMO (e.g., cell-free, IoT, etc.). It can also benefit from realistic data representing the transmission gain after deploying massive 5G MIMO antennas into the 5G network [17]. The author uses NYUSIM software to analyze the performance of MIMO channels at 77 GHz under different configurations. Simulations are conducted in an NLOS environment with MIMO uniform linear arrays at the transmitter and receiver sides [18]. Using the NYUSIM tool [19], the author simulates a 5G channel at the E-band frequency. The urban microcell (UMi) environment was used in this study to assess the effects of massive MIMO and MIMO on LOS and NLOS. In both LOS and NLOS environments, directional and omnidirectional antennas, power delay profiles (PDPs), root mean squares (RMSs) delay spread, and small-scale PDPs were considered. In [20], the author presents a channel model for 5G millimeter wave cellular communication for urban microcells operating at 28 GHz in LOS conditions using multiple antenna elements at the transmitter and receiver. Different parameters affecting the channel have been considered in the simulation using NYUSIM software developed by NYU Wireless.
The author of [21] created a 3D spatial statistical channel model for millimeter wave and sub-THz frequencies in LOS and NLOS scenarios in an interior office building using comprehensive 28 and 140 GHz observations. In [22], the author investigated NYURay, a 3D millimeter wave and sub-THz ray tracer. This tracer has been calibrated for wireless channel propagation measurements at 28, 73, and 140 GHz in indoor, outdoor, and manufacturing settings. Root mean squares (RMSs) delay spread, and small-scale PDPs were considered. Indonesia's capital, Jakarta, is a tropical region with high rainfall; therefore, to support the success of initial 5G development planning, it is important to be aware of the channel characteristics over the frequency in Jakarta. Based on simulation results of the NYUSIM channel simulator in [22], the author examines how the characteristics of 5G channels are expressed in the power delay profile (PDP). Using the NYUSIM channel simulator, the author investigates how peripheral variations related to Baghdad city affect millimeter wave transmissions for different frequency bands at millimeter wave. In this study, the diurnal variation in atmospheric conditions limits the performance of millimeter wave transmissions, and critical design insights are pointed out when designing 5G systems [23]. In [24], the author examines millimeter wave communications for 5G. To meet the challenges of millimeter wave communication, architectures and protocols must be redesigned, including integrated circuits and system design, interference management and spatial reuse, antiblockage, and dynamics related to mobility. Current solutions have been reviewed and compared based on effectiveness, efficiency, and complexity. The author explores how 3GPP approaches challenges related to 5G millimeter wave standardization and how solutions can help achieve broader bandwidths and harness some of the inherent benefits of higher-frequency communications [25]. The author discusses several issues that must be resolved to use beamforming for access to millimeter wave frequencies, presents solutions for initial access, and validates them by simulations, showing that millimeter wave frequencies can be used for reliable network access [26]. The author discusses the potential benefits and challenges of the 5G wireless heterogeneous network (HetNet) incorporating massive MIMO and millimeter wave technologies [27]. In [28], the author discusses millimeter wave cellular systems coverage and capacity, emphasizing their key distinguishing characteristics, including the limited scattering nature of the channels and how RF beamforming strategies, such as beam steering, can provide highly directional transmission with minimal hardware complexity. The first performance evaluation of TCP congestion control in next generation millimeter wave networks is presented in [29]. In addition, the framework incorporates detailed models of the millimeter wave channel, beamforming, and tracking algorithms based on real measurements of New York City channels and detailed ray trace analysis.
Furthermore, 5G improves throughput, latency, network reliability, energy efficiency, and connectivity. In addition, the proliferation of smartphones, Internet of Things (IoT) devices, and new multimedia applications have increased the amount of mobile data, which has led to an increase in terahertz technology, communication technology, and 6G wireless communication solutions. Terahertz (THz) technology is expected to play an important role in the development of wireless communication in 6G and beyond with its ability to provide high-speed data transfer and low latency. However, the system faces many challenges, including limitations in internal and external environments due to path loss, reduced access to the environment's natural process and absorption, and standard processes of 5G and 6G networks that software vulnerabilities can attack. The key to meeting these challenges is using artificial intelligence (AI) to create stronger, more efficient terahertz communication protocols. The scope of related work with the advanced technologies is highlighted in Table 2 [30][31][32][33][34].
Contributions
The millimeter wave band has become prominent with the advent of 5G and beyond communication networks. This study examines the millimeter wave bands working frequencies. The main contribution of this paper is as follows:
•
In this paper, we have considered all possible working frequencies of millimeter wave communication networks such as 28 GHz, 38 GHz, 60 GHz, and 72 GHz. • This work examines the effect of multiple interferences in millimeter wave communication networks such as O2I penetration, rainfall, and human blockage. • This paper has also worked on UAV-based application use cases scenarios such as indoor-outdoor sports stadiums and open-air gatherings where the need for quality of service is of prime concern. • In this work, we also analyzed the optimal number of antennae in all possible use case scenarios such as indoor-outdoor sports stadiums and open-air gatherings within different levels of interference conditions.
Organization
The organization of this paper is described as follows: Section 1 gives an insight into the introduction of the 5G and beyond communication networks. Section 2 comprehensively describes scenarios, frequencies, environment, antenna, spatial consistency, and human blockage parameters. The simulation results of the analyzed scenarios and conditions are presented in Section 3. Future perspectives and scope of the research work are depicted in Section 4. Section 5 ends the paper with a conclusion.
Millimeter Wave Scenario Parameters
The ultra-wideband (millimeter wave) scenario simulation system NYUSIM allows an accurate model of wireless communication systems. For the millimeter wave scenario, some primary considerations that can be set in NYUSIM are as follows: • Carrier frequency: The frequency at which the signal is transferred is the carrier frequency. Carrier frequencies in millimeter wave systems are typically between 24 and 100 GHz. • Bandwidth: The term "bandwidth" describes the spectrum of frequencies used to transmit the signal. The bandwidth of millimeter wave mm-systems is frequently numerous (up to several gigahertz).
Urban Micro and Urban Macro Scenario
There are different environmental cases or scenarios that are being used to analyze the propagation of the signals, UMa (urban macrocell) and UMi (urban microcell) are among them, as shown in Figure 1. Urban areas contain more traffic, buildings, and people than rural areas. Thus, the propagation of the 5G network will have to face some complex scenarios. To solve these complex problems, microcells contain more antennas in urban areas. As the survey compares, the population in a rural area is less than that in an urban area. As a result, the perimeter of human blockage becomes more crux and interesting, which provides the opportunity for this research paper to redefine the perimeter of human blockage and the study of frequency, environment (outdoor/indoor), rainfall, human blockage, and O2I penetration. Considering the case of indoor stadiums, this gives the insight to determine if the density population is high. Still, the interruption due to buildings, rain, and trees is low, which helps this research in a more classified way in which the work is performed very straightforwardly. The perimeter of path loss and received power vary differently in UMa and UMi cases because of indoor and outdoor factors. Outdoors, the perimeter, such as rain, building interruptions, window interruptions, and human blockages, is at a peak level for 5G network propagation waves. As a result, it produces great differences in an outcome generated through the simulation (from NYUSIM software) [18,19].
Rural Macro Scenario
This paper analyzes various parameters such as environment (outdoor/indoor), rainfall, human blockage, and O2I penetration, along with various frequencies and different numbers of antennas. RMa is a rural macro scenario, as shown in Figure 2, in which the population is considered low, and the number of buildings and windows (of glass) is considered low. However, the number of trees is taken at high parameters, which fits best for rural areas. The consideration of population is high because of more numbers of agriculture fields, which helps to analyze the accumulated data on human interference and the number of antennas used. As the number of buildings is considered low, several windows (made up of glass) are at minimum consideration, which helps to analyze the parameters such as environment (outdoor/indoor), rainfall, human blockage, and O2I penetration. According to the reference, the amount of research on human blockage has been performed minimally, but here, with the help of the condition of RMa, it is more alarming and makes this paper more eye-opening in the upcoming development on 5G and beyond the network. Hence, a rural microcell is a scenario in the environmental case for 5G network propagation. As rural areas contain less traffic and fewer buildings and people, the usage of propagation is simple compared with urban areas. Thus, fewer macro cells are placed with a smaller number of antennas and sub antennas such as a microcell [10,11].
Rural Macro Scenario
This paper analyzes various parameters such as environment (outdoor/indoor), rainfall, human blockage, and O2I penetration, along with various frequencies and different numbers of antennas. RMa is a rural macro scenario, as shown in Figure 2, in which the population is considered low, and the number of buildings and windows (of glass) is considered low. However, the number of trees is taken at high parameters, which fits best for rural areas. The consideration of population is high because of more numbers of agriculture fields, which helps to analyze the accumulated data on human interference and the number of antennas used. As the number of buildings is considered low, several windows (made up of glass) are at minimum consideration, which helps to analyze the parameters such as environment (outdoor/indoor), rainfall, human blockage, and O2I penetration. According to the reference, the amount of research on human blockage has been performed minimally, but here, with the help of the condition of RMa, it is more alarming and makes this paper more eye-opening in the upcoming development on 5G and beyond the network. Hence, a rural microcell is a scenario in the environmental case for 5G network propagation. As rural areas contain less traffic and fewer buildings and people, the usage of propagation is simple compared with urban areas. Thus, fewer macro cells are placed with a smaller number of antennas and sub antennas such as a microcell [10,11].
Rural Macro Scenario
This paper analyzes various parameters such as environment (outdoor/indoor), rainfall, human blockage, and O2I penetration, along with various frequencies and different numbers of antennas. RMa is a rural macro scenario, as shown in Figure 2, in which the population is considered low, and the number of buildings and windows (of glass) is considered low. However, the number of trees is taken at high parameters, which fits best for rural areas. The consideration of population is high because of more numbers of agriculture fields, which helps to analyze the accumulated data on human interference and the number of antennas used. As the number of buildings is considered low, several windows (made up of glass) are at minimum consideration, which helps to analyze the parameters such as environment (outdoor/indoor), rainfall, human blockage, and O2I penetration. According to the reference, the amount of research on human blockage has been performed minimally, but here, with the help of the condition of RMa, it is more alarming and makes this paper more eye-opening in the upcoming development on 5G and beyond the network. Hence, a rural microcell is a scenario in the environmental case for 5G network propagation. As rural areas contain less traffic and fewer buildings and people, the usage of propagation is simple compared with urban areas. Thus, fewer macro cells are placed with a smaller number of antennas and sub antennas such as a microcell [10,11]. Millimeter wave beamforming using the UAV-based scenario shown in Figure 3 depicts users with interference factors such as rain, buildings, vegetation, vehicles, and people, creating a unique interference environment. Rain is one of the most prominent sources of interference, with the high humidity levels in cities leading to more rain and interference. Tall buildings, dense vegetation, and many vehicles also create interference as they can block or weaken the signal. People moving around the city can also cause interference as their bodies can absorb or reflect signals. UAVs can be used to extend the range of the network, providing coverage to areas that are difficult to reach with towerbased networks. Additionally, the use of millimeter wave beamforming technology in UAVs provides several advantages. It allows for higher data rates than traditional terrestrial networks because the signal is focused on a beam and is less affected by interference. Millimeter wave beamforming using the UAV-based scenario shown in Figure 3 depicts users with interference factors such as rain, buildings, vegetation, vehicles, and people, creating a unique interference environment. Rain is one of the most prominent sources of interference, with the high humidity levels in cities leading to more rain and interference. Tall buildings, dense vegetation, and many vehicles also create interference as they can block or weaken the signal. People moving around the city can also cause interference as their bodies can absorb or reflect signals. UAVs can be used to extend the range of the network, providing coverage to areas that are difficult to reach with tower-based networks. Additionally, the use of millimeter wave beamforming technology in UAVs provides several advantages. It allows for higher data rates than traditional terrestrial networks because the signal is focused on a beam and is less affected by interference.
Simulation Results
This section simulates and analyzes a UAV-based millimeter wave communication network. The analysis has been performed on frequencies such as 28 GHz, 38 GHz, 60 GHz, and 72 GHz. Considering these frequencies, results have been observed in the form of different conditions having all the possible combinations of the human body and rainfall interference that can be applied to different scenarios such as indoor sports stadiums, outdoor sports stadiums, and open area gatherings. This simulation also reflects the optimal number of antennas to provide a better network through better receiver power and path loss. It has been observed that if the signal wave is blocked due to the presence of the human body, the received power decreases and the power loss increases. The simulation parameters used for analyzing the work are mentioned in Table 3. This work has also been analyzed based on the following different interference conditions: Condition 1 (Human Blockage on and Rain Fall off): This is the interference condition in which the effect of rainfall is not considered, while the effect from the presence of the human body is considered depending upon the density of the user's area. Different interference conditions affect the analyzed values of path loss and received power. Since the effect of rain is off in this case and only human blockage is considered, this condition has been used specifically in indoor sports stadiums where there is a huge density of humans and no possibility of rain.
Condition 2 (Human Blockage off and Rainfall off): This interference condition has been considered the ideal case in which rainfall and the human body are not considered. This suggests no interruption between t-x and r-x antennas. The maximum value of received power and the minimum value of path loss in this condition are expected. This has minimum interference, so it has considered open area gatherings in rural areas where human participation is significantly less and there is no rainfall interference.
Condition 3 (Human Blockage on and Rainfall on): In contrast, considering the interference is in a way keeping the human body and rainfall interference. In this situation, the hindrance is maximum, so path loss would be maximum and received power would be minimum and considered a worst-case scenario. This is also considered in urban areas such as open sports stadiums with maximum human blockage and rain.
Condition 4 (Human Blockage off and Rainfall on):
The main study of propagation waves is to observe how they affect rain and the areas where human density is lowest, such as rural areas, and can be considered as open area gatherings in the rain where human density is low. In this condition, there is only rainfall interference and no human interference.
Indoor Sports Stadium
Nowadays, there is a huge requirement for better signals and good communication speed. This requirement becomes difficult to fulfill in the case of too many users in a particular region. Therefore, to solve this problem, some results have been researched and have shown the best number of antennas required to solve this problem. Indoor sports stadiums include a huge density of humans, so it is considered human blockage here. There is no scope for rain in an indoor sports stadium [9,20]. Table 4 shows the optimal number of antennas for indoor sports stadiums. In an indoor sports stadium, the work is performed to analyze the optimal number of antennas required for better quality of services at different working frequencies with different interference conditions. This analysis is performed based on factors such as received power and path loss. The analysis is performed regarding an optimal number of antennas for better-received power and path loss. If a human density is much less than it is being reflected by Condition 2. After the simulation work, it is concluded from Table 4 that for a 28 GHz working frequency, the optimal number of antennas for Conditions 1 and 2 is 2 in case of both betterreceived power and path loss. Similarly, if the millimeter wave is of 38 GHz frequency, the optimal number of antennas required for better-received power, and path loss is 4 for Condition 1 and 2 for Condition 2. With the signal having a frequency of 60 GHz, the optimal number of antennas required for better-received power and path loss is 4 for Condition 1, and for Condition 2 it is 4 for received power and 2 for path loss.
Outdoor Sports Stadium
In this case, both possibilities with human and rain interference are considered. Table 4 shows the significant and optimal number of antennas for factors such as received power and path loss. This varies with the amount of human density and the amount of rainfall [6,10,21]. Table 5 reflects the number of optimal antennas for better-received power and path loss concerning the different millimeter wave frequencies. Considering the simulation results, it can be concluded that the number of optimal antennas at 28 GHz frequency is 2 for all conditions for both received power and path loss. Similarly, in the case of 38 GHz frequency and received optimal power number of antennas for Condition 1 is 4; for Condition 2 it is 2; for Condition 3 it is 4; and for Condition 4 it is 4. Similarly, for path loss, an optimal number of antennas for 38 GHz is 2 for Conditions 1 and 2, and several antennas are 4 for Condition 3, and 2 for Condition 4. Similarly, for millimeter wave frequency, 60 GHz optimal number of antennas for better-received power is 4, 2, 4, 4 for Conditions 1, 2, 3, 4, respectively, and for better path loss, the optimal number of antennas is 4, 2, 4, 4 for Conditions 1, 2, 3, 4, respectively. If the millimeter wave frequency is 72 GHz, then the optimal number of antennas for better-received power is 8, 4, 8, 8 for Conditions 1, 2, 3, 4, respectively, and for better path loss the optimal number of antennas is 8, 2, 8, 8 for Conditions 1, 2, 3, 4, respectively.
Open Area Gatherings
Open area gatherings such as rallies, functions, and parties are also important to consider. Specifically, we consider rural gatherings such as rallies and parties. The possibility of having all possible cases of human and rain interferences is in this application [22,23]. Table 6 significantly reflects the efficient number of antennas used in open area gatherings for better-received power and path loss. Human density is less, so the number of antennas in all cases has been observed to be less compared to Tables 4 and 5. Considering all the simulation results, it has been observed that the number of optimal antennas for better-received power and path loss in the case of millimeter wave with the frequency of 28 GHz is 2 for all Conditions 1, 2, 3, 4. Similarly, in the case of 38 GHz frequency and received power, the optimal number of antennas for Condition 1 is 2; for Condition 2 it is 2; for Condition 3 is 4; and for Condition 4 is 2. Similarly, for path loss, the optimal number of antennas for 38 GHz is 2 for Conditions 1 and 2, and the number of antennas is 2 for Condition 3, and 2 for Condition 4. Similarly, for millimeter wave frequency, 60 GHz optimal number of antennas for better-received power is 4, 2, 4, 2 for Conditions 1, 2, 3, 4, respectively, and for better path loss, the optimal number of antennas is 4, 2, 4, 4 for Conditions 1, 2, 3, 4, respectively. If the millimeter wave frequency is 72 GHz, then the optimal number of antennas for better-received power is 4, 2, 4, 4 for Conditions 1, 2, 3, 4, respectively, and for better path loss the optimal number of antennas is 4, 4, 4, 4 for Conditions 1, 2, 3, 4, respectively. Figure 4 shows the received power at 28 GHz frequency. Some examples have been considered indoor sports stadiums. For indoor sports stadiums, the condition of no rain and human blockage interference is considered the most suitable. Following it, the received power in this case has been observed as −61.27 watts. The best result has been observed when there is no human blockage and no rain, having received power in the case of an indoor sports stadium is −51.545 watts, and the minimum received power has been observed when both human blockage interference and rain is −59.99 watts. Thus, overall received power in the case of indoor sports stadiums has decreased by 16 Figure 4 shows the received power at 28 GHz frequency. Some examples have been considered indoor sports stadiums. For indoor sports stadiums, the condition of no rain and human blockage interference is considered the most suitable. Following it, the received power in this case has been observed as −61.27 watts. The best result has been observed when there is no human blockage and no rain, having received power in the case of an indoor sports stadium is −51.545 watts, and the minimum received power has been observed when both human blockage interference and rain is −59.99 watts. Thus, overall received power in the case of indoor sports stadiums has decreased by 16.38% while considering human blockage and rain interference. Similarly, in the case of outdoor sports stadiums, the received power decreased by 20.27% by considering the worst-case scenario compared to the ideal situation. Similarly, in the case of open gatherings, this value has decreased by 14.8% in the worst-case scenario. Figure 5 defines the path loss at 28 GHz. In the case of an indoor sports stadium, the path loss is minimum when both human blockage and rain interference have not been taken and will be considered an ideal case. Therefore, in the case of indoor sports stadiums, path loss in the worst-case scenario (when there is a human blockage and rain interference is there) has been observed to increase by 10.3%. Similarly, in the case of an outdoor sports stadium, path loss increases by 12.41% in the worst case concerning the ideal case. Similarly, in the case of open gatherings, path loss was observed to increase by 9.82%. Figure 5 defines the path loss at 28 GHz. In the case of an indoor sports stadium, the path loss is minimum when both human blockage and rain interference have not been taken and will be considered an ideal case. Therefore, in the case of indoor sports stadiums, path loss in the worst-case scenario (when there is a human blockage and rain interference is there) has been observed to increase by 10.3%. Similarly, in the case of an outdoor sports stadium, path loss increases by 12.41% in the worst case concerning the ideal case. Similarly, in the case of open gatherings, path loss was observed to increase by 9.82%. Figure 6 reflects the received power at 38 GHz. In indoor sports stadiums, the received power is maximum when both human blockage and rain interference are not featured and will be considered an ideal case. Thus, in the case of an indoor sports stadium, the path received power in the worst-case scenario (when human blockage interference and rain is featured) has been observed to decrease by 15.3%. Similarly, in the case of outdoor sports stadiums, received power decreases by 4.8% in the worst case concerning the ideal case. Similarly, in the case of RMA, the received power was observed to decrease Figure 6 reflects the received power at 38 GHz. In indoor sports stadiums, the received power is maximum when both human blockage and rain interference are not featured and will be considered an ideal case. Thus, in the case of an indoor sports stadium, the path received power in the worst-case scenario (when human blockage interference and rain is featured) has been observed to decrease by 15.3%. Similarly, in the case of outdoor sports stadiums, received power decreases by 4.8% in the worst case concerning the ideal case. Similarly, in the case of RMA, the received power was observed to decrease by 12.8%. Figure 6 reflects the received power at 38 GHz. In indoor sports stadiums, the received power is maximum when both human blockage and rain interference are not featured and will be considered an ideal case. Thus, in the case of an indoor sports stadium, the path received power in the worst-case scenario (when human blockage interference and rain is featured) has been observed to decrease by 15.3%. Similarly, in the case of outdoor sports stadiums, received power decreases by 4.8% in the worst case concerning the ideal case. Similarly, in the case of RMA, the received power was observed to decrease by 12.8%. Figure 7 defines the path loss at 38 GHz. In the case of indoor sports stadiums, the path loss is minimum when both human blockage and rain have not been taken and will be considered an ideal case. Therefore, in the case of indoor sports stadiums, the path loss in the worst-case scenario (when there is a human blockage and rain interference is there) has been observed to increase by 9.8%. Similarly, in the case of outdoor sports, stadium path loss increases by 3.25% in the worst case concerning the ideal case. Similarly, in the case of open gatherings, path loss was observed to increase by 8.3%. Figure 7 defines the path loss at 38 GHz. In the case of indoor sports stadiums, the path loss is minimum when both human blockage and rain have not been taken and will be considered an ideal case. Therefore, in the case of indoor sports stadiums, the path loss in the worst-case scenario (when there is a human blockage and rain interference is there) has been observed to increase by 9.8%. Similarly, in the case of outdoor sports, stadium path loss increases by 3.25% in the worst case concerning the ideal case. Similarly, in the case of open gatherings, path loss was observed to increase by 8.3%. Figure 8 reflects the received power at 60 GHz. In indoor sports stadiums, the received power is maximum when both human blockage and rain interference are not featured and will be considered an ideal case. Thus, in the case of an indoor sports stadium, the path received power in the worst-case scenario (when human blockage interference and rain is featured) has been observed to decrease by 2.6%. Similarly, in the case of outdoor sports stadiums, received power decreases by 17 Figure 8 reflects the received power at 60 GHz. In indoor sports stadiums, the received power is maximum when both human blockage and rain interference are not featured and will be considered an ideal case. Thus, in the case of an indoor sports stadium, the path received power in the worst-case scenario (when human blockage interference and rain is featured) has been observed to decrease by 2.6%. Similarly, in the case of outdoor sports stadiums, received power decreases by 17.6% in the worst case concerning the ideal case. Similarly, in the case of open gatherings, received power was observed to decrease by 16.6%. the path received power in the worst-case scenario (when human blockage interference and rain is featured) has been observed to decrease by 2.6%. Similarly, in the case of outdoor sports stadiums, received power decreases by 17.6% in the worst case concerning the ideal case. Similarly, in the case of open gatherings, received power was observed to decrease by 16.6%. Figure 9 defines the path loss at 60 GHz. In the case of indoor sports stadiums, the path loss is minimum when both human blockage and rain have not been taken and will be considered an ideal case. Therefore, in the case of an indoor sports stadium, the path loss in the worst-case scenario (when there is a human blockage and rain interference) has been observed to increase by 1.9%. Similarly, in the case of outdoor sports, stadium path loss increases by 11.8% in the worst case concerning the ideal case. Similarly, in the case of open gatherings, path loss was observed to increase by 11.17%. Figure 9 defines the path loss at 60 GHz. In the case of indoor sports stadiums, the path loss is minimum when both human blockage and rain have not been taken and will be considered an ideal case. Therefore, in the case of an indoor sports stadium, the path loss in the worst-case scenario (when there is a human blockage and rain interference) has been observed to increase by 1.9%. Similarly, in the case of outdoor sports, stadium path loss increases by 11.8% in the worst case concerning the ideal case. Similarly, in the case of open gatherings, path loss was observed to increase by 11.17%. Figure 10 reflects the received power at 72 GHz. In the case of an indoor sports stadium, the received power is maximum when both human blockage and rain interference are not featured and will be considered an ideal case. Thus, in the case of an indoor sports stadium, receiving power in the worst-case scenario (when human blockage interference and rain is featured) has been observed to decrease by 3 Figure 10 reflects the received power at 72 GHz. In the case of an indoor sports stadium, the received power is maximum when both human blockage and rain interference are not featured and will be considered an ideal case. Thus, in the case of an indoor sports stadium, receiving power in the worst-case scenario (when human blockage interference and rain is featured) has been observed to decrease by 3.33%, concerning the ideal case. Similarly, received power decreases by 5.7% in an outdoor sports stadium. Similarly, in the case of open area gathering, received power was observed to decrease by 22.17%.
Optimal Solutions for Different Scenarios of millimeter wave UAV-Based Networks
are not featured and will be considered an ideal case. Thus, in the case of an indoor sports stadium, receiving power in the worst-case scenario (when human blockage interference and rain is featured) has been observed to decrease by 3.33%, concerning the ideal case. Similarly, received power decreases by 5.7% in an outdoor sports stadium. Similarly, in the case of open area gathering, received power was observed to decrease by 22.17%. Figure 11 defines the path loss at 72 GHz. In the case of an indoor sports stadium, the path loss is minimum when both human blockage and rain have not been taken and will be considered an ideal case. Therefore, in the case of an indoor sports stadium, the path loss in the worst-case scenario (when there is a human blockage and rain interference is there) has been observed to increase by 2.3%. Similarly, in the case of outdoor sports, stadium path loss increases by 3.82% in the worst case concerning the ideal case. Similarly, in the case of open gatherings, path loss has been observed to increase by 14.05%. Figure 11 defines the path loss at 72 GHz. In the case of an indoor sports stadium, the path loss is minimum when both human blockage and rain have not been taken and will be considered an ideal case. Therefore, in the case of an indoor sports stadium, the path loss in the worst-case scenario (when there is a human blockage and rain interference is there) has been observed to increase by 2.3%. Similarly, in the case of outdoor sports, stadium path loss increases by 3.82% in the worst case concerning the ideal case. Similarly, in the case of open gatherings, path loss has been observed to increase by 14.05%.
Future Scope
For a long time, there has been speculation on how 5G technology will be used. It is asserted that 5G will permit further advancements in smart cities, automated vehicles, digital business 4.0, and other areas, and will revolutionize several marketplaces. The most resilient network can be achieved by combining millimeter wave with femtocells and large MIMO, two other symbiotic technologies. This is largely due to the newest advancements and technology incorporated in the 5G system this year, where telecom providers would theoretically reap more benefits from their significant investments. As a result, smartphone vendors will be able to produce more affordable devices, increasing customer demand and resulting in network operators spending less on infrastructure. Mobile broadband advancements also lower power consumption. There are many prospects to find cutting-edge methods for handling networks thanks to the 5G infrastructure. Network slicing, which enables a single physical network to serve many virtual networks with different functionality and features, is born from this. In the chosen example, one network slice would offer high-speed mobile access on the same infrastructure, while another may result in lower network use for the 5G link level. With the help of 5G technology, different networks can frequently be provided to clients and market segments using the same network. With such a significant influence, 5G technologies would increase the financial potential for future creative business structures. UAVs have also been used in a variety of applications, including military, construction, image and video mapping, medical, search and rescue, package delivery, reconnaissance, telecommunication, surveil-
Future Scope
For a long time, there has been speculation on how 5G technology will be used. It is asserted that 5G will permit further advancements in smart cities, automated vehicles, digital business 4.0, and other areas, and will revolutionize several marketplaces. The most resilient network can be achieved by combining millimeter wave with femtocells and large MIMO, two other symbiotic technologies. This is largely due to the newest advancements and technology incorporated in the 5G system this year, where telecom providers would theoretically reap more benefits from their significant investments. As a result, smartphone vendors will be able to produce more affordable devices, increasing customer demand and resulting in network operators spending less on infrastructure. Mobile broadband advancements also lower power consumption. There are many prospects to find cuttingedge methods for handling networks thanks to the 5G infrastructure. Network slicing, which enables a single physical network to serve many virtual networks with different functionality and features, is born from this. In the chosen example, one network slice would offer high-speed mobile access on the same infrastructure, while another may result in lower network use for the 5G link level. With the help of 5G technology, different networks can frequently be provided to clients and market segments using the same network. With such a significant influence, 5G technologies would increase the financial potential for future creative business structures. UAVs have also been used in a variety of applications, including military, construction, image and video mapping, medical, search and rescue, package delivery, reconnaissance, telecommunication, surveillance, precision agriculture, wireless communication, and weather monitoring. There are several applications with the use of UAVs, and they are depicted in Figure 12 [24][25][26][27][28].
work slicing, which enables a single physical network to serve many virtual netw with different functionality and features, is born from this. In the chosen example, network slice would offer high-speed mobile access on the same infrastructure, while other may result in lower network use for the 5G link level. With the help of 5G tech ogy, different networks can frequently be provided to clients and market segments u the same network. With such a significant influence, 5G technologies would increase financial potential for future creative business structures. UAVs have also been used variety of applications, including military, construction, image and video mapping, m ical, search and rescue, package delivery, reconnaissance, telecommunication, sur lance, precision agriculture, wireless communication, and weather monitoring. There several applications with the use of UAVs, and they are depicted in Figure 12 [24][25][26][27][28]
Conclusions
In conclusion, 5G and beyond communication networks purely focus on increasing the quality of the service of the network. For better service quality, interference conditions need to be monitored and optimal solutions need to be provided. In this paper, O2I penetration loss is considered in all possible cases. However, there have been different scenarios with changing interference properties in terms of the presence of human and rain interference. Considering these interferences, four different interference conditions were considered. The analysis was also made with different millimeter wave frequencies such as 28 GHz, 38 GHz, 60 GHz, and 72 GHz. This work also concluded the optimal number of antennas for better-received power and path loss under different conditions for different millimeter wave frequencies. By using the analysis performed under different conditions, optimal simulation conditions are proposed for indoor sports stadiums, outdoor sports stadiums, and open area gatherings regarding received power and path loss. This paper also reflects the increase and decrease in the percentage of received power and path loss for the ideal case. | 11,019.4 | 2023-07-27T00:00:00.000 | [
"Engineering",
"Computer Science",
"Environmental Science"
] |
Obesity and outpatient rehabilitation using mobile technologies: the potential mHealth approach
Obesity is currently an important public health problem of epidemic proportions (globesity). Inpatient rehabilitation interventions that aim at improving weight-loss, reducing obesity-related complications and changing dysfunctional behaviors, should ideally be carried out in a multidisciplinary context with a clinical team composed of psychologists, dieticians, psychiatrists, endocrinologists, nutritionists, physiotherapists, etc. Long-term outpatient multidisciplinary treatments are likely to constitute an essential aspect of rehabilitation. Internet-based technologies can improve long-term obesity rehabilitation within a collaborative approach by enhancing the steps specified by psychological and medical treatment protocols. These outcomes may be augmented further by the mHealth approach, through creating new treatment delivery methods to increase compliance and engagement. mHealth (m-health, mobile health) can be defined as the practice of medicine and public health, supported by mobile communication devices for health services and information. mHealth applications which can be implemented in weight loss protocols and obesity rehabilitation are discussed, taking into account future research directions in this promising area.
INTRODUCTION
Obesity, defined as a body mass index (BMI) of 30 kg/m 2 or higher, is today considered an important public health problem and epidemic (globesity; Gutierrez-Fisac et al., 2006;Capodaglio and Liuzzi, 2013). Obesity is also associated with early death and universally recognized as a risk factor for many health complications and disabilities such as cardiovascular diseases, osteoarthritis, hypertension, dyslipidemia, hypercholesterolemia, Type-2 diabetes and cancer (Flegal et al., 2005;Whitlock et al., 2009;Capodaglio et al., 2010Capodaglio et al., , 2011Castelnuovo et al., 2010;Capodaglio and Liuzzi, 2013).
There is general consensus among professionals that the etiology of obesity is multifactorial with interaction between genetic, individual, and environmental factors (Marcus and Wildes, 2009). Even if genetics plays an important role in the etiology of obesity, according to Dombrowski, "Behavioral factors, i.e., poor diet and physical inactivity are among the main proximal causes linked to obesity... obesity-related morbidity... and mortality..." (p. 7, 2012). Moreover social, psychological, and psychopathological variables are clear determinants in the development and treatment of obesity (Davin and Taylor, 2009). For example, epidemiologic investigations have revealed significant correlations between obesity and eating disorders, mood disorders, anxiety disorders, and personality disorders (Hudson et al., 2007;Pickering et al., 2007;Petry et al., 2008;Scott et al., 2008;Villa et al., 2009;Manzoni et al., 2010).
In the context of in-patient rehabilitation, interventions aimed at improving weight-loss, reducing obesity-related complications and changing dysfunctional behaviors should be typically carried out in a multidisciplinary context (with a clinical team composed by, dieticians, endocrinologists or nutritionists, physiotherapists, psychiatrists, psychologists surgeons, etc.). There may be additional benefit from the inclusion of specific instructions for changing diet and self-monitoring dietary intake, whilst providing guidance and support in maintaining goals initially achieved, anticipating possible future relapses and learning strategies to cope with difficult moments or situations (Capodaglio et al., 2010(Capodaglio et al., , 2013aManzoni et al., 2010Manzoni et al., , 2011bDombrowski et al., 2011Dombrowski et al., , 2012Capodaglio and Liuzzi, 2013). A range of psychological approaches may be suitable for the in-patient treatment of obesity, such as behavioral, cognitive-behavioral, interpersonal, systemic-strategic, psychodynamic, schema etc.; Shaw et al., 2005;Castelnuovo, 2010a,b). Among these different approaches, cognitive-behavior therapy (CBT) represents the gold standard for the treatment of obesity, focusing on dysfunctional behaviors, cognitive processes, unrealistic weight goals and body image perceptions (Murphy et al., 2010). The combination of psychological therapy and diet/exercise plans, leads to better weight loss outcomes than diet/exercise interventions alone. Psychological and behavioral treatments generally include out-patient follow-up sessions which facilitate ongoing assessment an guidance in a range of areas. This may include determining clients' ability to self-monitor (for example, using diaries), assistance with stimulus control (for example, restricting quantities of food) and behavioral modification strategies (for example, chewing slowly, taking time to taste and enjoy food, and increasing awareness of the pleasure associated with taste; Wing, 2002;Foster et al., 2005;Swencionis and Rendell, 2012).
In a multidisciplinary obesity rehabilitation approach, it is important to underline that treatment could involve nonpharmacological, pharmacological and surgical methods. Nowadays functional anti-obesity drugs are partially indicated for those who are obese with one or more weight-related comorbid conditions (Rueda-Clausen et al., 2013;Kushner, 2014;Patham et al., 2014). Moreover additional interventions could be necessary: bariatric surgery can be an effective approach for weight loss and comorbidity reduction, taking into account that surgery can generate considerable risks and can be advised only to selected patients (Sandoval, 2011;Henry et al., 2013;Kushner, 2014).
OBESITY REHABILITATION NEEDS OUT-PATIENT LONG-TERM STEPS
Recent studies have underlined the role of the neural reward system in the development and maintenance of obesity: "dysfunction of brain reward circuitry in response to food cues may predispose some individuals to obesity via an increased likelihood of overeating, particularly excessive consumption of palatable foods" (p. 744, Marcus and Wildes, 2009). Thus some kinds of obesity may be considered an expression of food "addiction," problem that typically requires a long-term treatment (Wang et al., 2001(Wang et al., , 2002(Wang et al., , 2004(Wang et al., , 2009Gearhardt et al., 2009Gearhardt et al., , 2011aGearhardt et al., ,b,c,d, 2012Gearhardt and Corbin, 2011;Gearhardt and Brownell, 2013).
Moreover binge eating disorder (BED) is typically connected with obesity (American Psychiatric Association, 2000;Hill, 2005;Berkowitz and Fabricatore, 2011;Gearhardt et al., 2011c;Wilson, 2011;Schag et al., 2013;Faulconbridge and Bechtel, 2014), even if not occurring exclusively in conjunction with overweight conditions. According to Hill (2005, p. 27), "it is apparent that BED is more common in the obese than in normal-weight individuals. In US weight loss clinics, 20-40% of patients are reported to have BED, although the use of a strict diagnostic interview reduces this to well below 20%. In community samples, BED is much less common, apparent in 1-3% of respondents. Overall, the prevalence of BED in any group increases with increasing obesity." Higher levels of psychological distress and self-esteem problems are associated with obesity with BED. Typically obesity with BED requires a longer term treatment in comparison with simple obesity (Hill, 2005).
Moreover, while from a clinician's point of view a 10% weight loss is generally considered an important success due to a significant reduction in comorbidities and complications, patients typically have higher expectations, perceiving that a good result constitutes a minimal 30% body weight reduction. Thus, establishing genuine and achievable expectations of weight loss represents an important challenge for the management of obesity (Foster et al., 1997;Jeffery et al., 1998;O'Neil et al., 2000;Wadden et al., 2000). Ongoing psychological support is required to assist patients in developing more realistic weight loss outcomes as well as in motivating them to follow rehabilitation programs Nonas and Foster, 2005). Although moderate weight loss (5-10% of initial weight) can lead to positive psychological changes, such as improvements in body satisfaction, self-esteem and mood (Hill, 2005), these findings tend to be associated with short-term studies. Typically, a long-term psychotherapeutic treatment is required in order to sustain realistic weight loss expectations and motivation to change (Hill, 2005).
Taking into account previous considerations, if we consider obesity to be a chronic form of food addiction, which may in some cases be accompanied by BED and unrealistic expectations of weight loss, long-term multidisciplinary treatment is likely to lead to optimal outcomes both across in-patient and out-patient settings.
Also, a collaborative approach, defined as a "strategy or set of strategies to help patients achieve and/or maintain a healthy weight that involve collaboration among healthcare professionals in at least two different disciplines (e.g., physicians and dieticians) for the delivery of weight management interventions" (p. 1190, Rao et al., 2011) is required. Thus strategies based on central planning, grounded in a "chronic care model" logic, tend to obtain better results, although at this early stage only a limited number of articles have reported real and practical collaborative experiences, largely in out-patient settings (Rao et al., 2011).
NEW TECHNOLOGIES FOR OUT-PATIENT OBESITY REHABILITATION: THE TECNOB PROJECT
Internet-based technologies provide patients with continuous and remote psychological, medical and nutritional support and education in order to enhance motivation, compliance and engagement, thereby maximizing the benefits of collaborative outpatient rehabilitation programs (Castelnuovo et al., 2003Riva et al., 2006;Castelnuovo and Simpson, 2011;Manzoni et al., 2011a;Rao et al., 2011;Simpson and Slowey, 2011).
Moreover the use of telemonitoring and telecare approaches that ensure continuity of care in out-patient settings can contribute to a significant cost reduction in the management of obesity and other chronic pathologies (Ekeland et al., 2010(Ekeland et al., , 2011Manzoni et al., 2011a).
One such pioneering example of a collaborative approach is the TECNOB Project (TEChNology for OBesity; Castelnuovo et al., 2003Castelnuovo et al., , 2010Castelnuovo et al., , 2011aCastelnuovo, 2007). It runs for a total duration of 13 months and consists of two consecutive phases: in-patient (1 month) and out-patient (the following 12 months). The clinician-patient relationship is considered a highly significant agent and vehicle for change. After discharge, out-patients begin to experience a sense of autonomy and competence as they continue the change process they have begun to develop during the in-patient phase, whilst learning to face a range of resistances and barriers. Through the use of videoconferencing out-patients are supported by the clinicians who worked with them during the in-hospital phase, through exploring resistances and impediments they experience and finding functional and healthy coping Frontiers in Psychology | Psychology for Clinical Settings mechanisms. Furthermore, out-patients are helped to experience a sense of mastery as they become proficient at attaining healthy behavioral changes.
Other positive experiences are well reported and described in (Bacigalupo et al., 2013), where the common components of the Internet-based clinical protocols are few (self-monitoring related to weight and physical activity and automatic-professional feedback to participants), whereas the intervention programs varied significantly in many details and features (Bacigalupo et al., 2013).
NEW TECHNOLOGIES FOR OUT-PATIENT OBESITY REHABILITATION: THE mHealth SCENARIO
Internet-based tools can provide promising results in enhancing weight reduction among obese patients but further studies are required in order to determine its long-term efficacy and effectiveness across clinical, organizational, and economic perspectives. (Manzoni et al., 2008Khaylis et al., 2010;Rao et al., 2011).
Until now the published data has not supported the competitive use of Internet interventions for weight loss and maintenance in out-patient settings. In spite of this lack of literature, promising clinical reports have been published about the usefulness of mobile phone devices in promoting healthy habits and weight loss attitudes (Rao et al., 2011;Park and Kim, 2012;Pellegrini et al., 2012;Schiel et al., 2012;Bacigalupo et al., 2013;Hebden et al., 2013;Rodrigues et al., 2013;Schoffman et al., 2013;Sharifi et al., 2013;Shaw et al., 2013).
Moreover, no unequivocal data have been collected about real costs of telemedicine. Certainly it could reduce travel time, hospital admissions and indirect costs to service users and their social networks (Ekeland et al., 2010(Ekeland et al., , 2011Khaylis et al., 2010;Hilty et al., 2013). The potential for technical problems as well as skepticism or reticence from patients, caregivers, nurses, and physicians may limit the spread of e-health solutions (e.g., Rees and Stone, 2005). The mHealth approach has the potential to make contributions not only in adult obesity (Tufano and Karras, 2005;Burke et al., 2012) but also in pediatric obesity (Jensen et al., 2012;Turner-McGrievy et al., 2013), thereby creating new treatment delivery methods that could increase participation, compliance and engagement (Graffigna et al., 2013a,b). About pediatric obesity, Cohen et al. (2012) in a recent review noted that telemedicine could be a promising approach to pediatric weight management, particularly for families in rural contexts with limited access to traditional treatments, even if many doubts are present again, particularly what treatment components (psychological support, lifestyle modification, nutritional education, medical prescription, etc.) could functionally fit into the e-health settings.
According to Eysenbach (2001, p. 1), e-health could be defined as "an emerging field in the intersection of medical informatics, public health and business, referring to health services and information delivered or enhanced through the Internet and related technologies. In a broader sense, the term characterizes not only a technical development, but also a state-of-mind, a way of thinking, an attitude, and a commitment for networked, global thinking, to improve health care locally, regionally, and worldwide by using information and communication technology." e-health is characterized by the presence of 10 features: efficiency, enhanced quality of care, evidence-based approach, empowerment of consumers and patients, encouragement of a new relationship between the patient and health professional, on-line education of physicians, information and communication exchange, extension of the health care scope beyond its conventional boundaries (in both geographical and conceptual sense), ethics and equity (Eysenbach, 2001). mHealth (m-health, mhealth, mobile health) could be defined as the practice of medicine and public health, supported by mobile communication devices, such as mobile phones, tablet computers, and PDAs, for health services and information (Riper et al., 2010;Eysenbach, 2011;Cipresso et al., 2012;Whittaker, 2012;Fiordelli et al., 2013). mHealth applications have also been implemented with promising applications and results in weight loss protocols and obesity rehabilitation (Chomutare et al., 2011;Burke et al., 2012;Cafazzo et al., 2012;Fiordelli et al., 2013;Martinez-Perez et al., 2013;Turner-McGrievy et al., 2013).
FIVE PSYCHOLOGICAL COMPONENTS TO BE CONSIDERED IN mHealth WEIGHT-LOSS APPLICATIONS
According to Khaylis et al. (2010, p. 932-936) five psychological components need to be considered for technology-based and mHealth-based obesity rehabilitation in order to facilitate weight-loss.
SELF-MONITORING
Self-monitoring refers to the process in which individuals regulate and keep track of their own behaviors. Technology can simplify the monitoring process, recording one's progress of food intake and physical activity using online devices. The reason these technologies are likely to be effective is because portable body monitors, pedometers, and handheld PDAs are mobile and, therefore, can be easily used, resulting in continuous self-monitoring. Also, these devices are more convenient for individuals without access to a high-speed Internet connection.
COUNSELOR FEEDBACK AND COMMUNICATION
Feedback from a counselor regarding goals, progress, and results can encourage, motivate, and assist patients in successfully completing a weight-loss program. A functional approach is to provide online weight-loss interventions with brief weekly or monthly counselor or psychologist visits. Participants typically submit their weekly food and exercise journals online, receiving personalized feedback, reinforcement, and recommendations from a counselor over e-mail.
SOCIAL SUPPORT
A group treatment format is typically preferred for behavioral weight-loss interventions. Not only does it constitute a costeffective method for delivering treatment to a larger number of people, but it also enhances social support, an important facilitator of behavioral change. Group support can foster motivation, encouragement, and commonality. To facilitate communication among participants electronic message boards, forums, "real time" chat rooms or online meetings represent useful tools.
STRUCTURED PROGRAM
Technology-based weight-loss programs incorporate principles of behavior therapy and change. They consist of structured weekly lessons on various topics, including nutrition, exercise, stimulus control, self-regulation strategies, goal-setting.
INDIVIDUALLY TAILORED PROGRAM
Interventions specifically designed around individual's goals typically record higher rates of adherence and weight loss. In one report, participants were required to meet with a health coach and select four high-priority behavioral change goals, before being monitored through a behavioral skills training program. In another study real-time SMS text messages were delivered to each patient, as a direct challenge to pre-identified barriers to exercise.
FUTURE DIRECTIONS IN CLINICAL PSYCHOLOGY FOR OBESITY REHABILITATION
Potential benefits of mobile monitoring methods for behavioral weight loss protocols appear clear . However, "future studies should examine ways to predict which self-monitoring method works best for an individual to increase adherence" (Turner-McGrievy et al., 2013, p. 513). There is a critical need for scientific research to evaluate the specific outcomes of collaborative approaches for weight management that utilize Internet and mobile based tools. mHealth approach could help clinicians by motivating patients in remote settings to develop healthier lifestyles (Castelnuovo, 2010a;Pietrabissa et al., 2012), to accept more intrusive medical treatments (such as drugs and weight-loss surgery), to cope with chronic conditions and to reduce complications (such as Type-2 Diabetes, hypertension and cardiovascular disease; Nguyen and Lau, 2012).
Moreover clinicians should adhere to good professional practice protocols in technological settings: "discussions of weight should be performed in a non-judgmental, respectful, and unhurried manner"(p. 1200, Rao et al., 2011),"readiness and self-efficacy to change behaviors should be assessed before weight loss strategies are initiated, and this information should be factored into decisions about what type of approach to use" (p. 1200, 2011), and collaborative approaches involving physicians, psychologists, nurses, and other clinicians need to be considered by utilizing consistent planning and training modalities.
Future directions in obesity and weight-reduction research have been provided by Rao et al. (2011Rao et al. ( , p. 1200: (1) "There is a need for larger studies, both those that include technologically based interventions and those that do not, that enroll a diverse spectrum of overweight and obese patients in terms of sex, race, and socioeconomic status. Latino subjects and men, in particular, are underrepresented in obesity studies to date. There is also a need to investigate the specific features of technologically based interventions (e.g., content, format, device) that make such interventions successful in promoting weight loss.
(2) Because attrition rates from technology-based studies are very high, there is a need to develop effective strategies to keep patients engaged in using technology tools for the long-term.
(3) Further evaluation of collaborative approaches (e.g., approaches involving centralized planning, approaches involving nurses in intervention delivery) in general is needed. In particular, larger studies of longer duration are needed to evaluate the effectiveness of the chronic care model as a framework for weight management interventions. (4) Use of electronic health records is increasing, and there is a need to explore the use of these valuable tools, not only for identification and assessment of obesity but also for the delivery of obesity interventions." Whittaker (2012, p. 6) provides some methodological suggestions for future research in this issue: "evaluations of effectiveness and usability are required and should be made publicly available. Where evaluation is planned during the development stage, data collection can be built in as an integral part of the program. The ideal of randomized controlled trials will still be necessary in some contexts. In these cases, careful consideration should be given to the appropriate comparator to ensure the right question is being answered. For example, what is usual care for this target audience? Can we measure an improvement in access as an outcome? Other research methods will be more appropriate in other circumstances, such as adaptive trials to allow the intervention to develop and improve as part of the research; observational trials and qualitative research methods to detect unintended consequences and changes to workflow; and qualitative studies to test acceptability. Evaluating effectiveness and usability is also possible while implementing a system, for example, with novel designs such as the stepped wedge cluster randomized trial, and particularly where there is little likelihood of harm." To conclude, further studies should investigate both possible advantages and applications of Internet and mHealth technologies in the treatment of obesity. In spite of promising preliminary reports, the evidence-base for the effectiveness of mHealth applications is meager and it remains too early to be able to recommend it for use in clinical settings. | 4,394.6 | 2014-06-10T00:00:00.000 | [
"Computer Science",
"Medicine",
"Psychology"
] |
Self-Assembly of Metal Nanoclusters for Aggregation-Induced Emission
Aggregation-induced emission (AIE) is an intriguing strategy to enhance the luminescence of metal nanoclusters (NCs). However, the morphologies of aggregated NCs are often irregular and inhomogeneous, leading to instability and poor color purity of the aggregations, which greatly limit their further potential in optical applications. Inspired by self-assembly techniques, manipulating metal NCs into well-defined architectures has achieved success. The self-assembled metal NCs often exhibit enhancing emission stability and intensity compared to the individually or randomly aggregated ones. Meanwhile, the emission color of metal NCs becomes tunable. In this review, we summarize the synthetic strategies involved in self-assembly of metal NCs for the first time. For each synthetic strategy, we describe the self-assembly mechanisms involved and the dependence of optical properties on the self-assembly. Finally, we outline the current challenges to and perspectives on the development of this area.
Introduction
Metal nanoclusters (NCs) consist of several to hundreds of metal atoms, bridging the gap between small organometallic complexes and large metal nanoparticles (NPs). The metal NCs typically have a core-shell structure, which is composed of a metal core and a protective ligand shell. Owing to their ultra-small size (<2 nm), which is comparable to the Fermi wavelength of electrons, the spatial confinement of free electrons in metal NCs leads to discrete electronic transitions, thereby exhibiting intriguing molecular-like properties such as molecular chirality, HOMO-LUMO transitions, and photoluminescence [1][2][3]. However, the quantum yields (QY) of metal NCs seldom exceed 0.1% [4,5], which greatly restrict them in many optical applications, such as biosensing, bioimaging, and solid-state lighting and display [6][7][8][9][10][11]. Recently, a strategy of aggregation-induced emission (AIE) [12] to obtain high luminescence of metal NCs has attracted increasing research interest [13]. The AIE origin of metal NCs could be attributed to the restriction of intramolecular vibration and rotation of the ligand's shell on the NCs' surface after aggregation, thereby facilitating the radiative energy transfer via restraining ligand-related nonradiative excited state relaxation [14,15]. So far, the common AIE approaches for metal NCs are cation-and solvent-induced aggregations [14,16,17]. However, both of these two AIE approaches often have the problems of structural irregularity and inhomogeneity due to the random aggregation route, which usually lead to the instability and poor color purity of NC aggregation [14], thereby restricting their potentially practical applications. Therefore, new approaches to synthesize a more regular or homogeneous morphology of NC aggregations are desperately needed.
Self-assembly, as an effective strategy to manipulate the spatial arrangement of nanosized building blocks to form specific structures [18], is considered to be capable of guiding the metal NCs to form a well-defined architecture. Although big success has been achieved in self-assembly of large building blocks, such as metal nanoparticles [19], proteins [20], and polymers [21], it is more difficult to direct metal NCs assemble into high-ordered structures due to their ultra-small size and unique core-shell structure. As to NCs, the large surface energy [22] makes them unstable in self-assembly, thereby resulting in recrystallization or fusing into big nanoparticles. In particular, the interactions between metal NCs originating from the ligand shell on the NC surface are rather weak, comparable to the thermal fluctuation energy of the surroundings, which often leads to the detachment of assembled NCs and formation of irregular structures [23]. Therefore, strengthening the inter-NC interaction through manipulating the outer layer of metal NCs, capping ligands, is critical to the success of self-assembly of metal NCs. So far, directing the capping ligands' configuration has been applied to the self-assembly of metal NCs mainly in two aspects. The first is to choose appropriate molecules as the capping ligands of metal NCs to direct the spontaneous association of NCs under equilibrium conditions into well-defined assemblies joined by covalent or noncovalent bonds, named "capping ligand induced assembly." The second is to utilize soft templates to guide the shape-controlled synthesis of metal NCs, named "soft template directed assembly." In this route, the NC assemblies formed should achieve the shape of the templates. Additionally, limited success has been achieved in utilizing the traditional AIE approaches in self-assembly of metal NCs into highly ordered architecture, including cation-and solvent-induced assembly.
In this review, we first summarize the synthesis strategies developed for the self-assembly of metal NCs into well-defined architectures and describe their optical properties. While there are a few reports of alloy NCs, this review mainly investigates the self-assembly of Au NCs, Ag NCs, and Cu NCs from the standpoint of directing the capping ligand configuration, including capping ligands-, soft templates-, and cation-and solvent-directed assembly. In specific self-assembly strategies, we introduce the driving forces involved in the NCs self-assembly, experimental variables controlling their assembled morphologies, and the dependence of NC optical properties on self-assembled structure. Finally, we outline the current challenges of NC self-assembly and our perspective on the development of this area.
The Self-Assembly of Au NCs
So far, Au NCs have become the most investigated metal NCs in self-assembling into well-defined architectures. The common strategy is to choose appropriate soft templates to form self-assembled Au NCs in situ, such as amphiphilic hydrocarbon, Au(I)-thiolate complexes, cetyltrimethylammonium bromide (CTAB)-metal halide complexes, polymer, and protein fibrils. A few polymers and proteins could also serve as capping ligands to synthesize Au NCs and endow them with self-assembled behaviors. In addition, several cations have been proven to be capable of triggering Au NC assembly into well-defined structures by electrostatic interactions. Therefore, we introduce the self-assembly of Au NCs on the basis of the three aforementioned synthesis strategies.
Amphiphilic Hydrocarbon
The self-assembly behaviors of hydrocarbon amphiphiles have been applied to Au NCs in two ways. One is to serve as soft templates at a liquid/liquid interface for in situ synthesis of shape-controlled assembled Au NCs. It is well-known that utilizing this interfacial organization technique could successfully guide the self-assembly of colloidal nanoparticles into a well-defined structure, which could also be applicable to the self-assembly of NCs. Utilizing the amphiphilicity of Au NCs precursor, Au (III)-SC 12 , Wang et al. demonstrated the successful self-assembly of Au NCs at the oil/water interface into ordered nanoribbons [24]. Au (III)-SC 12 has a hydrophilic head of an Au ion and a hydrophobic tail of alkyl, which guide the spontaneous self-assembly at the oil/water interface with Au 3+ pointing to the water phase. The water phase was added with a reducing agent, which further guaranteed the heterogeneous reduction of Au(III) to Au(0) at an oil/ water interface, leading to in situ formation of self-assembled Au NC nanoribbons at the interface ( Figure 1). The as-synthesized Au NC nanoribbons exhibited a large Stokes shift and more enhanced emission intensity than randomly dispersed Au NCs. Additionally, self-assembled nanosheets of hydrophobic alkyl thiol-capped Au NCs a single NC thick were also successfully obtained by the liquid/liquid assembly technique. Zhang and Lu investigated the two-dimensional (2D) self-assembly of 1-dodecanethiol (DT)-capped Au NCs in a colloidal solution of two miscible high boiling solvents with a slight difference of polarity [23]. The solvent microphase separation leads to a lamellar interface, which serves as a soft template to guide Au NC self-assembly into single-NC-thick sheets due to an inter-NC isotropic hydrophobic attraction. The morphology of the Au NC assemblies could be further adjusted by the Au NC concentration and solvent volume ratio. Furthermore, Zhang and coworkers reported the 2D self-assembly of DT-capped Au NCs into well-defined sheets with controlled thickness in the colloidal solution [25]. The initial assembly was 1D-oriented, triggered by the anisotropic dipolar attraction between NCs, leading to a redistribution of DT ligands and thereby generating the asymmetric van der Waals attraction between DT. The coordination of these two attractions together allows manipulation of the morphology and thickness of the 2D self-assembly of Au NCs. structure, which could also be applicable to the self-assembly of NCs. Utilizing the amphiphilicity of Au NCs precursor, Au (III)-SC12, Wang et al. demonstrated the successful self-assembly of Au NCs at the oil/water interface into ordered nanoribbons [24]. Au (III)-SC12 has a hydrophilic head of an Au ion and a hydrophobic tail of alkyl, which guide the spontaneous self-assembly at the oil/water interface with Au 3+ pointing to the water phase. The water phase was added with a reducing agent, which further guaranteed the heterogeneous reduction of Au(III) to Au(0) at an oil/ water interface, leading to in situ formation of self-assembled Au NC nanoribbons at the interface ( Figure 1). The as-synthesized Au NC nanoribbons exhibited a large Stokes′ shift and more enhanced emission intensity than randomly dispersed Au NCs. Additionally, self-assembled nanosheets of hydrophobic alkyl thiol-capped Au NCs a single NC thick were also successfully obtained by the liquid/liquid assembly technique. Zhang and Lu investigated the two-dimensional (2D) self-assembly of 1-dodecanethiol (DT)-capped Au NCs in a colloidal solution of two miscible high boiling solvents with a slight difference of polarity [23]. The solvent microphase separation leads to a lamellar interface, which serves as a soft template to guide Au NC self-assembly into single-NC-thick sheets due to an inter-NC isotropic hydrophobic attraction. The morphology of the Au NC assemblies could be further adjusted by the Au NC concentration and solvent volume ratio. Furthermore, Zhang and coworkers reported the 2D self-assembly of DT-capped Au NCs into well-defined sheets with controlled thickness in the colloidal solution [25]. The initial assembly was 1D-oriented, triggered by the anisotropic dipolar attraction between NCs, leading to a redistribution of DT ligands and thereby generating the asymmetric van der Waals attraction between DT. The coordination of these two attractions together allows manipulation of the morphology and thickness of the 2D self-assembly of Au NCs. On the other hand, amphiphilicity could be introduced into hydrophilic Au NCs by a simple surface modifying approach, thereby guiding the self-assembly of NCs at an air/water interface. For example, the anionic surface of the hydrophilic Au NCs could be modified with hydrophobic cations via a phrase-transfer-driven ion-paring reaction. Inspired by this idea, Xie and Lee successfully synthesized amphiphilic Au NCs by patching hydrophilic 6-mercaptohexanoic acid (MHA)-capped Au NCs with hydrophobic cetyltrimethylammonium ion (CTA + ) to approximately half of a monolayer coverage [26]. Owing to the coexistence of hydrophilic MHA and hydrophobic MHA-CTA + in a comparable ratio, the as-prepared Au NCs exhibited excellent solubility in solvents with different polarities and molecular-like amphiphilicity, and could self-assemble into staked bilayers at the air/liquid interface. On the other hand, amphiphilicity could be introduced into hydrophilic Au NCs by a simple surface modifying approach, thereby guiding the self-assembly of NCs at an air/water interface. For example, the anionic surface of the hydrophilic Au NCs could be modified with hydrophobic cations via a phrase-transfer-driven ion-paring reaction. Inspired by this idea, Xie and Lee successfully synthesized amphiphilic Au NCs by patching hydrophilic 6-mercaptohexanoic acid (MHA)-capped Au NCs with hydrophobic cetyltrimethylammonium ion (CTA + ) to approximately half of a monolayer coverage [26]. Owing to the coexistence of hydrophilic MHA and hydrophobic MHA-CTA + in a comparable ratio, the as-prepared Au NCs exhibited excellent solubility in solvents with different polarities and molecular-like amphiphilicity, and could self-assemble into staked bilayers at the air/liquid interface.
Au(I)-Thiolate Complexes
As one of the common Au(I)-thiolate complexes, Au(I)-cysteine complexes are well known to self-assemble into irregular architectures with sizes larger than 500 nm at an acidic pH [27]. A recent work suggests that the structural irregularity of Au(I)-cysteine complexes is related to the chirality of cysteine [28]. Specifically, the self-assemblies of pure L-cysteine-Au(I) or D-cysteine-Au(I) complexes are disordered and irregular with diameters larger than 500 nm, which is accordant with the aforementioned report. However, using the mixture of L-cysteine and D-cysteine to react with Au(III), the as-prepared L/D-cysteine-Au(I) complexes would self-assemble into a well-defined spindle shape. The morphological changes of Au(I) assemblies are determined to be closely associated with Au(I)-Au(I) aurophilic interactions, and stacked zwitterionic interactions and hydrogen bonding between L/D-cysteine ligands. Moreover, the Au(I)-cystine assemblies could serve as soft templates to prepare highly emissive Au NCs in situ by the NaBH 4 -mediated reduction.
On the other hand, our group recently found that Au(I)-glutathione (GSH) complexes were capable of forming crystalline networks encapsulating a great many Au NCs into nanoparticles in acidic pH [29]. The crystalline networks of Au(I)-GSH complexes could further disassemble by increasing the solution pH, thereby generating varied aggregation-induced emission (AIE) with a QY as high as 14%. Interestingly, the pH-induced disassembly results in the finding of a certain degree of crystallization occurring on the surface of Au NCs, expanding the knowledge of the surface/interfacial structures of AIE-type Au NCs. Additionally, the disassembled behaviors of the Au(I)-GSH thiolate crystalline networks surrounding Au NCs could further function as a pH-sensitive "valve" to control the access of environmental chemicals to the inner Au(0) core of the NCs. For example, for small molecules, such as cysteine, the Au(I) crystalline networks have pH-sensitive permeability and as the solution pH increased, the disassembly of crystalline networks would facilitate cysteine gaining access to the embedded Au NCs and etching the Au(0) core ( Figure 2). Based on this idea, our group recently developed an ultra-sensitive cysteine sensor at alkaline pH, exhibiting an ultra-wide linear concentration range of nine orders of magnitude and an ultra-low limit of detection of 6.3 pM [30]. As one of the common Au(I)-thiolate complexes, Au(I)-cysteine complexes are well known to self-assemble into irregular architectures with sizes larger than 500 nm at an acidic pH [27]. A recent work suggests that the structural irregularity of Au(I)-cysteine complexes is related to the chirality of cysteine [28]. Specifically, the self-assemblies of pure L-cysteine-Au(I) or D-cysteine-Au(I) complexes are disordered and irregular with diameters larger than 500 nm, which is accordant with the aforementioned report. However, using the mixture of L-cysteine and D-cysteine to react with Au(III), the as-prepared L/D-cysteine-Au(I) complexes would self-assemble into a well-defined spindle shape. The morphological changes of Au(I) assemblies are determined to be closely associated with Au(I)-Au(I) aurophilic interactions, and stacked zwitterionic interactions and hydrogen bonding between L/D-cysteine ligands. Moreover, the Au(I)-cystine assemblies could serve as soft templates to prepare highly emissive Au NCs in situ by the NaBH4-mediated reduction.
On the other hand, our group recently found that Au(I)-glutathione (GSH) complexes were capable of forming crystalline networks encapsulating a great many Au NCs into nanoparticles in acidic pH [29]. The crystalline networks of Au(I)-GSH complexes could further disassemble by increasing the solution pH, thereby generating varied aggregation-induced emission (AIE) with a QY as high as 14%. Interestingly, the pH-induced disassembly results in the finding of a certain degree of crystallization occurring on the surface of Au NCs, expanding the knowledge of the surface/interfacial structures of AIE-type Au NCs. Additionally, the disassembled behaviors of the Au(I)-GSH thiolate crystalline networks surrounding Au NCs could further function as a pH-sensitive "valve" to control the access of environmental chemicals to the inner Au(0) core of the NCs. For example, for small molecules, such as cysteine, the Au(I) crystalline networks have pH-sensitive permeability and as the solution pH increased, the disassembly of crystalline networks would facilitate cysteine gaining access to the embedded Au NCs and etching the Au(0) core ( Figure 2). Based on this idea, our group recently developed an ultra-sensitive cysteine sensor at alkaline pH, exhibiting an ultra-wide linear concentration range of nine orders of magnitude and an ultra-low limit of detection of 6.3 pM [30].
CTAB-Au Halide Complexes
Utilizing CTAB as a surfactant and thiourea as a reducing agent, Au halide salts have been employed to synthesize mesoscale assemblies of Au NCs with well-defined boundaries. The Au halide complex anions have a stronger affinity than Brto bind to CTA + , leading to weakened electrostatic repulsion between cationic CTA + , which facilitates the formation of hierarchical
CTAB-Au Halide Complexes
Utilizing CTAB as a surfactant and thiourea as a reducing agent, Au halide salts have been employed to synthesize mesoscale assemblies of Au NCs with well-defined boundaries. The Au halide complex anions have a stronger affinity than Br − to bind to CTA + , leading to weakened electrostatic repulsion between cationic CTA + , which facilitates the formation of hierarchical mesoscale micelles. The micelles of CTAB-Au halide complexes further serve as soft templates to guide the preparation and assembly of Au NCs, which have similar morphologies to the ultimate Au NC assemblies [31]. The concentration of CTAB was found to exert great influence on the morphology and packing density of the NC assemblies. For example, the 1D self-assembled nanorods of Au NCs would transform into hollow vesicles as the CTAB concentration increased. The Au NC assemblies templated by CTAB-Au halide complexes have been reported to have excellent performance and reusability in catalytic studies.
Polymer
Long branching polymers with specific conformations could serve as the backbone for templated synthesis of self-assembled Au NCs. One of the common spherical branching polymers, poly(amidoamine) (PAMAM) dendrimer, has been reported as a soft template to in situ synthesis self-assembled Au NCs. Utilizing the fourth-generation amine-terminated PAMAM dendrimers as capping and hosting ligands, Yang et al. presented a new strategy to successfully prepare highly emissive poly-Au 5 NCs with a QY of 25% [32]. The formation of poly-Au 5 NCs involves two stages. The first stage is simultaneous self-nucleation and self-assembly of PAMAM-Au ion complexes into poly-Au 5 NCs, accompanied with a rapid emission increase. The second stage is a sole self-assembly of these poly-Au 5 NCs without further reduction, with a relatively slow enhancement of emission but 30% contribution to the emission intensity of the final assemblies ( Figure 3). The intensive emission of the as-prepared poly-Au 5 NC assemblies originates from the more rigid structures reducing the nonradiative excited state relaxation, and the strengthened aurophilic interaction promoting the excited state relaxation dynamics. mesoscale micelles. The micelles of CTAB-Au halide complexes further serve as soft templates to guide the preparation and assembly of Au NCs, which have similar morphologies to the ultimate Au NC assemblies [31]. The concentration of CTAB was found to exert great influence on the morphology and packing density of the NC assemblies. For example, the 1D self-assembled nanorods of Au NCs would transform into hollow vesicles as the CTAB concentration increased. The Au NC assemblies templated by CTAB-Au halide complexes have been reported to have excellent performance and reusability in catalytic studies.
Polymer
Long branching polymers with specific conformations could serve as the backbone for templated synthesis of self-assembled Au NCs. One of the common spherical branching polymers, poly(amidoamine) (PAMAM) dendrimer, has been reported as a soft template to in situ synthesis self-assembled Au NCs. Utilizing the fourth-generation amine-terminated PAMAM dendrimers as capping and hosting ligands, Yang et al. presented a new strategy to successfully prepare highly emissive poly-Au5 NCs with a QY of 25% [32]. The formation of poly-Au5 NCs involves two stages. The first stage is simultaneous self-nucleation and self-assembly of PAMAM-Au ion complexes into poly-Au5 NCs, accompanied with a rapid emission increase. The second stage is a sole self-assembly of these poly-Au5 NCs without further reduction, with a relatively slow enhancement of emission but 30% contribution to the emission intensity of the final assemblies ( Figure 3). The intensive emission of the as-prepared poly-Au5 NC assemblies originates from the more rigid structures reducing the nonradiative excited state relaxation, and the strengthened aurophilic interaction promoting the excited state relaxation dynamics. In addition, nanohydrogels are capable of serving as soft templates for in situ synthesis of self-assembled Au NCs. The nanohydrogel could be formed by spatially confinement of Au(I)-thiolate complexes in a cationic polymer via electrostatic attraction. For example, Xie and coworkers proposed a simple and rapid in situ synthesis of self-assembled GSH-capped Au NCs within chitosan a nanohydrogel [33]. Chitosan is a polycationic polymer with many positively charged amine groups, the pka of which is 6.3-6.7, while the GSH ligands of the Au(I)-SG complex have two carboxylic groups (pKa1 = 2.12 and pKa2 = 3.53). At the gelation pH between the pka of chitosan and GSH, the electrostatic repulsion between negative carboxylic groups in Au(I)-SG complexes would be greatly weakened through the addition of cationic chitosan, leading to a spontaneous self-assembly of Au(I)-SG complexes into monodispersed nanoparticles. The spatial confinements of Au(I)-SG complexes within the chitosan matrix via electrostatic self-assembly facilitate a more rapid formation of Au NCs than free complexes without confinements. The In addition, nanohydrogels are capable of serving as soft templates for in situ synthesis of self-assembled Au NCs. The nanohydrogel could be formed by spatially confinement of Au(I)-thiolate complexes in a cationic polymer via electrostatic attraction. For example, Xie and coworkers proposed a simple and rapid in situ synthesis of self-assembled GSH-capped Au NCs within chitosan a nanohydrogel [33]. Chitosan is a polycationic polymer with many positively charged amine groups, the pk a of which is 6.3-6.7, while the GSH ligands of the Au(I)-SG complex have two carboxylic groups (pK a1 = 2.12 and pK a2 = 3.53). At the gelation pH between the pk a of chitosan and GSH, the electrostatic repulsion between negative carboxylic groups in Au(I)-SG complexes would be greatly weakened through the addition of cationic chitosan, leading to a spontaneous self-assembly of Au(I)-SG complexes into monodispersed nanoparticles. The spatial confinements of Au(I)-SG complexes within the chitosan matrix via electrostatic self-assembly facilitate a more rapid formation of Au NCs than free complexes without confinements. The as-prepared self-assembled Au NCs impregnated in the chitosan nanogel also exhibited a strong emission, due to the inhibition of the nonradiative decay pathway.
Protein Fibrils
The self-assembly of protein-capped Au NCs could be obtained by employing protein fibrils as soft templates to in situ synthesize NCs. It is known that many proteins are able to form well-ordered fibrillar cross-β-sheet structures, which are typical assemblies held together via weak noncovalent forces. For instance, Garcia et al. reported the fibrillation of human-insulin-assisted in situ synthesis of assembled Au NCs [34]. During the fibrillation process of insulin in an alkaline environment, the Au precursor was added under physiological temperature and vigorous stirring and Au NCs were gradually formed by the reduction of insulin. Chattopadhyay and coworkers also proposed the employment of bovine serum albumin (BSA) fibrils as the scaffold for the preparation of self-assembled Au NCs [35]. BSA has a free unpaired cysteine at the 34th position, which helps with dimerization and subsequent well-defined self-assembly, thereby forming BSA fibrils. Using BSA fibrils as stabilizers, the self-assembled Au NCs exhibited enhanced fluorescence and a large red shift of emission, in comparison to their individual counterparts.
Cation Induced Assembly
As a common cationic surfactant, cetyltrimethylammonium bromide (CTAB) can easily bind to nanoparticles with negative charges, such as silica, metal oxides, and quantum dots, due to the coexistence of electrostatic and hydrophobic interactions. As to Au NCs with negative charge, CTAB can build an electrostatic inter-NC connection, thereby guiding their self-assembly in an aqueous solution. Based on this idea, Wu and coworkers recently reported CTAB-induced assembly of GSH-capped Au NCs, which is favored by the electrostatic binding of CTA + to negatively charged carboxyl groups in GSH [36].
Additionally, Zn 2+ has been shown to function as an external metal ion triggering the self-assembly-mediated emission color tunability of Au NCs. For instance, individual 3-mercaptopropionic acid (3-MPA)-capped Au NCs are non-luminescent. By the addition of Zn 2+ , Au NCs self-assemble randomly via the coordination of Zn 2+ with carboxyl groups in MPA ligands, and emit intense yellow emissions owing to the restriction of the MPA ligands vibrations and rotations. After aging for 24 h, the irregular self-assemblies transform into well-defined one-dimensional (1D) architecture with green emissions, with a QY of 20%. Compared to random assembly, this blue shift in emission is attributed to the ascendancy of inter-aurophilic and Zn 2+ -NC interactions over the intra modes [37]. In addition, Kuppan further found that the highly ordered green emissive Au NC assemblies exhibited much better emission anisotropy than random assemblies, revealing that the directionality in self-assembly of Au NCs makes a great contribution to the emission polarization [38].
Polymer Micelles
Polymer micelles are one of the most common drug delivery nanosystems for anticancer therapy, exhibiting excellent biocompatibility and pharmacokinetic control. As the representative of polymer micelles, self-assembled diblock copolymers consisting of a thermosensitive poly(N-isopropylacrylamide) (PNIPAm) and a hydrophilic poly(ethylene glycol) (PEG) block have been proven to be a simple and useful platform for drug delivery. The PNIPAm has a cloud point temperature of 32 • C, leading to their self-assembly into micelles in the biological environment, which is stablished by the PEG corona to prevent aggregation. Thermosensitive and thiol-terminated PEG-PNIPAm could be further employed as a capping agent to synthesize Au NCs, which would self-assemble into micelles above their lower critical solution temperature, accompanied with an enhanced emission. The as-prepared thermosensitive Au NC-polymer micelles show great potential for fluorescent live cell imaging [39].
Protein
Protein-capped Au NCs can self-assemble into larger nanoparticles in the presence of GSH via a protein cross-linking approach. Using GSH as an endogenous reductant, the intramolecular disulfide bonds within the protein ligands on the surface of Au NCs were cleaved. The obtained free -SH groups assembled again through intermolecular disulfide bonds into protein nanoparticles, resulting in cross-linking and self-assembly of the protein-capped Au NCs (Figure 4). Compared to the individual Au NCs, the NC assemblies exhibited good biocompatibility, improved cellular uptake, highly precise tumor targeting, and excellent performance as photosensitizers [40]. Moreover, Shen and Cai successfully encapsulated indocyanine green (ICG) into the as-prepared Au NC assemblies via noncovalent binding for therapeutic real-time monitoring on the basis of fluorescence resonance energy transfer (FRET) [41]. Au NCs-ICG nanoprobes (Au NCs-INPs) also showed excellent dual-modal near-infrared fluorescence and photoacoustic imaging, improved cancer cell killing, and tumor removal efficiency in the simultaneous photodynamic therapy and photothermal therapy. Protein-capped Au NCs can self-assemble into larger nanoparticles in the presence of GSH via a protein cross-linking approach. Using GSH as an endogenous reductant, the intramolecular disulfide bonds within the protein ligands on the surface of Au NCs were cleaved. The obtained free -SH groups assembled again through intermolecular disulfide bonds into protein nanoparticles, resulting in cross-linking and self-assembly of the protein-capped Au NCs (Figure 4). Compared to the individual Au NCs, the NC assemblies exhibited good biocompatibility, improved cellular uptake, highly precise tumor targeting, and excellent performance as photosensitizers [40]. Moreover, Shen and Cai successfully encapsulated indocyanine green (ICG) into the as-prepared Au NC assemblies via noncovalent binding for therapeutic real-time monitoring on the basis of fluorescence resonance energy transfer (FRET) [41]. Au NCs-ICG nanoprobes (Au NCs-INPs) also showed excellent dual-modal near-infrared fluorescence and photoacoustic imaging, improved cancer cell killing, and tumor removal efficiency in the simultaneous photodynamic therapy and photothermal therapy.
Effects of Different Self-Assembled Strategies on Au NCs' Optical Properties.
The strategy of soft template-directed assembly usually has a great influence on the luminescence of Au NCs. On one hand, the components of some soft templates can themselves be used to be as capping ligands to in situ synthesize Au NCs, thereby forming a more compact and rigid ligand shell on the surface of the Au(0) core to enhance Au NCs' luminescence via the AIE mechanism. For instance, Au(I)-cysteine assemblies strongly staple on the surface of inner formed Au(0) cores, which reduces the PL quenching by collision and restrains the intramolecular vibrationand rotation-induced internal nonradiative relaxation pathways, thereby obtaining a high QY of ~10% [27,28]. In our previous work, during the formation of Au NCs within the crystalline Au(I)-GSH networks, some Au(I)-GSH complexes could insert their thiol ligands into the shell of Au NCs to form strong aurophilic interactions with the Au(I) of the staple-like motifs binding to the Au(0) core, leading to the formation of a crystalline and hence more compact shell, thereby contributing to the QY of ~ 14% [29]. On the other hand, some templates could only function as the supporting matrix to confine Au NCs in limited space, which resulted in the matrix-coordinate-induced aggregation restraining the nonradiative relaxation channels via locking the ligands of Au NCs in the matrix, thereby generating an intense AIE effect. For instance, GSH-capped Au NCs impregnated within the confined space in chitosan nanogel have a strong coordination between -COOH of GSH with negative charges, and -NH2 of chitosan with positive charges. This intense coordination could further restrict the GSH ligands' intra-or intermolecular vibration-and rotation-induced nonradiative relaxation pathways, thereby contributing to the luminescent intensity to a major extent [33].
In addition, the cation-induced assembled strategy, which could crosslink Au NCs in an ordered aggregated route, is more like a special case of the conventional cation-induced AIE
Effects of Different Self-Assembled Strategies on Au NCs' Optical Properties
The strategy of soft template-directed assembly usually has a great influence on the luminescence of Au NCs. On one hand, the components of some soft templates can themselves be used to be as capping ligands to in situ synthesize Au NCs, thereby forming a more compact and rigid ligand shell on the surface of the Au(0) core to enhance Au NCs' luminescence via the AIE mechanism. For instance, Au(I)-cysteine assemblies strongly staple on the surface of inner formed Au(0) cores, which reduces the PL quenching by collision and restrains the intramolecular vibration-and rotation-induced internal nonradiative relaxation pathways, thereby obtaining a high QY of~10% [27,28]. In our previous work, during the formation of Au NCs within the crystalline Au(I)-GSH networks, some Au(I)-GSH complexes could insert their thiol ligands into the shell of Au NCs to form strong aurophilic interactions with the Au(I) of the staple-like motifs binding to the Au(0) core, leading to the formation of a crystalline and hence more compact shell, thereby contributing to the QY of~14% [29]. On the other hand, some templates could only function as the supporting matrix to confine Au NCs in limited space, which resulted in the matrix-coordinate-induced aggregation restraining the nonradiative relaxation channels via locking the ligands of Au NCs in the matrix, thereby generating an intense AIE effect. For instance, GSH-capped Au NCs impregnated within the confined space in chitosan nanogel have a strong coordination between -COOH of GSH with negative charges, and -NH 2 of chitosan with positive charges. This intense coordination could further restrict the GSH ligands' intra-or intermolecular vibration-and rotation-induced nonradiative relaxation pathways, thereby contributing to the luminescent intensity to a major extent [33].
In addition, the cation-induced assembled strategy, which could crosslink Au NCs in an ordered aggregated route, is more like a special case of the conventional cation-induced AIE method. The coordination of cations with the negatively charged groups in capping ligands on NC surfaces rigidifies the surface ligand shell, thereby restraining the intramolecular motion (RIM) to generate intense AIE [36][37][38]. Additionally, for the strategy of ligand-induced assembly, when the morphology of the ligand shell of Au NCs undergoes deformation during the ligand-induced self-assembly process, the PL intensity of the Au NCs is more likely to be enhanced via AIE mechanism. For instance, BSA-capped Au NCs can self-assemble via a protein cross-linking approach with negligible change in optical properties compared to the individual NCs, because the BSA shell is relatively rigid and its intramolecular rotations and vibrations are unaffected during assembly [40,41]. Contrarily, self-assembled PNIPAm-capped Au NCs exhibited enhanced luminescent intensity above the clouding temperature, owing to the formation of more compact and rigid PNIPAm self-assembled structures around Au NCs, thereby inducing the RIM effect to generate strong AIE [39].
The Self-Assembly of Ag NCs
Self-assembly studies of Ag NCs are still in the preliminary stages, and have achieved limited success in strategies of capping ligand-and solvent-induced assembly into well-defined structures. However, DNA-directed self-assembly of Ag NCs is being intensively investigated because DNA could not only serve as capping ligands for Ag NCs, but also as versatile building blocks for programmable assembly. Therefore, we will describe the self-assembly of Ag NCs in three ways: DNA-, capping-ligand-, and solvent-induced assembly.
DNA-Induced Assembly
DNA-capped Ag NCs have attracted tremendous attention as a novel powerful fluorescence nanomaterial, which are well-known to exhibit sequence-dependent emission. Some efforts have been made to spatially manipulate the self-assembly of Ag NCs through DNA nanostructures. By utilizing sequence-specific loops as the stabilized ligands, Orbach et al. proposed two approaches to synthesis of self-assembled nucleic-acid-capped Au NC nanowires with red or yellow emission [42]. One is through the hybridization-polymerization process of nucleic acids. The other is through the nucleic-acid-driven hybridization chain reaction. Additionally, Ye et al. presented another simple method to assemble Ag NCs and a G-rich strand into nanowires in the presence of a long, enzymatically produced scaffold [43]. The scaffold drives a number of Ag NCs and G-rich strands in proximity, which leads to an approaching of the end of G-rich strands to NCs, thereby generating a great enhancement in emission.
On the other hand, the double-stranded-DNA-stabilized Ag NCs (dsDNA-Ag NCs) could self-assemble into a large sheet-like membrane in solution driven by bovine serum albumin (BSA), thereby generating AIE-induced five-fold emission enhancement and a blue shift in emission [44]. After addition of digestive enzyme, the irregular morphology of Ag NC assemblies would transform into large, well-defined particles with more significant emission enhancement (30-fold), owing to their altered surface ( Figure 5). In addition, Wu and coworkers recently reported two other self-assembled modes of dsDNA-Ag NCs [45]. Through the co-assembly of dsDNA-Ag NCs and human papillomavirus (HPV), 16 main capsid protein L1, empty HPV virus-like particles (VLPs) were formed in assembly buffer, of which the cavities were further bound with dsDNA-Ag NCs, exhibiting enhanced emission, while the post-assembly induced the binding of dsDNA-Ag NCs to the external surface of VLPs, which showed no enhancement. Accordingly, the co-assembly of capsid and dsDNA-Ag NCs provide a novel emissive method to monitor the process of in situ VLP self-assembly.
Solvent-Induced Assembly
Manipulating the self-assembly of Ag NCs based on non-covalent forces through modulating the interactions between building blocks and solvents. Shen et al. reported the controlled assembly of mercaptonicotinate (MNA)-capped Ag NCs into multilayer vesicles or nanowires in different solvents [46]. After protonating the Ag NCs by adding hydrochloric acid, the morphology of self-assembled vesicles kept stable in aprotic solvents, such as DMSO and CH3CN, while in protic solvents, such as water, MeOH, and EG, the vesicular morphology would transform into nanowires ( Figure 6). The formation of self-assembled nanowires originates from the strong solvent-bridged hydrogen bonding and the π-π stacking interactions between MNA. Furthermore, the obtained nanowires of Ag NCs could self-assemble into hydrogels, which have a high water content of 99.5% and excellent self-healing and mechanical strength properties. Additionally, Gao et al. recently reported another example of solvent-induced self-assembly of Ag NCs [47]. Using water-soluble N-acetyl-L-cysteine (NALC) as reducing and capping ligands, atomically precise Ag6 NCs were synthesized. By introducing ethanol to the aqueous solution of NCs, the NALC-capped Ag6 NCs further self-assembled into ultrafine nanowires, long ribbons, and 3D porous networks, owing to the solvent polarity, van der Waals, and electrostatic interactions between NALC ligands. Such self-assembly of Ag NCs exhibits a great potential in the future manufacture of nanodevices based on Ag NCs.
Solvent-Induced Assembly
Manipulating the self-assembly of Ag NCs based on non-covalent forces through modulating the interactions between building blocks and solvents. Shen et al. reported the controlled assembly of mercaptonicotinate (MNA)-capped Ag NCs into multilayer vesicles or nanowires in different solvents [46]. After protonating the Ag NCs by adding hydrochloric acid, the morphology of self-assembled vesicles kept stable in aprotic solvents, such as DMSO and CH 3 CN, while in protic solvents, such as water, MeOH, and EG, the vesicular morphology would transform into nanowires ( Figure 6). The formation of self-assembled nanowires originates from the strong solvent-bridged hydrogen bonding and the π-π stacking interactions between MNA. Furthermore, the obtained nanowires of Ag NCs could self-assemble into hydrogels, which have a high water content of 99.5% and excellent self-healing and mechanical strength properties. Additionally, Gao et al. recently reported another example of solvent-induced self-assembly of Ag NCs [47]. Using water-soluble N-acetyl-L-cysteine (NALC) as reducing and capping ligands, atomically precise Ag 6 NCs were synthesized. By introducing ethanol to the aqueous solution of NCs, the NALC-capped Ag 6 NCs further self-assembled into ultrafine nanowires, long ribbons, and 3D porous networks, owing to the solvent polarity, van der Waals, and electrostatic interactions between NALC ligands. Such self-assembly of Ag NCs exhibits a great potential in the future manufacture of nanodevices based on Ag NCs.
Solvent-Induced Assembly
Manipulating the self-assembly of Ag NCs based on non-covalent forces through modulating the interactions between building blocks and solvents. Shen et al. reported the controlled assembly of mercaptonicotinate (MNA)-capped Ag NCs into multilayer vesicles or nanowires in different solvents [46]. After protonating the Ag NCs by adding hydrochloric acid, the morphology of self-assembled vesicles kept stable in aprotic solvents, such as DMSO and CH3CN, while in protic solvents, such as water, MeOH, and EG, the vesicular morphology would transform into nanowires ( Figure 6). The formation of self-assembled nanowires originates from the strong solvent-bridged hydrogen bonding and the π-π stacking interactions between MNA. Furthermore, the obtained nanowires of Ag NCs could self-assemble into hydrogels, which have a high water content of 99.5% and excellent self-healing and mechanical strength properties. Additionally, Gao et al. recently reported another example of solvent-induced self-assembly of Ag NCs [47]. Using water-soluble N-acetyl-L-cysteine (NALC) as reducing and capping ligands, atomically precise Ag6 NCs were synthesized. By introducing ethanol to the aqueous solution of NCs, the NALC-capped Ag6 NCs further self-assembled into ultrafine nanowires, long ribbons, and 3D porous networks, owing to the solvent polarity, van der Waals, and electrostatic interactions between NALC ligands. Such self-assembly of Ag NCs exhibits a great potential in the future manufacture of nanodevices based on Ag NCs. Figure 6. Schematic representation of the building block of Ag6-NC for the morphological evolution process controlled by molecular structure and solvents into vesicles and nanowires. Adapted with permission from Ref. [46]. Copyright (2017) Royal Society of Chemistry. Figure 6. Schematic representation of the building block of Ag 6 -NC for the morphological evolution process controlled by molecular structure and solvents into vesicles and nanowires. Adapted with permission from Ref. [46]. Copyright (2017) Royal Society of Chemistry.
Ligand-Induced Assembly
Self-assembled Ag NCs can be obtained from a bottom-up route by choosing appropriate capping ligands. Li et al. developed a ligand etching strategy to direct the self-assembly of NCs into lamellar supramolecular structures [48]. By employing p-aminothiophenol (PATP) as an etchant, self-assembled lamellar Ag nanoleaves composed of Ag 25 NCs and PATP spontaneously formed from etching 4 nm Ag nanoparticles (Figure 7). The mechanism of assembly was revealed as a two-step reaction. First, the 4 nm Ag nanoparticles were rapidly etched by PATP into~1 nm Ag 25 NCs, which were further interconnected by PATP to form Ag 25 -PATP-Ag 25 complexes owing to the electrostatic and covalent interactions. Second, these Ag 25 -PATP-Ag 25 complexes served as building blocks to assemble lamellar Ag nanoleaves, due to the intense dipole-dipole interaction and π-π stacking force between the neighboring rigid benzene skeleton of PATP. Although this strategy shows a vital route to design novel morphologies of Ag assemblies, the two-step assembly process is complicated and time-consuming. Jia et al. reported a more effective and straightforward avenue to bottom-up synthesis of self-assembled Ag NCs [49]. Utilizing D-penicillamine (DPA) as a reducing and capping agent, the self-assembly of Ag NCs could be obtained by one-pot microwave-assisted synthesis. Through tuning the synthesis conditions, including the precursor concentration, chirality of DPA, environment temperature, and limited reaction volume, the Ag NCs would self-assemble into different morphologies with varied emission colors. The as-prepared Ag NC assemblies also exhibited intense emissions with a QY as high as 25.6%, owing to the mechanism of AIE. In addition, one of the morphologies of NC assemblies, the lamellar supramolecular structure, possesses excellent electrical conductivity due to its well-confined and closely packed architecture.
Ligand-Induced Assembly
Self-assembled Ag NCs can be obtained from a bottom-up route by choosing appropriate capping ligands. Li et al. developed a ligand etching strategy to direct the self-assembly of NCs into lamellar supramolecular structures [48]. By employing p-aminothiophenol (PATP) as an etchant, self-assembled lamellar Ag nanoleaves composed of Ag25 NCs and PATP spontaneously formed from etching 4 nm Ag nanoparticles (Figure 7). The mechanism of assembly was revealed as a two-step reaction. First, the 4 nm Ag nanoparticles were rapidly etched by PATP into ~1 nm Ag25 NCs, which were further interconnected by PATP to form Ag25-PATP-Ag25 complexes owing to the electrostatic and covalent interactions. Second, these Ag25-PATP-Ag25 complexes served as building blocks to assemble lamellar Ag nanoleaves, due to the intense dipole-dipole interaction and π-π stacking force between the neighboring rigid benzene skeleton of PATP. Although this strategy shows a vital route to design novel morphologies of Ag assemblies, the two-step assembly process is complicated and time-consuming. Jia et al. reported a more effective and straightforward avenue to bottom-up synthesis of self-assembled Ag NCs [49]. Utilizing D-penicillamine (DPA) as a reducing and capping agent, the self-assembly of Ag NCs could be obtained by one-pot microwave-assisted synthesis. Through tuning the synthesis conditions, including the precursor concentration, chirality of DPA, environment temperature, and limited reaction volume, the Ag NCs would self-assemble into different morphologies with varied emission colors. The as-prepared Ag NC assemblies also exhibited intense emissions with a QY as high as 25.6%, owing to the mechanism of AIE. In addition, one of the morphologies of NC assemblies, the lamellar supramolecular structure, possesses excellent electrical conductivity due to its well-confined and closely packed architecture.
Effect of Different Self-Assembly Strategies on Ag NCs' Optical Properties.
The DNA-induced assembly strategy has an important influence on the luminescence of DNA-capped Ag NCs. On one hand, the luminescent intensity of self-assembled DNA-Ag NCs nanowires could be enhanced though "G-rich" sequences [43]. It is well-known that the proximity of "G-rich" sequences to the Ag NCs would separate Ag NCs from the solvent to form a better protected environment around NCs, thereby reducing the nonradiative relaxation pathways to obtain a luminescent enhancement [50]. On the other hand, the introduction of capsids or protein to DNA-capped Ag NCs could result in the self-assembly process of Ag NCs through capsid-or protein-induced DNA assembly. Compared to the individual Ag NCs, the luminescent intensity of the self-assembled Ag NCs would be enhanced owing to the RIM effect, thereby generating strong AIE [44,45].
Additionally, some specific capping ligands are able to endow Ag NCs with the capability to self-assemble into well-defined structures, accompanied with enhanced luminescent intensity and tunable emission color. The luminescence of Ag NC assemblies originates from LMCT or LMMCT and subsequent radiative relaxation via triplet excited states. Compared with individual Ag NCs, the self-assembled NCs usually have stronger hydrogen bonding and Ag(I)···Ag(I) interaction,
Effect of Different Self-Assembly Strategies on Ag NCs' Optical Properties
The DNA-induced assembly strategy has an important influence on the luminescence of DNA-capped Ag NCs. On one hand, the luminescent intensity of self-assembled DNA-Ag NCs nanowires could be enhanced though "G-rich" sequences [43]. It is well-known that the proximity of "G-rich" sequences to the Ag NCs would separate Ag NCs from the solvent to form a better protected environment around NCs, thereby reducing the nonradiative relaxation pathways to obtain a luminescent enhancement [50]. On the other hand, the introduction of capsids or protein to DNA-capped Ag NCs could result in the self-assembly process of Ag NCs through capsid-or protein-induced DNA assembly. Compared to the individual Ag NCs, the luminescent intensity of the self-assembled Ag NCs would be enhanced owing to the RIM effect, thereby generating strong AIE [44,45].
Additionally, some specific capping ligands are able to endow Ag NCs with the capability to self-assemble into well-defined structures, accompanied with enhanced luminescent intensity and tunable emission color. The luminescence of Ag NC assemblies originates from LMCT or LMMCT and subsequent radiative relaxation via triplet excited states. Compared with individual Ag NCs, the self-assembled NCs usually have stronger hydrogen bonding and Ag(I)···Ag(I) interaction, which restricts the intramolecular motions to reduce the energy loss from the nonradiative pathways, thereby generating enhanced luminescence. Moreover, multicolor emissive Ag NC assemblies can be obtained by adjusting synthetic conditions, for example, the ligand chirality or the precursor concentration. The tunable emission color of NC assemblies is related to their varied distance of the Ag···Ag interaction through the adjustment of the hydrogen bonding and Ag···Ag interaction within assemblies [49].
The Self-Assembly of Cu NCs
In comparison to the noble metals Au and Ag, metal Cu is widely used in industry due to its abundant reserves, relatively low price, and high conductivity. As the size of metal Cu is confined to below 2 nm, Cu NCs exhibit unique photoluminescence (PL) and enhanced electrocatalytic performance compared with their larger nanometer-sized and bulk counterparts. However, the PL intensity of individual Cu NCs is normally very weak and the emission color is hard to control, owing to the weak restriction of the ligands vibrations and rotations. Additionally, individual Cu NCs can be easily oxidized and aggregate both in storage and further employment, which greatly weakens their stability in practical applications.
In contrast to individual Cu NCs, self-assembled Cu NCs exhibit enhanced PL intensity, broad emission color tenability, and excellent stability, showing tremendous potential in many fields, especially biosensing, bioimaging, LED, and electrocatalysis in oxygen reduction reactions (ORR). However, the self-assembly of ultra-small Cu NCs is still challenging and has achieved limited success. Next, we will introduce the self-assembly strategies of Cu NCs from the standpoint of capping-ligand-, hydrogeland cation-induced assembly.
Hydrogel-Templated Assembly
Hydrogel has been recently employed as the soft template to direct shaped-controlled synthesis self-assembled Cu NCs. Rogach and coworkers reported the in situ preparation of composite films incorporating Cu NCs, wherein the NCs were impregnated into a 3D hydrogel network of polyvinylpyrrolidone (PVP) and poly(vinyl alcohol) (PVA) [51]. This strategy can allow us to produce large area films of Cu NCs and avoid the use of toxic organic solvents and heavy metal elements. The as-synthesized Cu NC hydrogel film exhibited strong orange emissions with a QY of 30% through the strengthened LMCT, followed by a radiative relaxation pathway after hydrogel dehydration.
Cation-Induced Assembly
Very recently, Li et al. proposed a metal ion (Ce 3+ )-induced self-assembled strategy to rearrange the morphology of irregular aggregated cysteine-capped Cu NCs through a crosslinking pathway, leading to the formation of well-defined mesoporous self-assembled spheres [52]. Cysteine-capped Cu NCs are well-known to form aggregates in an acidic environment, emitting strong emissions due to the AIE mechanism. However, their morphologies are quite irregular, owing to a random connection with each other in a pretty fast route. To slow down the velocity of aggregation, a two-step reaction was presented as follows. First, a relatively weak alkaline Na 2 CO 3 was employed to disperse the irregular aggregates of cysteine gently by releasing OH − via a hydrolysis route. Second, Ce 3+ was added to neutralize the hydrolyzed OH − from Na 2 CO 3 , leading to an increase of pH to the neutral value, and crosslinking the dispersed NCs to assemble again in an ordered way. The as-prepared Cu NC assemblies exhibited better performance in stability and color purity tests than the irregularly aggregated NCs.
Ligand-Induced Assembly
Capping-ligand-directed assembly is the most common strategy for synthesizing Cu NCs. Besides acting as reducing and stabilizing agents, the capping ligands on the metal core surface have been known to exert a great influence on the PL of metal NCs, via charge transfer from the ligands to the metal atom core (e.g., LMCT and LMMCT) or direct donation of delocalized electrons from electron-rich groups or atoms in the ligands to the metal core. In this respect, the capping ligands of self-assembled Cu NCs not only function as building blocks to construct different morphologies, but also play an important role in enhancing the PL intensity and broadening emission color tunability of the NC assemblies. So far, great efforts have been paid to studying the ligand engineering in self-assembly of Cu NCs to improve their stability and emission, such as utilizing stiffening ligands and ligands with electron-rich groups or atoms. Accordingly, we will describe the capping-ligand-directed Cu NC self-assembly strategy based on different types of stabilizers.
Alkyl Thiols
The alkyl thiols are conventionally adopted stabilizers in the synthesis of metal NCs due to their strong interaction with metals. 1-dodecanethiol (DT), as the most commonly used alkyl thiol capping ligand in self-assembly of Cu NCs, was shown to be an excellent stabilizer for the synthesis of Cu NCs by reducing Cu 2+ in dibenzyl ether (BE). The directly synthesized DT-capped Cu NCs in BE were individual and showed no visible emission by 365 nm excitation. After annealing treatments to facilitate the dynamic mobility of DT ligands, these individual Cu NCs with poor emission could further self-assemble into two-dimensional architectures, orientated by the polar attraction between NCs and reinforced by the van der Waals force attraction between DT. By adjusting the annealing temperature, Zhang and coworkers further prepared self-assembled 2D ribbons of DT-Cu NCs in BE with varied compactness [53]. More compacted assemblies of NCs emit stronger emissions, owing to the strengthened inter-and intra-coprophilic interactions and weakened intramolecular vibration and rotation of DT ligands. Meanwhile, the improved compactness introduces additional Cu(I)-Cu(I) cuprophilic interactions of inter-NCs leading to a blue shift in emission, leading to tunable emission color from yellow to blue-green. Self-assembled DT-capped NCs with different emission colors were employed to fabricate NC-based white LEDs.
Self-assembled ribbons of DT-stablished Cu NCs could also be obtained via direct synthesis of Cu NCs in a mixed solvent of BE and liquid paraffin (LP), using DT as reducing agent. The spontaneous self-assembly of Cu NCs in the colloidal solution was controlled by dipole-induced asymmetric van der Waals attraction. By tuning these two driven forces via annealing treatments, the thickness of the self-assembled ribbons could be adjusted to one single NC scale at high annealing temperatures ( Figure 8). Due to the strong van der Waals force inter-NCs, the self-assembled ribbons were free-standing and could be collected as electrocatalysts for ORR. Compared to the individual Cu NCs, the ribbons exhibited improved stability and excellent electrocatalytic capability [54].
the metal atom core (e.g. LMCT and LMMCT) or direct donation of delocalized electrons from electron-rich groups or atoms in the ligands to the metal core. In this respect, the capping ligands of self-assembled Cu NCs not only function as building blocks to construct different morphologies, but also play an important role in enhancing the PL intensity and broadening emission color tunability of the NC assemblies. So far, great efforts have been paid to studying the ligand engineering in self-assembly of Cu NCs to improve their stability and emission, such as utilizing stiffening ligands and ligands with electron-rich groups or atoms. Accordingly, we will describe the capping-ligand-directed Cu NC self-assembly strategy based on different types of stabilizers.
Alkyl Thiols
The alkyl thiols are conventionally adopted stabilizers in the synthesis of metal NCs due to their strong interaction with metals. 1-dodecanethiol (DT), as the most commonly used alkyl thiol capping ligand in self-assembly of Cu NCs, was shown to be an excellent stabilizer for the synthesis of Cu NCs by reducing Cu 2+ in dibenzyl ether (BE). The directly synthesized DT-capped Cu NCs in BE were individual and showed no visible emission by 365 nm excitation. After annealing treatments to facilitate the dynamic mobility of DT ligands, these individual Cu NCs with poor emission could further self-assemble into two-dimensional architectures, orientated by the polar attraction between NCs and reinforced by the van der Waals force attraction between DT. By adjusting the annealing temperature, Zhang and coworkers further prepared self-assembled 2D ribbons of DT-Cu NCs in BE with varied compactness [53]. More compacted assemblies of NCs emit stronger emissions, owing to the strengthened inter-and intra-coprophilic interactions and weakened intramolecular vibration and rotation of DT ligands. Meanwhile, the improved compactness introduces additional Cu(I)-Cu(I) cuprophilic interactions of inter-NCs leading to a blue shift in emission, leading to tunable emission color from yellow to blue-green. Self-assembled DT-capped NCs with different emission colors were employed to fabricate NC-based white LEDs.
Self-assembled ribbons of DT-stablished Cu NCs could also be obtained via direct synthesis of Cu NCs in a mixed solvent of BE and liquid paraffin (LP), using DT as reducing agent. The spontaneous self-assembly of Cu NCs in the colloidal solution was controlled by dipole-induced asymmetric van der Waals attraction. By tuning these two driven forces via annealing treatments, the thickness of the self-assembled ribbons could be adjusted to one single NC scale at high annealing temperatures (Figure 8). Due to the strong van der Waals force inter-NCs, the self-assembled ribbons were free-standing and could be collected as electrocatalysts for ORR. Compared to the individual Cu NCs, the ribbons exhibited improved stability and excellent electrocatalytic capability [54]. In addition, many efforts have been made to further control the self-assembly of the DT-protected NCs. First, light-controlled self-assembly of DT-capped Cu NCs. Zhang′ s group modified DT with photo-responsive azobenzene (Azo) group, and used the Azo-DT as capping In addition, many efforts have been made to further control the self-assembly of the DT-protected NCs. First, light-controlled self-assembly of DT-capped Cu NCs. Zhang s group modified DT with photo-responsive azobenzene (Azo) group, and used the Azo-DT as capping ligands to synthesize Cu NCs in a colloidal solution of BE and LP [55]. The synthesized NCs subsequently self-assembled into ribbons by dipole-induced asymmetric van der Waals attraction, and could further transform into spheres in response to the irradiation of UV light. The self-assembled ribbons and spheres of Cu NCs were employed in ORR and the ribbons exhibited better catalytic activity. Second, chloride ions oriented self-assembly of DT-capped Cu NCs [56]. Owing to the selective adsorption of chloride ions on the specific facets of Cu NCs, the inter-NC dipolar attraction was weakened, resulting in the redistribution of the DT ligands. Accordingly, the morphology of the self-assembled Cu NCs transformed from 1D nanowires to 2D nanoribbons and nanosheets with increasing concentration of chloride.
Moreover, the metal defects in NC assemblies have been reported to exert a great influence on the emission intensity and color turnability of self-assembled Cu NCs. The contribution of metal defects in NC self-assembly is revealed by accelerating the self-assembly process to deliberately create more metal defects on the surface using ethanol. The metal-defect-rich nanosheets have been determined to possess a high percentage of Cu(I), which facilitates the radiative relaxation pathways by ligand-to-metal-metal charge transfer (LMMCT). Accordingly, the quantum yield (QY) of NC assemblies is greatly enhanced (15.4%), and the emission of the NC assemblies is red-shifted [57]. Inspired by the contribution of Cu(I) metal defects to emission, Au(I) metal defects were doped into the self-assembled nanosheet of Cu NCs to form an additional Au(I)-centered state [58]. The doped Au(I) metal defects introduced new Au(I)-Cu(I) metallophilic interactions, which resulted in the ligand-to-Cu-Au charge transfer facilitating the radiative relaxation pathway, thereby enhancing the emission intensity. Meanwhile, the Au(I) defect doping lowered the energy, leading to a red shift of emission. Only 0.3% of Au(I) doped in assemblies could induce a four-fold emission enhancement and a 100 nm red shift of the emission. The mixture of self-assemblies of NCs with different emission colors and high emission intensity was employed as excellent phosphor in white LED.
Aromatic Thiol
Compared to alkyl thiols, the aromatic thiols possess conjugated benzenes, the electronic structures of which are able to be flexibly controlled by altering substitutional groups. The NCs stablished by the aromatic thiols usually exhibit unique electronic structures, electrochemical properties, and surface chemistry. Moreover, aromatic thiol stabilizers increase the electron delocalization of NCs, leading to red shift of the emissions. On the basis of the advantages of the aromatic-thiol-capped Cu NCs, replacing the conventional alkyl thiols with aromatic ones as reducing and capping agents to synthesize self-assembled Cu NCs has a great effect on LMCT and LMMCT, resulting in emission enhancement and emission color tunability. For example, utilizing 2,3,5,6-tetrafluorothiophenol (TFTP) as a reducing and capping agent, self-assembled nanoribbons of Cu NCs were easily synthesized, for which the absolute quantum yield (QY) was as high as 43.0% [59]. Additionally, by using different aromatic thiols with varied conjugation capabilities as capping ligands, the emission color of NC assemblies can be tuned from yellow to dark red and their QY can reach as high as 15.6% [60]. Moreover, the aromatic-thiol-capped self-assembled Cu NCs could be also obtained by a bottom-up synthesis through a ligand exchange reaction of individual Cu NCs with aromatic-thiols and a spontaneous self-assembly. For example, the self-assembled MUA-capped Cu NCs could be prepared by this bottom-up synthetic strategy, exhibiting permanent excimer-like physics and controlled optical properties [61].
The isomeric effects of aromatic thiols as capping ligands on the self-assembly of Cu NCs were further examined. Using three isomers of mercapto-benzoic acid (MBA) as reducing and capping agentd, one-pot synthesis of self-assembled Cu NCs was developed [62]. The assemblies capped by the three isomers of MBA (TA, 3-MBA, 4-MBA) exhibited different optical and physical properties, such as varied emission color, emission intensity, and pH-induced morphologies. More attempts have been made to control the self-assembly process of aromatic-thiol-capped Cu NCs. By adjusting the experimental variables during self-assembly (e.g., temperature, duration of assembly, the solvents . . . ) to influence the weak interaction between NCs, the inter-NC distance in the assemblies can be further controlled, leading to the variation of photophysical properties, especially allowing emission color tunability and controllability of emission intensity via ligand-to-Cu-Cu charge transfer [63].
Effect of Different Self-Assembly Strategies on Cu NCs' Optical Properties
The ligand-induced assembly strategy plays an important role in self-assembly of Cu NCs. Through control of the compactness of Cu NC assemblies by adjusting the experimental parameters, including the annealing treatment, the concentration of chloride ions, the solvent, etc., enhanced luminescence intensity and tunable emission color of self-assembled Cu NCs can be obtained. High compactness introduced new inter-NC cuprophilic interaction to increase the average distance between adjacent Cu(I) atoms, thereby resulting in the blue emission shift of self-assembled Cu NCs. In addition, the enhanced compactness strengthened the inter-NC cuprophilic interaction, facilitating the excited-state relaxation and restrain the inter-or intramolecular vibrations and rotations of capping ligands reducing the nonradiative pathways, together leading to the enhanced luminescence via AIE mechanism.
In addition, the dehydration process in hydrogel-templated Cu NC assembly exerts an important effect on the PL intensity of Cu NC assemblies through the AIE mechanism [51]. Before dehydration, the Cu NCs impregnated in the hydrogel experience relatively flexible motion of capping ligands on the Cu(0) core, thereby increasing energy loss via nonradiative pathways, and the NC assemblies exhibit weak luminescence. However, after dehydration, the hydrogel becomes more compact and rigid, restricting the RIM of ligands on the NC surface, thereby generating strong AIE. It is also possible that the inter-NC interactions become strengthened, owing to the formation of the additional cuprophilic interaction. Additionally, the luminescence intensity of Cu NCs could be enhanced through cation-induced NC self-assembly into highly ordered structures. For instance, the crosslinking of Cu NCs by Ce 3+ self-assembles into well-ordered mesoporous spheres, which are compact to induce RIM effect, thereby exhibiting enhanced luminescent intensity via AIE mechanism [52].
Summary and Future Perspective
In summary, we discovered that the self-assembly technique could be applied to metal NCs by conducting their capping ligands configuration to introduce additional noncovalent or covalent forces to the inter-NCs interactions, such as amphiphilicity, electrostatic interactions, van der Waals forces, hydrogen bonding, or disulfide bonds, thereby guiding them to spontaneously organize into well-define and stable architectures. Three fundamental self-assembly strategies of metal NCs are presented, including soft template-, capping-ligand-, solvent-, and cation-induced assembly. Moreover, the self-assembly strategies of every metal NC exhibit different characteristics. As to Au NCs, the most common strategy is utilizing different soft templates to direct the shape-controlled assembly, such as amphiphilic hydrocarbon, Au(I)-thiolate complexes, CTAB−metal halide complexes, polymer, and protein fibrils. As to Ag NCs, DNA-induced assembly is most widely studied, in which DNA not only serves as the capping ligand for Ag NCs, but also functions as a building block to construct well-defined structures. As to Cu NCs, using different types of stabilizers to endow Ag NCs with self-assembly capabilities has been well investigated, including alkyl thiols and aromatic thiols.
In addition, we discovered that the self-assembly process has a great influence on the optical properties of NCs. After self-assembly, metal NCs exhibit enhancing emission intensity, owing to the strengthened inter-and intra-metallophilic interactions and weakened ligand vibrations and rotations. Meanwhile, some of the assembled NCs showed tunable emission color by tuning the inter-NC metallophilic interactions to obtain blue or red shifts in emission. Moreover, these NC assemblies exhibit excellent performance in extensive fields, including biosensing, drug-delivery, bioimaging, light-emitting diodes, and electrocatalysts.
Despite that these results are encouraging, the self-assembly of metal NCs is still in a preliminary stage and needs new breakthroughs. First, many of the synthetic routes of assembled NCs involve the use of organic solvents, which causes severe environmental problems. More attention should be paid to the self-assembly of NCs in aqueous solution. Second, most of the NC assemblies are on the large size, from several hundred nanometers to mesoscale, which greatly restricts their bioimaging and biomedical applications in future. Further studies should make more efforts to synthesize novel architecture in nanoscale exploring greater possibilities in the biological area. Third, although several environmental factors have been discovered to have a great influence on NCs self-assembly process, such as solvent, temperature, the solution pH, and so on, rational control of the self-assembly process of metal NCs is still one of the key challenges. More efforts should be made to develop novel stimulus-responsive assemblies of NCs to control their morphology and optical properties. Therefore, new advances of synthetic routes and construction of intelligent nanoassemblies are still needed for metal NCs to obtain better performance in practical applications. | 14,326.2 | 2019-04-01T00:00:00.000 | [
"Materials Science",
"Chemistry",
"Physics"
] |
Machine learning parameterization of the multi-scale Kain–Fritsch (MSKF) convection scheme and stable simulation coupled in the Weather Research and Forecasting (WRF) model using WRF–ML v1.0
. Warm-sector heavy rainfall along the south China coast poses significant forecasting challenges due to its localized nature and prolonged duration. To improve the prediction of such high-impact weather events, high-resolution numerical weather prediction (NWP) models are increasingly used to more accurately represent topographic effects. However, as these models’ grid spacing approaches the scale of convective processes, they enter a “gray zone”, where the models struggle to fully resolve the turbulent eddies within the atmospheric boundary layer, necessitating partial parameterization. The appropriateness of applying convection parameterization (CP) schemes within this gray zone remains controversial. To address this, scale-aware CP schemes have been developed to improve the representation of convective transport. Among these, the multi-scale Kain–Fritsch (MSKF) scheme enhances the traditional Kain–Fritsch (KF) scheme, incorporating modifications that facilitate its effective application at spatial resolutions as high as 2 km. In recent years, there has been an increase in the application of machine learning (ML) models across various domains of atmospheric sciences, including efforts to replace conventional physical parameterizations with ML models. This work introduces a multi-output bidirectional long short-term memory (Bi-LSTM) model intended to replace the scale-aware MSKF CP scheme. This multi-output Bi-LSTM model is capable of simultaneously predicting the convection trigger while also modeling the associated convective tendencies and precipitation rates with a high performance. Data for training and testing the model are generated using the Weather Research and Forecast (WRF) model over south China at a horizontal resolution of 5 km. Furthermore, this work evaluates the performance of the WRF model coupled with the ML-based CP scheme against simulations with the traditional MSKF scheme. The results demonstrate that the Bi-LSTM model can achieve high accuracy, indicating the promising potential of ML models to substitute the MSKF scheme in the gray zone.
Introduction
Warm-sector heavy rainfall often occurs in south China during the pre-flood season, primarily influenced by the East Asian summer monsoon (Ding, 2004).These rainfall events are characterized by intense and localized precipitation over a limited area.Despite their small scale, such unexpected and extreme warm-sector rainfall can cause significant damage, including flooding homes and vehicles, destroying crop fields, and endangering lives, leading to economic losses ranging from millions to even billions of dollars (Tao, 1981;Zhao et al., 2007;Zhong et al., 2015).Accurately predicting warm-sector heavy rainfall with numerical weather prediction (NWP) models is challenging due to the complex interaction of various factors, such as the low-level jet (LLJ), Published by Copernicus Publications on behalf of the European Geosciences Union.
X. Zhong et al.: ML MSKF land-sea contrast, topography, and urban landscape (Zhong and Chen, 2017;Luo et al., 2017;Jian et al., 2002;Di et al., 2006;Xia and Zhao, 2009;Zhang and Ni, 2009).The complex terrain and heterogeneous land surface of the south China region are crucial in promoting active convection.Previous studies (Giorgi et al., 2016;Mishra et al., 2018;Schumacher et al., 2020;Onishi et al., 2023) have demonstrated that a higher spatial resolution improves the performance of convective rainfall forecasts by more accurately resolving topographic features.Acknowledging the importance of resolution in forecasting severe convective weather, both the Chinese government and the community increasingly support the development of high-resolution operational forecast models specifically designed for warm-sector rainstorms and sudden local rainstorms.In early 2017, the China Meteorological Administration (CMA) launched an initiative to develop a comprehensive framework for evaluating the forecast performance of all available models, including highresolution regional models, and advancing key technologies for forecasting high-impact weather.
The increased computational resources have facilitated a shift towards the implementation of regional NWP models with increasingly finer grid spacings, typically within the range of 1 to 10 km.However, when the model grid spacing approaches the scale of convection, entering the so-called "gray zone" (Wyngaard, 2004;Hong and Dudhia, 2012), cumulus convection transitions from being completely unresolved to partially resolved.Theoretically, the accurate representation of the smallest turbulent scales, achievable only through direct numerical simulation (DNS) at resolutions from millimeters to centimeters (Jeworrek et al., 2019), still requires the use of the parameterization of turbulence or convection in weather modeling.There is ongoing debate regarding the efficacy of employing convection parameterization (CP) within the gray zone.Several studies (Chan et al., 2013;Johnson et al., 2013) have found that reducing horizontal grid spacing to below 4 km while using the CP scheme does not enhance and may even degrade precipitation forecast performance.In contrast, other studies (Lean et al., 2008;Roberts and Lean, 2008;Clark et al., 2012) showed that forecasts with a horizontal grid spacing of 1 km, where convection is explicitly resolved, yielded more accurate spatial representation of accumulated rainfall over 48 h compared to forecasts using 12 and 4 km grid spacings.This discrepancy in research findings, with some indicating no benefit from finer grid spacing and others suggesting improved forecast accuracy, seems to stem from the application of the CP at scales beyond its originally intended operational range.Therefore, it remains unclear if utilizing any CP schemes in the gray zone is effective for predicting localized warm-sector heavy rainfall.
To enhance prediction accuracy in the gray zone, researchers have developed scale-aware CP schemes.These schemes dynamically parameterize convective processes based on the horizontal grid spacing, thus facilitating seam-less transitions between different spatial scales.A pivotal study by Jeworrek et al. (2019) demonstrated that two specific scale-aware CP schemes, Grell-Freitas (Grell and Freitas, 2014) and multi-scale Kain-Fritsch (MSKF) (Zheng et al., 2016), surpassed conventional CP schemes in predicting both the timing and intensity of precipitation over the Southern Great Plains of the United States.Additionally, Ou et al. (2020) showed that the MSKF scheme outperformed other CP schemes, including the Grell 3D ensemble (Grell and Dévényi, 2002) and the new simplified Arakawa-Schubert (Han and Pan, 2011), in precipitation simulation.This was evidenced by its lower root mean squared error (RMSE) values when compared against in situ observations and satellite data.Despite the increasing adoption of these scale-aware schemes due to their superior performance, it is crucial to acknowledge that their efficacy also relies on various empirical parameters (Villalba-Pradas and Tapiador, 2022).Therefore, developing specialized CP schemes for the gray zone in NWP models continues to be a significant challenge.
In recent years, an increasing number of studies have investigated the use of machine learning (ML) models as alternatives to conventional physics-based CP schemes.These ML-based schemes have demonstrated the potential for efficacy across various horizontal resolutions, benefiting from being trained on data from simulations that operate at varying grid resolutions (Yuval and O'Gorman, 2020).Unlike conventional CP schemes, which often rely on assumptions such as convective quasi-equilibrium (Arakawa, 2004), MLbased parameterization schemes do not require such assumptions.Notably, random forests (RFs) and fully connected (FC) neural networks (NNs) have become predominant ML models for CP schemes in previous studies.RFs offer the advantages of inherently enforcing physical constraints, such as energy conservation and non-negative surface precipitation, essential for maintaining stable simulations.O'Gorman and Dwyer (2018) demonstrated RFs' capability to emulate moist convection in an aquaplanet general circulation model (GCM), maintaining stability and effectively reproducing key climate statistics.Furthermore, Yuval and O'Gorman (2020) employed the coarse-grained output from a highresolution three-dimensional (3D) GCM model, simulated on an idealized equatorial beta plane, to train the RF parameterization.They showed that the RF parameterization is capable of reproducing the climate of the high-resolution simulation at coarser resolutions.However, FC NNs offer several advantages over RFs, such as the potential for higher accuracy and lower memory requirements.Krasnopolsky et al. (2013) introduced a stochastic CP scheme using an ensemble of three-layer NNs, trained with data generated by a cloud-resolving model (CRM) during the TOGA COARE 1 experiment, demonstrating its capacity for generating reasonable decadal climate simulations across a broader tropical Pacific region when incorporated into the National Center of Atmospheric Research (NCAR) Community Atmospheric Model (CAM).Similarly, Gentine et al. (2018) leveraged deep NN (DNN) trained on data from idealized and aquaplanet simulations performed using the Super-Parameterized Community Atmosphere Model (SPCAM).The DNN predicts temperature and moisture tendencies due to convection and clouds, as well as the cloud liquid and ice water contents.Additionally, Rasp et al. (2018) successfully implemented an NN-based parameterization in a global GCM on an aquaplanet, conducting stable prognostic simulations over multiple years that accurately reproduced the climatology of SP-CAM and capturing crucial aspects of variability, including extreme precipitation and realistic tropical waves.However, Rasp (2020) also found that minor changes to the configuration rapidly led to simulation instabilities, underscoring the need to address the robustness of NN parameterizations in GCMs.Yuval et al. (2021) developed a FC NN that predicts subgrid fluxes instead of tendencies, incorporating physical constraints from coarse-grained high-resolution atmospheric simulation in an idealized domain.Brenowitz andBretherton (2018, 2019) proposed a novel loss function designed to minimize accumulated prediction error over multiple time steps to enhance long-term stability and accuracy, by excluding upper atmospheric humidity and temperature from the input.Nonetheless, the approach of removing certain variables from the input is relatively rudimentary, demanding additional research to enhance the stability of NN-based parameterizations when integrated into the model.
Previous studies have predominantly used FC NNs to emulate convection, while more advanced NN structures have the potential to achieve higher accuracy.In a pioneering study, Han et al. (2020) explored the use of a deep residual convolutional NN (ResNet) (He et al., 2016) for the emulation of convection and cloud parameterization in the SPCAM model using a realistic configuration.They compared the performance of ResNet with various NN architectures, including a FC DNN, a DNN with skip connections, and a convolutional NN (CNN) without skip connections.The results revealed that ResNet and CNNs without skip connections outperformed FC NNs and DNNs with skip connections in accuracy, with ResNet and CNNs without skip connections showing comparable performance.This finding highlights the significant role of convolutions in enhancing accuracy.Furthermore, Yao et al. ( 2023) evaluated multiple ML model structures for simulating atmospheric radiative transfer processes, encompassing FC NNs, CNNs, bidirectional recurrent-based coupling of the ocean and atmosphere in the western Pacific warm pool region from November 1992 to February 1993, encompassing 120 d of field experiments involving the deployment of oceanographic ships, moorings, drifters, and Doppler radars (ship, land, and air).
NNs (RNNs), transformer-based NNs (Vaswani et al., 2017), and Fourier neural operators (FNOs; Li et al., 2020).Their results indicated that models capable of perceiving the global context of the entire atmospheric column significantly outperformed FC NNs and CNNs.Particularly, the bidirectional long short-term memory (Bi-LSTM) achieved the highest levels of accuracy.Similar to radiative transfer modeling, Han et al. (2020) also emphasized the importance of ML having a global perspective of the entire atmospheric column for ML models in convection modeling.They demonstrated that increasing the depths of CNNs from 4 to 22 layers significantly improved model accuracy, a benefit primarily attributed to the expansion of the receptive field in deeper CNN layers.Therefore, ML models that integrate both global and local perception capabilities are better suited for developing ML-based CP schemes.
Previous research has mostly focused on replacing CP schemes in GCM models with ML models for climate forecasting.The complexity of CP schemes in weather forecasting models surpasses that in GCMs (Arakawa, 2004).Generally, CP schemes in GCMs, whether in explicit or implicit form, assume that both the horizontal grid size and the temporal intervals for physics implementation are significantly larger and longer compared to the grid size and duration of individual moist-convective elements.In contrast, CP schemes in high-resolution models must account for dependencies on both the model's resolution and the time interval for implementing the physics (Arakawa, 2004).The ultimate goal is to develop ML models, based on data from superparameterization or cloud-resolving models, to replace conventional CP schemes in weather forecasting models.This replacement seeks to reduce uncertainties and improve the efficacy of ML parameterizations.This study represents an initial effort to employ a ML model as an alternative to conventional CP schemes in weather forecasting models.For our dataset, we used the Weather and Research Forecasting (WRF) (Skamarock et al., 2021) model that covers the south China region, incorporating the scale-aware MSKF scheme employed as the CP scheme.The MSKF scheme, an improved version of the Kain-Fritsch (KF) scheme (Kain andFritsch, 1990, 1993;Kain, 2004), aims to mitigate the overestimation of precipitation and address the premature convection trigger issue, particularly evident in overestimating precipitation during summer.To address these issues, the MSKF incorporates a scale-dependent capability, such as modifying the formulation of the convective adjustment timescale.This vital parameter, which determines the intensity and duration of convection, has been made dynamic and dependent on grid resolution (Zhang et al., 2021b).Furthermore, we utilize a Bi-LSTM model to emulate the convective processes and couple it with the WRF model through the WRF-ML coupler developed by Zhong et al. (2023a).The performance of the ML-based CP scheme is evaluated in both offline and online settings.
The paper is structured as follows.Section 2 provides a description of the WRF model for data generation, as well as the input and output data of the ML model.In Sect.3, the original and the ML-based MSKF schemes are introduced.The results for both offline and online testing of the MLbased MSKF scheme are presented in Sect. 4. Finally, Sect. 5 presents the summary and conclusion.
Data generation
The dataset was generated by running the WRF model version 4.3 (Skamarock et al., 2021).The following subsections provide a comprehensive explanation of the WRF model configurations, as well as the input and output variables employed in the development of the ML-based CP scheme.
The WRF model is compiled using the GNU Fortran (gfortran version 7.5.0)compiler with the dmpar option.The WRF model is run using the domain configuration illustrated in Fig. 1.The WRF model is configured with a single domain consisting of 44 000 grid points, with a horizontal grid spacing of 5 km and dimensions of 220 × 200 grid points in the west-east and north-south directions.The model consists of 45 vertical levels (i.e., 44 vertical layers), with a model top at 50 hPa.Additionally, the WRF model is configured with physics schemes, including a WSM 6-class graupel scheme (Hong and Lim, 2006) for microphysics, a Bougeault-Lacarrère (BouLac) scheme (Bougeault and Lacarrère, 1989) for planetary boundary layer (PBL) mixing, the Monin-Obukhov (Janjic) surface layer scheme (Janjic, 1996), the Unified Noah model (Livneh et al., 2011) for land surface, RRTMG for both shortwave and longwave radiation (Iacono et al., 2008), and MSKF (Zheng et al., 2016) for cumulus.The time step used for all WRF simulations is set to 15 s.
The initial and boundary conditions for this work were derived from the ERA5 reanalysis dataset, which was provided by the European Centre for Medium-range Weather Forecast (ECMWF) (Hersbach et al., 2020).The ERA5 reanalysis dataset used in this study has a horizontal resolution of 0.25°and consists of 29 pressure levels below 50 hPa.To create a dataset for developing the ML model, the WRF simulations were initialized at 12:00 UTC and conducted nine times every 2 d, specifically from 20 May 2022 to 5 June 2022.Throughout the simulations, the MSKF scheme was called every 5 model minutes, generating outputs at each call.The simulations ran for 36 h each time, with the first 24 h used for training and the last 12 h for validation.Therefore, the total number of training samples is 114 444 000 (114 444 000 = 44 000×9×(24×60/5+1)), while the offline validation set contains 57 024 000 (57 024 000 = 44 000×9× 12 × 60/5) samples.Furthermore, given the possible discrepancy between offline performance, we conducted experiments that coupled the ML-based MSKF scheme with the WRF model.This coupling aims at evaluating the online efficacy of the MLbased MSKF scheme by comparing it with the original WRF simulations.These simulations were performed four times every 2 d, with each simulation extending over a period of 168 h (7 d).The initialization days spanned from 12 to 18 June 2022.
Input and output data
Table 1 presents a comprehensive list of the input and output variables used in this study, consistent with those utilized in the original MSKF scheme.There are 17 variables exclusively used as input, while 9 variables serve as both input and output.Specifically, the output variable raincv, representing the time step precipitation due to convection, is calculated through multiplying the precipitation rate by the model's time step.Among all the variables, five are two-dimensional (2D) surface variables, while the remaining ones are 3D variables characterized by 44-layer vertical profiles.Moreover, the ML model used in this study incorporates four derived variables as input.These variables consist of a 2D Boolean variable indicating convection triggering based on nca values, the pressure difference across adjacent vertical levels, the saturated water vapor mixing ratio, and the relative humidity.Furthermore, the output w0avg, which depends on the vertical wind component (w) and input w0avg, is also included as an input to the model.In total, the ML model utilizes 27 input variables.
The variable nca represents the cloud relaxation time and must be an integer multiple of the model time step.For all WRF model simulations conducted in this study, a fixed time step of 15 s is used.Thus, nca is expected to be exactly divisible by 15.To eliminate dependence on the specific model time step, nca is divided by the model time step before normalization is applied during model training.Moreover, within the MSKF scheme, nca plays a crucial role in determining the triggering of convection.Convection is triggered when nca is equal to or exceeds half of the model time step.
To ensure consistency with the dimensions of the 3D variables, the surface variables are padded by duplicating the values of the surface layer for all layers before feeding them into the model.Prior to utilizing the variables in the Bi-LSTM model for training and validation, normalization is applied to ensure uniformity in the magnitudes of all the variables.Each variable is divided by the maximum absolute value in the atmospheric column (for 3D variables) or at the surface (for surface variables).
Method
This section describes the flowchart of the original MSKF scheme for determining the convection trigger, ML model structures and training, and evaluation methods.
Description of the original MSKF module
The MSKF scheme is a scale-aware adaptation of the KF CP scheme, initially developed by Kain andFritsch (1990, 1993) and further refined by Kain (2004).Figure 2 illustrates the convection trigger process within the MSKF scheme.At the beginning of each simulation step, the scheme evaluates the variable nca to ascertain whether it equals or surpasses a threshold, defined as half of the model's time step (dt).Should nca equal or exceed half of dt, there would be no need to update convective tendencies or precipitation rates due to ongoing convection.In contrast, an nca value below this threshold triggers the MSKF scheme to employ a onedimensional cloud model.This model calculates a set of variables related to cloud characteristics to evaluate the potential of convection triggering.Essential variables include the lifting condensation level (LCL), convective available potenhttps://doi.org/10.5194/gmd-17-3667-2024 Geosci.Model Dev., 17, 3667-3685, 2024 tial energy (CAPE), cloud top and base heights, and entrainment rates.The LCL is crucial for determining the emergence of potential convective activities, with a lower LCL favoring more intense convection.CAPE quantifies the buoyant energy available to an air parcel for the formation of deep convective clouds upon reaching its level of free convection (LFC) above the LCL, with higher CAPE values signifying greater potential for intense convection.The cloud base is generally at the LCL, whereas the cloud top is defined at the altitude where buoyancy becomes negligible.Meanwhile, the vertical extent between the cloud base and top affect the cloud's growth and precipitation potential.The MSKF scheme requires surpassing a specific CAPE threshold to trigger convection.Furthermore, it assesses entrainment rates to measure the impact of ambient air on the evolution of the convective system.At grid points where convection is triggered, the MSKF scheme calculates both convective tendencies and precipitation rates; otherwise these values are set to zero.However, the variable w0avg is consistently updated, regardless of convection status.Active convection leads to a decrement in nca by one model time step for each iteration within the WRF model cycle.
Description of the ML-based MSKF scheme
In the original MSKF scheme, atmospheric columns are processed sequentially, one at a time, until all horizontal grid points within the domain have been processed.In contrast, the ML-based MSKF scheme processes data in batches, as indicated by B in Fig. 3, consisting of 27 features across 44 vertical layers.As a result, the input data have a dimension of B × 27 × 44.Before being fed into the ML model, the input data undergo pre-processing through a module incorporating a one-dimensional (1D) convolutional layer.This module expands the feature dimension from 27 to 64.The following sections provide a comprehensive description of the structures of the ML model.
ML model structure
Predicting whether convection is triggered as well as modeling convective tendencies and precipitation rates are two primary objectives of conventional CP schemes.Previous studies have applied ML models to address these objectives, with some dedicated solely to the classification task of the convection trigger (Zhang et al., 2021a), while others have independently pursued the regression of convective tendencies (Rasp et al., 2018;Brenowitz and Bretherton, 2019;Wang et al., 2022).However, regression-based models alone may result in inconsistent convective tendencies, leading to conflicting signals for triggering convection at specific grid points (see Figs. A3 and A4 in Appendix A).In contrast, models that focus exclusively on classification lack the capability to generate essential tendencies for an effective CP scheme.Therefore, the development of a ML-based CP scheme necessitates the integration of both a binary classification model for the prediction of the convection trigger and a regression model for convective tendencies.To address this, we propose a multi-output Bi-LSTM model capable of concurrently conducting regression and classification predictions (Fig. 3).
Our proposed model consists of a shared Bi-LSTM layer for learning features, a classification subnetwork, and a regression subnetwork.The shared Bi-LSTM layer includes three repeated Bi-LSTM blocks, with each block containing a forward and a backward layer that have a feature dimension of 32.The classification subnetwork is composed of a 1 × 1 1D convolutional layer, a FC layer, and a sigmoid activation layer.The output of the sigmoid layer represents the probability distribution of the convection trigger.The binary cross-entropy loss function is employed as the cost function for this classification task.Meanwhile, the regression subnetwork incorporates a FC layer to output precipitation rate, nca, and convective tendencies.Finally, outputs from both subnetworks are processed through a post-processing module to ensure their physical consistency (see Figs. A5 and A6 in Appendix A), with further details provided in the subsequent subsection.
Post-processing module
The post-processing module is designed to ensure physical consistency of all variables.To achieve this, the following rules are applied: (1) at grid points where the input nca is equal to or greater than half the value of dt, all other variables remain unchanged as they are still within the convection lifetime.
(2) The output nca must be an integer.
(3) At grid points where convection is predicted to be inactive, all corresponding output variables are set to zero by default.In addition, the calculation of time step convective precipitation (raincv) follows the methodology outlined in a previous section, Sect.2.2.
Model training
As our model incorporates both classification and regression tasks, we optimize its performance by minimizing a multitask loss function (Ren et al., 2016).The loss function is defined as the sum of the binary cross-entropy loss for the convection trigger and a weighted combination of the L1 loss for all output variables from the regression subnetwork.The https://doi.org/10.5194/gmd-17-3667-2024Geosci.Model Dev., 17, 3667-3685, 2024 specific formulation of the loss function is as follows: Here, i and j denote the grid points in the domain and p i,j represents the probability of convection being triggered.The ground-truth label p gt i,j takes a value of 1 if convection is triggered and 0 otherwise.The classification loss, L cls , is calculated using the binary cross-entropy loss.For the regression loss of different values of variable c, λ c functions as a weight that balances the output variables by considering their respective magnitudes.The term p gt i,j L1 c indicates that the L1 regression loss is activated only for triggered grid points (p gt i,j = 1) and is disabled otherwise (p gt i,j = 0).Both loss terms are normalized by N cls and N reg , which correspond to the total number of grid points and the number of triggered grid points, respectively.
The Adam optimizer (Kingma and Ba, 2014) is used with an initial learning rate of 0.003 to update the parameters of the model.Furthermore, the plateau scheduler is implemented to decrease the learning rate by a factor of 0.5 when the loss fails to decrease for 5 epochs.The model is trained for 150 epochs using a batch size of 44 000.
Evaluation methods
The ML-based MSKF scheme is evaluated in both offline and online settings.The offline performance of the MLbased MSKF scheme is evaluated by comparing it against the outputs of the original MSKF scheme using the validation dataset, including rthcuten, rqvcuten, rqccuten, rqrcuten, nca, and pratec.The overall model performance metrics include the RMSE and correlation coefficient.The mean absolute error (MAE) and mean bias error (MBE) per vertical layer are calculated using the equations below: where Y (i, l) and Y ML (i, l) represent the outputs from the original MSKF scheme and the ML-based MSKF scheme, respectively.Here, i denotes the horizontal grid point of a vertical profile, N is the number of the horizontal grid points in the domain, and l represents the vertical layer index.
Offline validation of the ML-based MSKF scheme
The offline validation was conducted using data that were not used during the training process.Figure 4 compares the cloud relaxation time (nca), precipitation rate (pratec), and convective tendencies predicted by the original MSKF scheme and the ML-based MSKF scheme, respectively.To facilitate the comparison, the units of precipitation rate and temperature tendencies were converted to mm d −1 and K d −1 from mm s −1 and K s −1 , respectively, by applying a conversion factor of 86 400 (24 × 3600).Similarly, the water vapor mixing ratio (rqvcuten), cloud water mixing ratio (rqccuten), and rainwater mixing ratio (rqrcuten), due to convection, were multiplied by 86 400 000 (24 × 3600 × 1000) to convert the output variables listed in Table 1 from kg kg −1 s −1 to g kg −1 d −1 ; the variable w0avg is excluded as it is calculated using an equation with the ground truth as input in this offline validation.Hence, evaluating w0avg in the offline evaluation is unnecessary.Among all variables illustrated in Fig. 4, the variable nca exhibits a significantly higher RMSE of 4.32, with data points widely dispersed across a wide range of values.This suggests that accurately predicting convection poses a considerable challenge.To eliminate the dependency on time steps, nca is divided by the model's time step of 15 s before proceeding with plotting and statistical evaluations.The precipitation rate demonstrates the highest correlation coefficient and minimal variability, as most data points cluster closely around the 1 : 1 line.While temperature and the four moisture tendencies exhibit some degree of variability, the majority of data points align closely with the 1 : 1 line.The correlation coefficient of convection trigger is 0.91 (not shown in Fig. 4).Overall, the ML-based MSKF scheme shows a strong correlation with the original MSKF scheme for all examined variables, with correlation coefficients consistently higher than 0.91.This indicates that the ML-based MSKF scheme has the potential to replace the original scheme.
To obtain a comprehensive understanding of the vertical distribution of errors, Fig. 5 presents the vertical profiles of error statistics associated with convective tendencies.The solid and dashed lines in the figure represent the MAE and MBE of tendencies at each vertical layer, respectively.Additionally, the shaded area corresponds to the 5th and 95th percentiles of differences between tendencies predicted by the ML-based MSKF-predicted scheme and the original MSKF scheme, respectively.The distribution of vertical errors in all tendencies exhibits a notable uniformity, with higher variance observed within the pressure layers between 800 and 1000 hPa.These pressure layers correspond to the atmospheric layer where convection occurs most frequently.Due to the significantly lower cloud and rain content compared to water vapor in the atmosphere, the error magnitudes for rqccuten and rqrcuten are considerably lower than those observed for rqvcuten.
Prognostic validation
This subsection presents the performance of the ML-based MSKF scheme in the online setting.
The ML-based MSKF scheme was integrated into the WRF model as a substitute for the original MSKF scheme to simulate convective processes.Utilizing the WRF-ML coupler (Zhong et al., 2023a), this novel ML-based MSKF scheme was seamlessly incorporated into the WRF framework.A comparative analysis was conducted by initializing both the modified WRF model, which incorporates the MLbased scheme, and the original WRF model on 12, 14, 16, and 18 June 2022 for simulations extending over 168 h.It is worth mentioning that these simulations were performed independently of the training dataset, ensuring the evaluation of the scheme's generalization capability.
Figure 6 presents the averaged spatial forecasts for predictions generated by the original WRF model.These forecast results include the accumulations of both convective precipitation (RAINC) and non-convective precipitation (RAINNC) over a 12 h period, along with the 2 m temperature (T2M) at 24, 72, 120, and 168 h.The figure also demonstrates the mean absolute difference (MAD) between WRF simulations coupled with the ML-based MSKF scheme and those utilizing the original MSKF scheme.Within the spatial forecasts, red and blue patterns signify the magnitudes of the forecasted values, whereas in the spatial differences, these colors denote the positive and negative biases in the ML-based simulations, respectively.Green patterns suggest minimal deviation from the original WRF simulations.Furthermore, we calculate a domain-averaged MAD to evaluate the overall performance of the ML-based scheme in prognostic simulations.Generally, the differences are small, indicating good agreement between WRF simulations coupled with the ML-based MSKF scheme and the original WRF simulations.Notably, the differences do not increase with the progression of simulation time, as evidenced by a comparable domain-averaged MAD at 168 forecast hours compared to that at 24 forecast hours.These findings suggest that the ML-based MSKF scheme achieves stable prognostic simulations.
Figure 7 provides a comparative analysis of domainaveraged time series forecasts from both the original WRF simulations and WRF simulations coupled with the MLbased MSKF scheme.This comparison includes 6 h accumulations of RAINC and RAINNC, as well as T2M forehttps://doi.org/10.5194/gmd-17-3667-2024 Geosci.Model Dev., 17, 3667-3685, 2024 casts.The results demonstrate that WRF simulations coupled with the ML-based MSKF schemes are in close alignment with the original WRF simulations, particularly when capturing the diurnal variations in RAINC, RAINNC, and T2M.Notably, the T2M forecasts demonstrate remarkable consistency, underscoring the efficacy of the ML-based MSKF scheme in maintaining the predictive accuracy of the original scheme.
Conclusions
In this paper, we proposed a multi-output Bi-LSTM model to develop a ML-based MSKF scheme for predicting the convection trigger and reproducing the convective process in the gray zone.The model is trained on data generated by the WRF simulations at a spatial resolution of 5 km, covering the south China region.The output variables of the ML-based MSKF scheme are identical to those of the origi- nal MSKF scheme, encompassing the cloud relaxation time (nca), precipitation rate (pratec), time step convective precipitation (raincv), and convective tendencies.This ML-based scheme ensures physical consistency among all output variables by incorporating a post-processing module to refine the output from the Bi-LSTM model.Offline validation demonstrates the excellent performance of the ML-based MSKF scheme.Furthermore, the ML-based MSKF scheme is coupled with the WRF model using the WRF-ML coupler.The WRF simulations coupled with the ML-based MSKF scheme are compared against the WRF simulation with the original MSKF scheme.Results shows that the ML-based scheme can generate forecasts similar to the original ML scheme in online settings, showing the potential substitution of the MSKF scheme by ML models in the gray zone.This study demonstrates the feasibility of employing ML models as substitutes for the conventional CP scheme within the high-resolution weather forecasting model.Future efforts will focus on the development of ML models based on data generated by super-parameterization or cloud-resolving models to replace conventional CP schemes in weather forecasting models (see Appendix B).The objective of this substitution is to reduce uncertainties and improve the performance of weather forecast models.
Appendix A: Comparison against classification only and regression only models
Two separate Bi-LSTM models were trained with slight modifications to the multi-output Bi-LSTM model illustrated in Fig. 3.The first model aimed at predicting convection triggers alone, termed the Bi-LSTM trigger, while the second model aimed at predicting convective tendencies, termed the Bi-LSTM tendency.In predicting the convection trigger, both the Bi-LSTM trigger model and the multi-output Bi-LSTM model demonstrated comparable accuracy, as observed in Figs.A1 and A2.However, while the convection triggers predicted by the Bi-LSTM trigger model were indistinguishable from those of the multi-output Bi-LSTM model, the former failed to accurately predict the corresponding convective tendencies.Consequently, it cannot serve as a replacement for convection schemes within NWP models.
Figures A3 and A4 present snapshots of rthcuten and rqvcuten predicted by the Bi-LSTM tendency model.These figures reveal that the Bi-LSTM tendency model predicts nonzero values across nearly the entire domain.Since the Bi-LSTM tendency model exclusively focuses on predicting convective tendencies, convection triggers are derived using certain threshold values.The spatial distribution of these triggers is notably influenced by the choice of threshold values, and the patterns of convection triggers derived from rthcuten and rqvcuten exhibit considerable discrepancies.This confirms that models based solely on regression yield inconsistent tendencies.In contrast, the multi-output Bi-LSTM model does not encounter the aforementioned issues of the Bi-LSTM tendency model and generates a more consistent spatial pattern of rthcuten and rqvcuten (see Figs. A5 and A6). https://doi.org/10.5194/gmd-17-3667-2024 Geosci.Model Dev., 17, 3667-3685, 2024 Geosci.Model Dev., 17, 3667-3685, 2024 https://doi.org/10.5194/gmd-17-3667-2024
Figure 1 .
Figure 1.Digital evaluation data of the single WRF domain with a horizontal resolution of 5°.The red lines are the province borderlines, and the black lines are the city borderlines.
Figure 2 .
Figure 2. A flowchart outlining the convection trigger process in the original MSKF scheme.
Figure 3 .
Figure 3.The architecture of the multi-output Bi-LSTM model for combined classification and regression predictions.
Figure 4 .
Figure 4. Comparison of the predicted (y axis) and true (x axis) nca (a), pratec (b), rthcuten (c), rqvcuten (d), rqccuten (e), and rqrcuten (f) when using validation data in the offline setting.Colors indicate the proportion of samples across the entire testing dataset, with values on the color bar normalized through the application of a base 10 logarithm.
Figure 5 .
Figure 5. Vertical profiles of the statistics in rthcuten (a), rqvcuten (b), rqccuten (c), and rqrcuten (d) using validation data in the offline setting data using ML-based emulators.The solid and dashed lines show the MAE and MBE profiles, respectively, and the shaded area indicates the 5th and 95th percentile of differences (prediction-target) at each layer.
Figure 6 .
Figure6.Spatial map of the average WRF simulations using the original MSKF scheme (in the first, third, and fifth rows) along with the average MAD between WRF simulations coupled with the ML-based MSKF scheme and WRF simulation with the original MSKF scheme (in the second, fourth, and sixth rows).The simulations are shown for the 12 h accumulated convective precipitation (RAINC) in the first and second rows, the 12 h accumulated non-convective precipitation (RAINNC) in the third and fourth rows, and the 2 m temperature (T2M) at forecast lead times of 24 h (first column), 72 h (second column), 120 h (third column), and 168 h (fourth column).
Figure 7 .
Figure 7.Comparison of domain-averaged forecasts derived from the original WRF simulations (black lines) and WRF simulations coupled with the ML-based MSKF scheme (light-green lines) of 6 h accumulated RAINC (a) and RAINNC (b), along with T2M (c).
Figure A1 .
Figure A1.Snapshot example of convection trigger, with panel (a) showing the ground truth (GT) and panel (b) showing the difference between convection trigger as predicted by the Bi-LSTM trigger model and ground truth values for the 25 h WRF simulation initialized at 12:00 UTC on 20 May 2021.
Figure A2 .
Figure A2.Snapshot example of convection trigger, with panel (a) showing the ground truth (GT) and panel (b) showing the difference between convection trigger as predicted by the multi-output Bi-LSTM model and ground truth values for the 25 h WRF simulation initialized at 12:00 UTC on 20 May 2021.
Figure A3 .
Figure A3.Snapshot examples of rthcuten summed along the vertical direction, with panel (a) showing the GT values and panel (b) showing the rthcuten predicted by the Bi-LSTM tendency model for the 25 h WRF simulation initialized at 12:00 UTC on 20 May 2021.Similarly, snapshot examples of a trigger, with the GT shown in panel (c) and the predictions from the Bi-LSTM tendency model using varying threshold values of rthcuten shown in panels (d), (e), and (f), respectively.
Figure A4 .
Figure A4.Snapshot examples of rqvcuten summed along the vertical direction, with panel (a) showing the GT values and panel (b) showing the rqvcuten predicted by the Bi-LSTM tendency model for the 25 h WRF simulation initialized at 12:00 UTC on 20 May 2021.Similarly, snapshot examples of a trigger, with the GT shown in panel (c) and the predictions from the Bi-LSTM tendency model using varying threshold values of rqvcuten shown in panels (d), (e), and (f), respectively.
Figure A5 .
Figure A5.Snapshot examples of rthcuten summed along the vertical direction, with panel (a) showing the GT values and panel (b) showing the rthcuten predicted by the multi-output Bi-LSTM model for the 25 h WRF simulation initialized at 12:00 UTC on 20 May 2021.Similarly, snapshot examples of a trigger, with the GT shown in panel (c) and the predictions from the multi-output Bi-LSTM model using a threshold value of 0 shown in panel (d).
Table 1 .
Definition of all the input and output variables, whether they are surface or 3D variables, and their corresponding units.There are 44 model layers. | 8,775.2 | 2024-05-07T00:00:00.000 | [
"Environmental Science",
"Computer Science",
"Engineering"
] |
Determination of Nucleopolyhedrovirus’ Taxonomic Position
To date, over 78 genomes of nucleopolyhedroviruses (NPVs) have been sequenced and deposited in NCBI. How to define a new virus from the infected larvae in the field is usually the first question. Two NPV strains, which were isolated from casuarina moth (L. xylina) and golden birdwing larvae (Troides aeacus), respectively, displayed the same question. Due to the identity of polyhedrin (polh) sequences of these two isolates to that of Lymantria dispar MNPV and Bombyx mori NPV, they are named LdMNPV-like virus and TraeNPV, provisionally. To further clarify the relationships of LdMNPV-like virus and TraeNPV to closely related NPVs, Kimura 2-parameter (K-2-P) analysis was performed. Apparently, the results of K-2-P analysis that showed LdMNPV-like virus is an LdMNPV isolate, while TraeNPV had an ambiguous relationship to BmNPV. Otherwise, MaviNPV, which is a mini-AcMNPV, also exhibited a different story by K-2-P analysis. Since K-2-P analysis could not cover all species determination issues, therefore, TraeNPV needs to be sequenced for defining its taxonomic position. For this purpose, different genomic sequencing technologies and bioinformatic analysis approaches will be discussed. We anticipated that these applications will help to exam nucleotide information of unknown species and give an insight and facilitate to this issue.
Introduction
Baculoviruses are insect-specific viruses which have a large circular double-stranded DNA genome packaged in enveloped, rod-shaped nucleocapsid and occluded within a paracrystalline protein occlusion body (OB) [1,2]. The family Baculoviridae has four genera, including Alphabaculovirus, Betabaculovirus, Gammabaculovirus and Deltabaculovirus. Nucleopolyhedrovirus (NPV) is a member of Alphabaculovirus (lepidopteran-specific NPV) [3]; NPV replicates in the nucleus of the infected host cell and causes a disease of nuclear polyhedrosis. Epidemic outbreak of NPV may play a role in regulation of the host nature population [4]. Thereby, it is a potential agent for biological control with a number of eco-friendly benefits including high virulence and specificity against target insects, environmental safety and sustainable existence with target insects. Several baculoviruses showing promising results have been commercialized as biopesticides for the control of insect pests around the world [5]. For biotechnological applications, baculoviruses have been constructed as a eukaryotic protein expression vectors (baculovirus expression vector system (BEVS)) over the last 30 years and used to gene therapy trials. So far, many recombinant proteins have been expressed in insect cells by BEVS and contribute to human life [6].
To date, baculoviruses are known to infect more than 660 insect species; most of them are belonging to the order of Lepidoptera, Diptera and Hymenoptera [7,8]. Baculoviruses exhibit genetic variations among species and its isolates [9]. Although a large number of baculoviruses in the nature, only a few have been well studied. To the best of our knowledge, a total of 78 fully sequenced genomes have been deposited in GenBank [10] and also several baculoviruses of whole genomes may soon be sequenced and deposited (Table 1). However, these published viral genomes represent only a small fraction and the genetic relationship among nucleopolyhedroviruses (NPVs) in the natural environment remains a puzzle.
Previously, Sanger sequencing was employed to sequence the viral genomic sequences cloned in plasmids. With the advances of sequencing technologies, next-generation sequencing (NGS) is becoming an important technology for large-scale viral genomic sequencing. The high cost of NGS and requirement of intensive bioinformatic analysis remain a hurdle for this application. In a word, NGS is an available tool to facilitate on the study of the genetic relationship of baculoviruses.
Identification of NPVs
Biochemical and biotechnology-based methods are the most common approaches employed to identify the NPVs. In most cases, more than one method is employed to compensate the pros and cons for each other. For example, restriction enzyme profiling of viral genomic DNA was used to reveal genetic variations among different isolates [97][98][99] and to distinguish one species from another between closely related viruses such as Rachiplusia ou (RoMNPV), AcMNPV, Trichoplusia ni (TnMNPV), Galleria mellonella (GmMNPV) [100,101] and the MNPVs of Spodoptera frugiperda [102].
Polymerase chain reaction (PCR)-based methods were then established. These methods have been shown not only to be more sensitive and faster but also more reliable than restriction enzyme analysis for classifying baculoviral species [4,[103][104][105]. Multiple genetic markers (e.g., egt, ac17, lef-2, polh, p35, pif-2) could be used for the identification of baculoviruses [7,[106][107][108][109]. The late expression factor 8 (lef-8), late expression factor 9 (lef-9) and polyhedrin N/A: no information is available either in the paper or GenBank file. The GenBank file with accession number KX1513952 is not available in GenBank website. Table 1. List of sequenced baculoviruses genomes.
(polh) were found in a highly conserved genes among baculoviruses [110], therefore, used as targets for degenerating PCR to characterize lepidopteran NPVs through the amplification of the conserved regions from a variety range of baculoviruses [111][112][113]. The Kimura 2-parameter (K-2-P) distances between the aligned polh/gran, lef-8 and lef-9 nucleotide sequences were described by Jehle et al. for baculoviruses identification and species classification [3]. The K-2-P nucleotide substitution model from aligned nucleotide sequences were determined by using the pairwise distance calculation of MEGA version 3.0 applying the Kimura 2-parameter model [114].
Due to the higher cost of NGS for viral genome sequencing, it is frequently required to combine various approaches to cut down the cost but still ensure precision, e.g., PCR-based K-2-P analysis and NGS approach for identifying the potential new NPV species. Two NPVs were isolated from casuarina moth (Lymantria xylina) and golden birdwing larvae (Troides aeacus) collected from the fields, respectively, will be as representative cases for explanation in the following sections. We will focus on the characterization of these two potential new NPVs first and then the use of the sequences of three genes, lef-8, lef-9 and polyhedrin of two NPV candidates was used to examine their taxonomic position by K-2-P analysis. Finally, we will focus on the genome sequencing technology and bioinformatic analysis on NPVs.
The identification of ambiguous NPVs
In this section, the discussion of molecular identification of NPV species based on K-2-P distance [3] is presented. Two new NPVs were used as examples in this study to reveal different issues regarding the classification of NPVs.
LdMNPV-like virus
The K-2-P distances, based on the sequences of three genes, between different viruses could mostly evaluate the ambiguous relationship among the NPVs. It was defined that distances less than 0.015 indicates that the two isolates are the same baculovirus species. On the other hand, the difference between two viruses is more than 0.05 should be considered as different virus species. For the distances between 0.015 and 0.05, complementary information is needed to determine whether these two viruses are of the same or different species [3,9,115].
A new multiple nucleopolyhedrovirus strain was isolated from casuarina moth, L. xylina Swinhoe, (Lepidoptera: Lymantriidae) in Taiwan. Since the polyhedrin sequence of this virus had high identity to L. dispar MNPV (98%), it was named LdMNPV-like virus [116]. To precisely clarify the relationship of three Lymantriidae-derived NPVs (LdMNPV-like virus, LdMNPV and LyxyMNPV [60]), the K-2-P of polh, lef-8 and -9 was performed. The distances between LdMNPV-like virus and LyxyMNPV exceeded 0.05 for each gene, polh, lef-8, or lef-9 and also for concatenated polh/lef-8/lef-9 (Figure 1). For LdMNPV-like virus and LdMNPV, not only the single lef-8 and lef-9 sequences but also concatenated polh/lef-8/lef-9, the distances were generally lower than 0.015, but only the polh sequence distance (0.016) exceeded slightly 0.015 (Figure 1). These results strongly suggested that LdMNPV-like virus is an isolate of LdMNPV. However, as indicated by our previous report, the genome of LdMNPV-like virus is approximately 139 Kb, due to large deletions compared to that of LdMNPV [116]. To further investigate the LdMNPV-like virus, a HindIII-PstI fragment (7,054 nucleotides) was cloned, sequenced and compared to the corresponding region of LdMNPV. Nine putative ORFs (including seven with full lengths and two with partial lengths) and two homologous regions (hrs) were identified in this fragment (Figure 2) and those genes, in order from the 5′ to 3′ end, encoded part of rr1, ctl-1, Ange-bro-c, LdOrf151, LdOrf-152-like peptides, Ld-bro-n, two Ld-bro-o and part of LdOrf155-like peptides ( Table 2). The physical map of HindIII-PstI fragment of LdMNPV-like virus showed that the gene organization was highly conserved compared to the corresponding region of LdMNPV, although several restriction enzyme recognition sites were different. Additionally, the ld-bro-o gene in the LdMNPV-like virus was split into two ORF7 and ORF8, due to a point deletion in the downstream (+669) of ORF7 and this deletion causes a frameshift that results in the formation of a stop codon (TGA) after 73 bp. Afterward, ORF8 was overlapped with the last four base pairs (ATGA) in ORF7. The nucleotide identities of these genes were 96-100% homologous to those of LdMNPV, except ORF3 which was 68% homologous to Ange-bro-c and ORF7 and ORF8 showing low identities to Ld-bro-o (73% and 26%, respectively). The deduced amino acid sequences of these genes were similar to those of LdMNPV, with identities of 81-100%, except the similarity of ORF3 to Ange-bro-c was 70% and ORF7 and ORF8 also showed low similarity to Ld-bro-o (67% and 26%, respectively). These results imply that the LdMNPV-like and LdMNPV viruses are closely related but not totally identical.
Based on these results, LdMNPV-like virus has a genomic size significantly smaller than that of LdMNPV and LyxyMNPV and appears to be an NPV isolate distinct from LdMNPV or LyxyMNPV. Moreover, a gene, ange-bro-c of LdMNPV-like virus, was truncated into two ORF7 and ORF8 and the sequence showed relatively low identity to that of LdMNPV ( Table 2). Taken together, these results indicate that LdMNPV-like virus is a distinct LdMNPV strain with several novel features. Otherwise, LdMNPV-like virus and LdMNPV have distinct geographical locations (from subtropical and cold temperate zones, respectively) and are Table 2. Reference from the genome of LdMNPV (Kuzio et al. [63]) * The nine potentially expressed ORFs are numbered in the order in which they occur in the LdMNPV-like virus genomic fragment from the 5′ to 3′ end. Two ORFs extend past this cloning site are printed in bold; only the N-terminus which contains 217 amino acids (654 nucleotides) and 99 amino acids (297 nucleotides) was examined. distinct in genotypic and phenotypic characteristics and it also showed broad genetic variation among LdMNPV isolates [9].
An NPV isolate from T. aeacus larvae
A nucleopolyhedrosis disease of the rearing of the golden birdwing butterfly (T. aeacus) larvae was found and the polyhedral inclusion bodies (PIBs) were observed under light microscopy (Figure 3). PCR was performed to amply the polh gene by 35/36 primer set (Figure 3) to further confirm NPV infection [117,118]. Therefore, this NPV was named provisionally TraeNPV.
The three genes, polh, lef-8 and lef-9 of TraeNPV, were cloned and sequenced and then the K-2-P distances between the aligned single and concatenated polh, lef-8 and lef-9 nucleotide sequences were analyzed. The results indicated that TraeNPV belonged to the group I baculoviruses and closely related to BmNPV group. Figure 4 showed that most of the distances between TraeNPV and other NPVs were between 0.015 and 0.050, whereas the distances for polh between TraeNPV, PlxyNPV, RoNPV and AcMNPV group exceeded 0.05. It should be noted that for all the concatenated polh/lef-8/lef-9 sequences, the distances were apparently much more than 0.015 and even to 0.05. These results left an ambiguous situation of this NPV isolate; so far, we could conclude that TraeNPV neither belongs to BmNPV group nor AcMNPV group. More complementary information is needed to determine the viral species of TraeNPV.
In summary, K-2-P distances were employed to further clarify the relationship between closely related NPVs. We discussed two different cases analyzed by K-2-P. From the sequence data of LdMNPV-like virus, results strongly supported that LdMNPV-like virus is an isolate of LdMNPV. Since the RFLP profiles of the LdMNPV-like virus showed the genome of this isolate was deleted tremendously, this deletion also showed coordinately in our partial sequences of genomic DNA fragments and the results of K-2-P. The K-2-P distances between TraeNPV and BmNPV or AcMNPV were among 0.05 and 0.015. Anyway, we cannot define that this virus is a new species with the evidences of RFLP, part gene sequences and K-2-P results; therefore, it is necessary to get more data, especially the whole genome sequence of TraeNPV.
The importance of whole genome sequencing on baculoviruses
The rapidly growing mass of genomic data shifts the taxonomic approaches from traditional to genomically based issues. The K-2-P distance supported LyxyMNPV as a different viral species (K-2-P values = 0.067-0.088), even though they were still a closely relative species phylogenetically. But, "how different did LyxyMNPV and LdMNPV?" become another question. Thus, the whole genome sequence could provide deep information of this virus. For example, as the genomic data revealed, the most part of the ORF (151 ORFs) between LyxyMNPV and LdMNPV was quite similar while still have several different ORF exhibits or absent in LyxyMNPV, e.g., two ORFs were homologous to other baculoviruses and four unique ORFs were identified in the LyxyMNPV genome and LdMNPV contains 23 ORFs that are absent in LyxyMNPV [60]. Besides, there is a huge genomic inversion in LyxyMNPV compared to LdMNPV [60]. Another example is Maruca vitrata NPV (MaviNPV). All of the K-2-P distance- supported MaviNPV is quite different from other NPVs (K-2-P values = 0.092-0.237) (Figure 6). While the gene content and gene order of MaviNPV were highly similar to that of AcMNPV and BmNPV, through the genomic sequencing, it showed the 100% collinear to AcMNPV [27] and MaviNPV shared 125 ORFs with AcMNPV and 123 with BmNPV. The detailed information could only be captured after whole genome sequencing rather than partial gene sequences or other phylogenetic analyses. Sometimes, usage of K-2-P data may raise other problems, which we mentioned above; it seems LdMNPV-like virus and LdMNPV were the same viral species. While through the restriction enzyme profile and partial genomic data, we could identify that there are some deletion fragments and different gene contents within the LdMNPV-like virus genome. For the TraeNPV, most of the K-2-P values were ranged from 0.015 to 0.05; thus, whole genome sequencing could be one of the best ways to figure out this ambiguous state. The more detailed information we can get, the more deep aspect we can evaluate, e.g., the taxonomic problems and further evolutionary studies.
Genome sequencing technology
Previous NPV genome sequencing employed three types of approaches: plasmid clone (or template) enrichment, NGS, or a combination of the two methods. Initially, the most common approach used restriction enzymes to fragmentize the viral genome into smaller pieces.
Plasmid-based clone amplification was then employed to enrich templates for sequencing. Later, conventional Sanger sequencing and/or next-generation sequencing was employed for genome assembly. In addition, purely high-throughput sequencing-based approach from isolated viral genome was also employed [9,15]. To date, next-generation sequencing technology plays an increasingly important role on viral genome assembly. Previous researches showed that Illumina HiSeq has superior performance in yield than 454 FLX [119][120][121]. Baculoviruses usually contain a novel homologous region (hr) feature, which comprises a palindrome that is usually flanked by short direct repeats located elsewhere in the genome [122]. Thereby, the shorter single-read length of Illumina sequencers might lead the difficulty during genome assembly. Further application of paired-end read sequencing method could certainly provide alternative for sequencing overlap the hrs in baculoviral genomes.
Bioinformatic analysis
Construction of a complete genome map is essential for future genomic investigations. Besides sequencing, bioinformatic approaches are also required for determining the order and content of the nucleotide sequence information for the viral genome of interest. In general, bioinformatic approaches can be separated into three consecutive steps: genome assembly, genome annotation and phylogenetic relationship inference (Figure 5).
Genome assembly
Sequence reads are the building blocks for genome sequencing and assembly. Thus, quality control of sequence reads plays a key role in determining the fidelity of a genome assembly.
The procedure of read quality checking includes, but not limited to, the removal of unrelated sequences such as control sequences, adaptors, vectors, potential contaminants, etc., trimming of low-quality bases and selection of high-quality reads. The control sequences (e.g., PhiX control reads in Illumina sequencers, control DNA beads in Roche 454 sequencer) are routinely used by sequencer manufacturers to evaluate the quality of each sequencing run. There are software applications made available to be utilized to identify and remove control sequences and low-quality bases. For NGS, sequencing adapters could be identified in reads if the fragment size is shorter than read length. Cutadapt [123] was implemented to trim the adapter sequences. Ambiguous bases or bases with lower-quality values can be removed by PRINSEQ [124] from either 5′ or 3′ end. NGS QC Toolkit [125] has programmed module to select high-quality reads. If paired-end technology was applied, paired-end reads could be joined by PANDAseq [126], PEAR [127], FLASH [128] and COPE [129], if a fragment size is shorter than read length.
Genome can be assembled from quality paired-end or single-end reads with de novo or reference-guided approaches. There are two standard methods known as the de Bruijn graph (DBG) approach and the overlap/layout/consensus (OLC) approach for de novo genome assembly. The idea of de Bruijn graph is to decompose a read into kmer-sized fragments with sliding window screening. Each kmer-sized fragment will be used to construct graph for longer path (e.g., contigs). Then, long-range paired reads can be utilized to build scaffolds from contigs with given insert size and read orientation. SOAPdenovo [130] is one of the DBG assembler that has an extreme speed by utilizing threads parallelization [131]. The OLC assembler starts by identifying all pairs of reads with higher overlap region to construct an overlap graph. The contig candidates are identified by pruning nodes to simplify the overlap graph. The final contigs are then output based on consensus regions. Additionally, Newbler [132] is a widely used OLC assembler distributed by 454 Life Sciences.
Reference-guided genome assembly is another solution for genome assembly if the genome of a closely related species is already available. For viral genome assembly, closely related species can be identified by mapping quality reads against sequenced viral genomes deposited in GenBank (http://www.ncbi.nlm.nih.gov/genome/viruses/) and select top-ranked species as the reference genome(s) to facilitate the assembly of the genome of interest. Referenceguided assembler is also called mapping assembler that the complete genome is generated by mapping quality reads with variant (single nucleotide polymorphism (SNP), insertion and deletion) identification. For example, MIRA (a computer program) [133] can create a reference-based assembly by detecting the difference between references.
During the assembly process, gap filling (or gap elimination) is conducted to resolve the undetermined bases either by bioinformatics or other approaches such as PCR and additional sequencing. Bioinformatic approaches normally use paired-end reads to eliminate gaps. PCR coupled with Sanger sequencing is a common approach to finalize the undetermined regions [134]. In addition, Sanger sequencing can also be used for genome validation and homologous region (hr) checking.
Phylogenetic analysis
Phylogenetic relationship inference reveals the evolutionary distances of various, especially closely related, species. MEGA [141] was the most widely used software suite that provides the sophisticated and integrated user interface for studying DNA and protein sequence data from species and populations. Alternatively, phylogenetic relationships among species based on the complete viral genomes or functional regions could also be estimated with Clustal Omega [142]. Clustal Omega was employed for multiple sequence alignment on the complete genomes and DNA fragments, respectively. ClustalW [143] was employed to do file format conversion of multiple sequence alignment. Ambiguously aligned positions were removed by using Gblocks version 0.91b [144,145] under default settings. Phylogenetic tree inference could be constructed by hierarchical Bayesian method (e.g., MrBayes [146]) or maximum likelihood method (e.g., RAxML [147]) to estimate phylogeny [148]. Tree was depicted with FigTree version 1.4.2 (http://tree.bio.ed.ac.uk/software/figtree/). The divergence times of different species were estimated using BEAST version 1.8 or version 2.3.2 [149]. In addition, pairwise sequence identity was determined by BLASTN (NCBI BLAST Package) [150] to analyze sequence-level variation. Also, whole genome pairwise alignment can be done by LAGAN [151]. CGView comparison tool (CCT) [152] was used to represent the block similarity among different species. Mauve [153], one of the multiple genome alignment tools, can help us to visualize the consensus sequence blocks among distant-related species.
Up to 78 baculoviruses have been reported; most of baculoviruses have a narrow host range, only infect their homogenous hosts, such as BmNPV, SpltNPV, SpeiNPV, MaviNPV and so on; LyxyNPV can infect LD and LY cell lines, while AcMNPV has a wide host range; at least 40 hosts in vitro have be found. Therefore, a new baculovirus isolate needs to define its taxonomic position and to analyze its phylogenetic relationship with a known baculovirus member.
Conclusion
With the accomplishment of the sequencing technologies, more NPV genomes were sequenced. So far, more than 78 baculoviruses have been fully sequenced and based on the sequencing methods, we can divide into two parts, one is sequencing by Sanger method and another is sequencing by NGS method ( Table 1). Among these sequenced genomes, 35 genomes were sequenced by Sanger method and 43 genomes were sequenced by NGS methods. It could be expected that whole genome sequencing by NGS method would get much common in this field; however, the upcoming metagenomic era is imperative that one remains aware of and careful about the shortcomings of the information presented about the organisms that are being sequenced and that these databases can oversee neither the correctness of the organismal identifications nor of the sequences entered into the databases.
The natural environment harbors a large number of baculoviruses. However, only a few of them have been sequenced and studied. A lot more information related to the genetic relationship of NPVs in the natural environment is needed to facilitate our understanding of these creatures. Though NGS technology has become an important technology for viral genomic sequencing, high cost of NGS for whole viral genome sequencing remains a barrier. To reduce the cost, it is necessary to evaluate whether the newly collected NPVs are suitable for whole genome sequencing or not. Alternatively, biochemical approaches and biological tools, such as PCR-based K-2-P analysis, can be good options to facilitate the process. As expected, all these applications are anticipated to help us reveal the genetic information of unknown species, so that more detailed insights of their genetic makeup and functional composition can be obtained to help us better understand the nature of these viruses. By using the powerful sequencing technique, the metagenomic progress (e.g., transcriptome analysis of insect host), new pathogen species in the natural environment would be easier to be found in the future. With the increase of new baculoviral genomic data, improvement of bioinformatic analysis methods and further validation of biological information would generate a group of genes, which connect to the viral host range and solve the contradiction situation in the baculoviral genomics. | 5,289.2 | 2017-04-05T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
A Condition for Complete Flattening of Asperities in a Rough Contact
Considering rough surface profiles in a contact model is of decisive importance. In the up-to-date rough contact models there remained underexplored the opportunity of complete flattening of smaller asperities and therefore the need of using the multilevel roughness models, including fractal ones. If higher level asperities are not flattened completely when pressed, then they will be able to impact on the contact process. This paper considers model problems of elastic-plastic contact with hardening for a body with protrusions and two pyramids as the objects similar to asperities. Modeling results show that asperities are completely flattened only on condition of confined compression. For real contacting rough surfaces under low pressures, complete flattening of asperities will not occur. It is shown that roughness elements on the surface of the asperities do not disappear even at severe deformation of the latter. The reason is a combination of the asperity form and hardening of material, while the consequence is a reduction of the real contact area.
Introduction
The problems relating to static and dynamic contact of solid bodies play an important role in such fields as tribology, heat transfer, electrodynamics, welding, and also in automobile, space, nuclear and precision instruments industries. Considerable issues emerge at describing the contacts in such intensively developing areas as micro-and nanotechnologies, especially at design of micro-electro-mechanical systems.
At the beginning of the 20th century, a great influence of rough contacts on the thermal state of some assembled structures was discovered. Real contact area of rough surfaces makes only a small share (usually less than 1%) from the nominal area outlined by geometrical dimensions of the adjoining bodies. This circumstance determines decrease in thermal contact conductance, the assessment of which was always a problem area in researches on contact heat transfer [1]. So far, it is not developed a reliable technique for quantitative prediction of the parameters of contact heat transfer. Capabilities of experiment in this case are limited and insufficient [2]. The model yielding the most reliable results on contact heat transfer is mainly necessary at the design stage of future device.
There are systems requiring the temperature prediction with high accuracy, for example, navigation systems for launch vehicles [3] and similar devices of precision instruments industry. To meet these requirements, the thermal contact model has to consider the roughness structure of contacting surfaces and contact heat transfer features in the micron-sized contact zone. It becomes necessary to model deformation of separate roughness elements (asperities).
Thus, there is a question whether asperities flatten completely at the contact of rough surfaces. Under what conditions is it possible? There are smaller asperities based on the surface of bigger ones and called asperities of higher level. These asperities are included in widely adopted multilevel [4] and fractal [5][6][7][8][9][10][11][12] 3D models of roughness. The additional question is specific for such models -whether preservation of rough profile on an asperity surface is possible while deforming by a rigid plane i.e. whether asperities of higher level will be flattened completely first at deformation? If yes, then at certain values of contact pressure the asperities of a higher level could be neglected, thus considerably simplifying the contact model. Otherwise, if no preliminary flattening of higher level asperities occurs, then the roughness model in the contact model is rather to be multilevel. Then, the adequate picture of rough surfaces deformation including that for the real contact area determination could be obtained through numerical modeling, for example, with the use of finite element method.
To date a number of models of thermal contact conductance through multi-spot contact are developed, for example, see [7,[13][14][15][16], from which it is possible to distinguish finite element models [7,15,16]. It is to finiteelement models that they are increasingly turning to validate analytical stochastic models, for example, [17][18][19][20]. Finite element modeling of rough surfaces contact is considered to be superior for elastic-plastic material behavior [21] and, moreover, is expected to deliver reference results for other contact mechanics approaches [17]. The essential advantage over other methods is also that it allows to consider influence of the form of asperities and change of material properties in the deformation process.
These advantages seem important, because, for example, Greenwood [22] has already shown (for electric contact) that contact resistance is defined not only by number and size of individual contact spots, but also by distance between them and regularity of their arrangement. Besides, it is easy to confirm experimentally (see, for example [23]) that under the pressure of a flat polished punch applied to a rough surface, the notches, grooves and all the more cracks remain on the surface even at considerable plastic deformation of the whole body. The rough profile of a print after Brinell hardness indentation test is presented in [22]. In addition, Kochergin [24] shows residual rough profile of aluminum surface after pressing at 200 MPa with Johansson gauge block. The indirect evidence of the profile flattening inability is the indication, reproduced in a range of works (for example, in [25]), of decrease in the real contact area with increase in scatter of the asperities heights.
To answer the raised questions, this paper considers solution of some model problems in finite element software ANSYS intended to elucidate the role of surface form in contact deformation of roughness. The problems are solved in an elastic-plastic formulation with strain hardening, inclusion of which in contact models is the problem of great importance [26]. Section 2 considers the model problem of elastic-plastic flattening of a single protrusion on a surface for clarification of the influence of the asperities presence on surfaces interaction and, therefore, on the size of the real contact area at finite element modeling. The condition is defined under which complete flattening is possible. Section 3 considers the model problems of upsetting a regular square and a doubled truncated pyramids as models of asperities. These models have been chosen taking in mind deterministic approach to rough surface description. The models have to be 3D and cannot be considered as 2D as they have variable thickness. The section shows features of deformation defined by a pyramid form and presence of hardening. The doubled pyramid with a valley at top is considered to be a model of an asperity containing asperities of the second level. The obtained result shows that the valley is not smoothed even at a severe deformation of the whole pyramid. Discussion of obtained results is given in section 4.
Note that the problem of deformation of single asperity is considered here not for the first time. In an elastic formulation, in fact, it was solved by Hertz in 1882 in the problem of the contact of two parabolic bodies [27]. But a purely elastic contact of the asperities is impossible, since all calculations and a series of experiments give stresses at the asperities much exceeding the yield strength of the material [28,29]. Therefore, a lot of work is devoted to single asperity models. More often, three approaches were used: the flattening of the deformed sphere between two absolutely hard surfaces [30][31][32], the flattening of the sinusoidal asperity by a rigid flat [21,28,29], and the indentation of a rigid hemisphere into the deformable space. Ghaednia et al. [33], using the finite element method, solved a thermo-electro mechanical axisymmetric problem for elastic perfectly-plastic deformation of a hemisphere and a substrate. A FEM solution and a comparison with experiment are given by Wadwalkar et al. [34] for the problem of deformation of a sphere between two rigid flat surfaces in the case of large pressures (up to 2.6 GPa). Modeling of an elastic perfectly-plastic creep deformation of a hemispherical asperity is carried out by Goedecke et al. [35].
But the aim of these and similar works was to obtain simple relations for inclusion in multi-spot statistical contact models [36,37], including those that take into account the interaction of asperities [38][39][40][41][42] and hardening [28]. However, in these works there is no answer to the question of whether a complete flattening of the asperities is possible. This study aims at clarifying issues that arise not in statistical contact models, but in finite-element spatial modeling of contact, when the individual geometry of each asperity is specified. For this case, the adopted models of asperities in the form of pyramids seems to be more suitable for modeling real roughness than hemispheres. Moreover, a deformation of a single asperity with a defect in shape (a valley at top), to the extent of the authors knowledge, has not previously been considered.
While solving the problems we assume lack of initial cold work hardening, residual stresses and influence of indentation size effect [43] considerably strengthening the surface when penetrating to several microns [44]. To obtain general patterns of asperities' behavior at flattening, carrying out calculations within assumptions of continuum mechanics, we accept the sizes of the bodies, the protrusion and the valley to be macro-scale. Static analysis is used, that is the deformation happens in the isothermal mode and does not depend on loading speed. The flow theory of plasticity and additive approach to forming of strain increments are applied.
Modeling the flattening of surface protrusions
To clarify the opportunity of complete flattening of asperities we consider the following model problem. Let us compare behavior of two bodies at compression. The first body has a flat upper surface. The second has protrusions on the upper surface (see Figure 1), imitating asperities. Due to the symmetry the first and the second body can be represented by models of parallelepipeds 0.05х0.05x0.5 m in size (see Figure 2). The upper surface of the second body has a protrusion in the form of a rectangular parallelepiped of 0.03x0.03x0.01 m in size. The Cartesian coordinate system axes are directed as shown in Figure 2. Material of the bodies is copper type UNS C12500 ASTM B224-16 (the modulus of elasticity 120 GPa, Poisson's ratio 0.38). It is deformed elastically and plastically with isotropic hardening. The stress-strain curve for the material [45] was recalculated for logarithmic strains and approximated with multilinear model (see Figure 3). As the last point in the experimental loading curve is a stress of about 375 MPa at logarithmic strain 0.7, then to make the possibility to calculate at a bigger strains the multilinear curve was continued by means of linear extrapolation to the point with logarithmic strain equal 2 (flow stress of 570 MPa). After this point the material flows as rigid-perfectly plastic. The punch in the form of a parallelepiped of 0.3x0.3x0.1 m exerts pressure P on the upper surfaces of bodies. To consider the deformation of the punch insignificant its material is assumed to be hypothetical, elastic, with the modulus of elasticity of ܧ = 2 • 10 ଵ଼ Pa and Poisson's ratio ߥ = 0.3. The lower surface is fixed to avoid displacement. Lateral surfaces are fixed to avoid displacement in the directions orthogonal to them. The friction is neglected in the solution.
Let us consider a 3D quasi-static mathematical model of contact between the plate of ܸ ଵ volume with ܵ ଵ surface and one of the two bodies of ܸ ଶ volume with ܵ ଶ surface. We designate the upper surface of the plate as ܵ ଵ ଵ and the rest of the surface as ܵ ଵ ଶ , so that ܵ ଵ = ܵ ଵ ଵ ∪ ܵ ଵ ଶ . For the body chosen we allocate in the surface ܵ ଶ an upper surface (together with protrusion, if any) ܵ ଶ ଵ , the lower surface ܵ ଶ ଷ , and the rest of the surface ܵ ଶ ଶ , so that ܵ ଶ = ܵ ଶ ଵ ∪ ܵ ଶ ଶ ∪ ܵ ଶ ଷ . The surface ܵ ଵ ଵ is affected by the external pressure P. The mathematical model comprises equilibrium equations (1), the generalized Hooke's law (2), the flow rule (3), strain-displacement relations (4), von Mises yield condition (5), ratio for calculation of contact pressure of augmented Lagrangian method (6), and boundary conditions (7)-(9) ߪ , = 0; where ߪ and ߝ are the Cartesian components of tensors of stress and strain, ݑcomponents of displacements vector, Ethe modulus of elasticity, ߥ -Poisson's ratio, ߜ -Kronecker delta, ݏcomponents of stress deviator tensor, ߪ ଵ , ߪ ଶ , ߪ ଷthe principal stresses, ߣ, ߣ -Lagrange multipliers, Ф(ߝ ) -function of the material's stressstrain curve, pthe contact pressure, Pexternal pressure applied to the punch, Kcontact stiffness, ߜcontact gap size.
After preliminary calculations it was obtained that the second body's protrusions are completely flattened at external pressure of 15 MPa, and this requires defining two pairs of contacting surfaces in the finite element model: 1) the upper planes of the body and protrusion -TARGE170 elements; the lower surface of the punch -CONTA174 elements; 2) the lateral surfaces of the protrusion -TARGE170 elements; the upper plane of the body -CONTA174 elements.
The model is meshed with 3D, 20-node hexahedral finite elements of SOLID186 type. SOLID187 and SOLID285 elements can be also used for solution. The mesh of the second body consists of 44875 SOLID186 elements.
Rigid fixing of the lower plane of the bodies is admissible in such model elastic-plastic problem, but it disallows comparing results at high pressures with those of real constructions (especially on x 3 -axis). When modeling near-surface volumes of real constructions with columnar models similar to those provided in [7,46], the adequate deformation of the column volume is possible if stress ߪ ଷଷ on the lower plane do not exceed the initial elastic limit of the material used. In case of uneven tri-axial compression, the deformation under the set pressure occurs elastically and plastically. The plastic strain is calculated on the basis of the von Mises distortion energy theory without taking into consideration the elastic volume change that is the usual approach to a deformation model of metals [47,48]. At the same time the volume of the columnar model decreases because of confined elastic compression. The material is redistributed due to plastic deformation in the zones which are set free because of elastic compression. Then, plastic strain on x 3 -axis (for the second body out of the area near the protrusion) is equal to the sum of elastic strains on axes ݔ ଵ and ݔ ଶ . Complete flattening is possible only in the presence of tri-axial elastic compression. It means for columnar models that in the absence of noticeable elastic strain on axes ݔ ଵ and ݔ ଶ complete flattening is impossible. In the calculations made the elastic strains on axes ݔ ଵ and ݔ ଶ on the lower surface ܵ ଶ ଷ of the first and second bodies appeared beginning with P = 5 MPa.
It is not possible to forecast the pressure value of complete flattening of asperities without the use of numerical calculations yet. In general case due to uneven profile of the surface complete asperities flattening state is reached only at completely close contact equivalent to perfect one.
Equivalent von Mises stress in the first body is equal to 71.4 MPa and is constant in all volume ܸ ଶ . Von Mises stress on the lower surface of the second body ܵ ଶ ଷ is also equal to 71.4 MPa. However, because of uneven upper surface, a non-uniform distribution of stresses and strains in the volume ܸ ଶ appears. Areas of both higher and lower stresses and strains exist, and the area with lower strains is the layer of less hardened material lying in some depth from the surface (see Figure 4). Strains in the protrusion area reach values exceeding 2 thus allowing to suppose an essential cold work hardening in this zone. For example, Figure 5 shows the distribution of equivalent strains from 0.025 to 0.18 when stresses considerably grow in plasticity zone (from 100 to 240 MPa) or in other words, when a noticeable hardening takes place. The projecting upwards rectangular parallelogram is pressed to the level of the upper surface while this level itself rises by 0.0022 m due to redistribution of material to the area not affected by punch pressure. The finite element mesh refinement leads to a noticeable correction of results relating to equivalent plastic strain and punch displacement.
After complete flattening of the protrusion the second body turned to the same form as the first body, but of increased height taking into account the volume of the flattened protrusion. Geometrically, based on the ratio of the areas of the top of the protrusion and the plane on which the protrusion is located, the volume of protrusion material is sufficient to increase the height of the body by 3.6 mm. In calculations the displacement of the upper plane after deformation of the first body was 1.52 mm, that is, assuming equal displacement of the parts identical for two bodies, the increase in height of the second body after deformation as compared with the first one has to be not 3.6 mm, but 3.6 − 1.52 = 2.08 mm. This very value was received in numerical solution for the second body 10 − 7.78 = 2.22 mm with the accuracy of 7% since the initial height of the protrusion was 10 mm, and the displacement of the deforming plane to complete crushing of the protrusion made 7.78 mm (see Table 1). Thus the received values of displacements for the punch confirm that the plastic deformation was calculated taking into account volumetric invariance, and compression of the first body by 1.52 mm is made within elasticity.
So it can be seen that complete flattening is possible at condition of uneven tri-axial compression provided that the material does not have an opportunity to widen in any direction. The asperities usually differ in size and the state mentioned above can take place only with such pressure with which asperities of all sizes in the considered area are flattened simultaneously. It occurs possibly at the close contact approaching to the perfect one when the real contact area comes to, say, 80-90% of the nominal one. Can asperities on asperities be completely flattened? No, as at flattening of a small asperity the material of underlying big asperity always has space where to move and the small asperity will preserve to some extent. To strengthen the above let us consider the following example.
Deformation of truncated pyramids
The result obtained in the previous section testifies to impossibility of full flattening of asperities. At the same time, it is not clear whether asperities of higher level in multilevel or fractal models of roughness are completely flattened. For this purpose, let us consider problems of deformation of two truncated pyramids -regular square and doubled. The regular square truncated pyramid represents an asperity model, and doubled one -model of asperity containing asperities of higher level. Let us consider 3D mathematical model of the contact of the rectangular parallelepiped and truncated pyramid with volumes ܸ ଵ and ܸ ଶ , bounded by surfaces ܵ ଵ and ܵ ଶ , respectively. We designate for the parallelepiped an upper flat surface ܵ ଵ ଵ and the rest of the surface ܵ ଵ ଶ , so that ܵ ଵ = ܵ ଵ ଵ ∪ ܵ ଵ ଶ . For the pyramid we allocate in the surface ܵ ଶ the contacting surface ܵ ଶ ଵ , the surface of the bottom base ܵ ଶ ଷ , and the rest of the surface ܵ ଶ ଶ , so that ܵ ଶ = ܵ ଶ ଵ ∪ ܵ ଶ ଶ ∪ ܵ ଶ ଷ . The bottom base of the pyramid ܵ ଶ ଷ is fixed in all axes. The surface ܵ ଵ ଵ is affected by external pressure P. The mathematical model of this problem comprises the equations (1)-(6) with the following boundary conditions
ANSYS contact elements of CONTA174 type cover surfaces ܵ ଶ ଵ and ܵ ଶ ଶ , and elements of TARGE170 type cover the bottom surface of the punch. The pyramid is meshed with hexahedral finite elements SOLID186 of uniform height (see Figure 6(a)). By results of the calculation (see Figure 7), it can be seen that of all elements of the pyramid the equivalent plastic strain growth rate is maximal for the first row elements. However, the strain growth with the maximal rate for the first row elements sharply stops when about 25% of the load is applied. The further strain growth is hardly noticeable. It is due to the upper row of elements is completely pressed into the underlying layer of elements at that moment (see Figure 8). Slowing the rate of strain growth can be explained with the fact that the first row of elements obtained higher hardening than the elements of the underlying second row and therefore stresses arising at this stage cause primarily strains of the less strengthened second row. For the first row in addition to weak deformation there is also a movement of elements downwards. stops because the first and second rows elements are fully pressed into the position of the third row elements. Figure 9 shows the change in the maximal increment of equivalent plastic strains in the deforming body for each substep. In general, as it is possible to assume from the pyramid form, the decrease trend in strain increment values can be seen. Irregularity can be explained with features of the numerical implementation, such as discrete contact and a variety of the form of deformed finite elements. As it can be seen from Figure 10, the nodes of the top base move downwards within each row of elements with a constant rate. This rate decreases at indentation of one row into another row. The increased hardening of the material in a row closer to the top of the pyramid in relation to the material of underlying rows showed a potential possibility of preserving the rough profile of overlying rows at further deformation of the pyramid. This rough profile may be initial or created at the contact deformation. On the other hand, the rough profile of an asperity, which is a roughness of the second level, is similar to a pyramid with an uneven surface. Preservation of such rough profile at deformation leads in turn to decrease in the real contact area. In the following problem such behavior is shown for a valley at the top of the doubled pyramid.
Deformation of the doubled pyramid
The model represents the figure consisting of two truncated square pyramids, same as the pyramid in the above problem, joined in one body. The one of corners of the first pyramid base forms the point of reference. The second pyramid is shifted along ݔ ଵ -axis by 0.15 m. Thus a valley is created between top surfaces of the pyramids 0.05 m wide and 0.05 ത m deep.
Here the figure of the model in a whole, that is the doubled pyramid, may be seen as an approximation model of asperity of the first level, and the valley between tops as a model of asperities of the second level actually existing on the first level asperity.
Mathematical model, material properties, the load, and solution options are the same as in the previous problem. The model is also meshed with hexahedral finite elements allowing to decrease numerical error. At the same time, the mesh is somewhat denser (see Figure 6(b)).
The numerical results are shown in Figure 9, 11-13. Due to the three-dimensional nature of the problem, the valley is deformed with a tendency to decrease its depth and to bring the opposing walls closer. When the top nodes were displaced 0.098 m downward, the opposite walls were closed at the ends of the valley.
After the full load applied and a considerable crushing occurred, it can be seen that irrespective of the pressure, the valley remains and creates a steady gap between the punch and the surface being deformed ( Figure 11). The maximal depth of the valley is 0.0186 m, or more than 30% of the initial. Growth rate of the valley depth sharply decreases after flattening the tops of the pyramid (at about 20% of the load) ( Figure 12). As the valley tends to close, the presence of friction would apparently increase the gap. Change in the maximal increment of the equivalent plastic strains (see Figure 9) is similar to that in the regular pyramid problem. The curve is less rugged because of the refined mesh.
Discussing the results
Solution of the deformation problem for the model with the protrusion made of isotropic hardening material has shown that on condition of uneven tri-axial compression when there is no free space for the material to move in the lateral directions the protrusions on surface are completely flattened at certain pressure. In this case the surface is considerably strengthened although at some depth a layer is formed, even less hardened compared to the model without the protrusion.
But asperities material on a real surface, when the contact is not close to perfect, have a space where to move in the lateral directions, as the contact of the asperities generally happens serially, not concurrently. Then the asperities are not under condition of uneven triaxial compression. Therefore, complete flattening of asperities of the rough surface will not occur even if we assume the absence of such preventing from flattening factors as indentation size effect, initial cold work hardening and friction. It again emphasizes the role of asperities form in modeling the contact of rough bodies.
While deforming with a plane punch all domains protruding above the nominal level of rough surface in which hardening exceeds the hardening of material at the nominal level, will be further deformed slower, than the material at the nominal level, and can create a rough profile. Perfectly plastic material would apparently behave in another way, that is would be completely deformed adapting to the punch. In case of elastic-plastic material behavior, the forming of rough profile happens because of inhomogeneous spatial distribution of stresses and is possible only in combination with a "favorable" form of the objects being deformed. It seems that the models of asperities (for example, set of several pyramids) possess such a form, especially in case of inclusion asperities of the second level in them.
At the beginning when the top of the asperity is touched the maximal stresses is in the top area. In the course of deformation there comes a moment when stresses in top decrease to the level below the actual yield stress and the plastic deformation continues in deeper areas. So the strengthened top begins to hold its profile. Thus, the rough surface profile can be formed and supported at severe deformation in the near-surface layer of the body. At sufficient hardening of deeper layers and at the appropriate current surface profile, which leads to the necessary redistribution of pressure, the plastic deformation of the top can resume.
So, the assumption of perfect contact in the area of direct contact of asperities is mistaken. The asperities are not flattened completely because of the roughness of the second and higher levels, and also because of the hardening.
Complete flattening of asperities might perhaps occur only at a very high pressure when the displacements exceed several times the size of asperities being smoothed. It seems that such pressure cause plastic deformations noticeable with the naked eye and is not often met in assemblies.
Let us note that we did not consider the compliance of the bottom surface of the pyramids taking place for a real near-surface layer of roughness. This assumption could somewhat promote the smoothing of the rough profile. The indentation size effect and friction at their inclusion in solution could have the opposite effect. However, the above factors do not result in a qualitative change.
Conclusion
The results showed that, in the case of elastic-plastic behavior of material, even under considerable pressure and strains, complete flattening of asperities will not occur in the general case, because there will be no confined compression. The relief will contain asperities of different scale. The asperities of higher level will change the real contact area. The calculations carried out in the first approximation show the tendency for it to decrease [7,16]. The quantitation of influence of higher level asperities at elastic-plastic material behavior is a key issue for future research. Perhaps for some cases, higher level asperities can even be neglected. Nevertheless, multilevel or fractal descriptions of roughness, apparently, can be considered more truly characterizing the processes in contact.
One of the most important in determining the degree of influence of higher level asperities will be a combination of geometric shape of rough surface with the elastic-plastic properties of the material, especially with its stress-strain curve. In view of the geometric complexity of contacting bodies and the nonlinear behavior of materials, the use of 3D-models and numerical methods, finite element method in particular, could be promising here. Moreover, asperities of higher level can play a special role if we take into account the change of mechanical properties of near-surface material due to the indentation size effect. This will be explored as the future 7 , 0100 (2018) MATEC Web of Conferences matecconf/201 220 82200 development of the finite-element rough contact micromodel of the author, described in [7]. | 6,808.2 | 2018-01-01T00:00:00.000 | [
"Materials Science"
] |
Distributed Adaptive Cooperative Control With Fault Compensation Mechanism for Heterogeneous Multi-Robot System
In this paper, a distributed adaptive consensus law with fault compensation for a heterogeneous multi-robot system (MRS) is proposed. The design paradigm adopted in this work involves a leader-following cooperative algorithm featuring two distinct adaptive coupling gains to compensate for multiple additive time-varying faults. Exacerbating the situation, the follower robots commissioned in the leader-following mission are non-identical due to their dynamic characteristic as normally exist in a physical setup. The capability of the proposed scheme is investigated and compared with the other two recent works in two facets; one is to gauge how the algorithm is able to mitigate faults of varying nature in the presence of heterogeneous robot(s) while maintaining the platoon formation during the leader-following task; two is the ability to cope with subsequent topology reconfiguration. The stability and the robustness of the proposed scheme against bounded time-varying faults are proven using rigorous Lyapunov analysis. The proposed control strategy exempts the use of an observer or estimator, thereby simplifies the synthesis and implementation on mobile robots. The simulation results of the proposed adaptive consensus law demonstrate the best performance as compared to the other two recent works in the presence of multiple faulty robots.
I. INTRODUCTION
In recent decades, there have been a plethora of research studies on the cooperative control of multi-agent systems (MASs) [1]- [4]. The application of MASs has gained interest especially in multi-robot systems (MRSs) whereby a variety of automated applications such as surveillance, search and rescue, and exploration are notable examples. Without the loss of generality, the term MRS is referred in this paper to address a practical concern involving a platoon configured MASs which is not always dynamically homogeneous as illustrated in Fig. 1. These are deployed mostly in autonomous modes with a minimum of human supervision to travel autonomously in a strategic group formation or The associate editor coordinating the review of this manuscript and approving it for publication was Choon Ki Ahn . alignment in various geographic locations and under various terrain conditions. For agility and flexibility in carrying out a remote mission, each of the so-called agent robots is equipped with different on-board instrumentation, thereby exhibiting distinctive dynamics. Such heterogeneous characteristics pose a great challenge in controlling all the robots in a network to work cooperatively [5], [6]. When deploying a cooperative MRS autonomously, the mission time of the MRS is often governed by the finite energy reserve on board. This can be remedied by careful path planning in the mission field to reduce surplus travel. Such method requires terrain description, i.e., a priori information, which often is unavailable.
It is imperative to preserve the integrity of the platoon formation of an MRS by ensuring that any faults occurrence can be effectively compensated. Faults in question can be either emanating from individual robot agent or it can be environmentally induced during the autonomous mission. According to Chen, a 'fault' is described as an unexpected change in the system's operation [7].
Many effective fault-tolerant control methods (FTCs) have been extensively investigated for MRSs to guarantee system stability at an acceptable level. One possible solution for achieving MRS coordination in the presence of faulty robot(s) is to locally modify the control input of the faulty robot(s) [8]. In general, FTC solutions can be divided into two main categories: passive and active. A passive FTC refers to a control design that is robust to a fault occurrence without any modification of the control system, and this method is well-suited for low-dimensional scale application [9]. An active FTC, on the other hand, allows controller configuration for fault detection, estimation, compensation, and isolation [10].
For the active FTC solutions, many effective methods based on observer and estimator have been presented in the literature [2], [4], [10]- [13]. However, the study of FTC for MRSs is relatively new [14]. For MRSs, the distributed FTC observer-based is designed for a leader-follower consensus problem with constant additive faults and multiplicative faults in [15] and [16], respectively. However, the solutions presented in that literature are generally subjected to two significant constraints as follows: (1) depending on the nature of the system dynamics in question, the observer design may require some states as inputs, and it is important to have a state measurement that is free from noise; (2) certain estimator designs may require a persistent excitation condition for convergence, which is not always achievable in practice. Recently, several published studies explored the application of neural network (NN) as estimator in the FTC. The NN has self-learning capability, which is able to estimate unknown components of the system including faults [3]. In [17], the NN is proposed to compensate faults for homogeneous MAS. Nevertheless, since NN, depending on the designer's neural nodes choice, is computationally exhaustive, event-control is employed in conjunction with NN to reduce the computational burden [3], [18].
With an increase in the number of agents, the fault compensation becomes more challenging as more data are exchanged within the system [14]. In the absence of estimation, adaptive control is also an effective tool with proven application in the FTC for both linear and nonlinear single systems [19]. In a relatively large network of agents, it is possible to design an adaptive control by adjusting the coupling gain adaptively so that the system can counteract faults to fulfills the desired objective. In [20]- [23], the robustness and convergence of an MAS are improved by selecting a sufficiently strong coupling gain. The work in [24] infers that strong coupling gain and a large number of agents imply synchronization robustness in the MAS against heterogeneity. In [25]- [29], an extensive study on distributed adaptive consensus was presented with linear homogeneous MASs with and without considering faults. In [30], a distributed adaptive consensus law was designed for a heterogeneous MAS with scalar faults, which required all followers to know the leader dynamics to compute their control inputs. In [31], a robust adaptive consensus protocol was presented with the use of a threshold update protocol (TUP), in which exchanging information with neighbors is mandatory, thus limiting the capability of the proposed law to the undirected topology.
Motivated by the abovementioned studies, this paper proposes an adaptive consensus law for a linear heterogeneous MRS with time-varying faults, where the MRS can be regarded as a nontrivial nonlinear system. Two distinctive adaptive coupling gains approach is used to compensate for the fault existence without requiring any extra a priori information about faults. This method exemplifies a robust approach that is pragmatic since observer or estimator design is not needed. A unidirectional communication approach is considered to ensure the practicality of the proposed solution through minimum power consumption in communication activity. Compared to the existing works in FTC methods, the proposed control method has several key contributions: (1) a new adaptive consensus control is designed for a cooperative heterogeneous MRS under the presence of multiple additive time-varying faults; (2) An adaptive consensus law is designed based on two distinct adaptive coupling gains, that rely only on the relative state information and the agent's own dynamics, both of which are practically accessible; (3) the novel adaptive consensus law is designed using a Lyapunov analysis to compensate for the effects of the fault. In addition, not only the proposed scheme is robust against fault, the scheme is also evidently robust against changes in the interagent communication whereby deliberate re-configuration in communication was introduced between robots to testify its prowess. This paper is organized as follows. Section II summarizes the problem formulation and provides some basic notation of the vectors used in the rest of the paper. Section III introduces the distributed adaptive consensus with a fault-tolerant mechanism. Section IV shows the results of numerical simulations, and finally, Section V presents some conclusions and speculates on future works. VOLUME 9, 2021
II. PROBLEM FORMULATION
Cooperative control of a platoon consisting of N + 1 heterogeneous robots moving in a straight line on a flat or rough surface along an x-axis with a constant velocity is considered in this work. The MRS leader is indexed by 0, and the N followers are indexed from 1 to N . The control objective is to ensure that all MRS agents (robots) are moving at the same velocity as the leader while keeping a constant distance between one another to avoid collisions.
The MRS is said to have achieved the desired control objectives with the lead robot having constant velocity if for any given bounded initial states where d xi0 is a pre-specified distance vector on the x-axis between the followers and the leader that remains constant for all i and p xi (k) and v xi (k) are the position and velocity along the x-axis, respectively, for i = 0, 1, . . . , N . The leader robot moves with constant velocity under the steady-state condition (i.e., v x0 = 1).
A. GRAPH THEORY
Suppose that the information links among the follower robots within the platoon are unidirectional and there exists at least one directional link from the leader to the followers. Consider a directed graph G = (V, E) with a non-empty set of nodes V = {0, 1, . . . , N }, a set of edges E ⊆ V × V, and the associated adjacency matrix A = [a ij ] ∈ R N ×N . An edge rooted at the i-th node and ended at the j-th node is denoted by (i, j), which means that information can flow from robot i to robot j. a ij is an unweighted edge (j, i) and a ij > 0 if (j, i) ∈ E. Robot j is called a neighbor of robot i if (j, i) ∈ E. The in-degree matrix is defined as D = diag{d ij } ∈ R N ×N with d ij = N j=1 a ij . The Laplacian matrix L ∈ R N ×N of G is defined as L = D −A. If the i-th follower observes the leader then an edge (0, i) between them is said to exist with the pinning gain g i > 0. We denote the pinning matrix as G = diag{g i } ∈ R N ×N , where g i is the pinning gain with g i > 0 if and only if robot i can receive information directly from the leader robot; otherwise, g i = 0. It is assumed that at least one follower is connected to the leader. Denote H = L + G, and all the eigenvalues of the matrix H are denoted by λ i for i = 0, 1, . . . , N are real and positive [32], [33].
B. DISTRIBUTED HETEROGENEOUS MRS MODEL
Consider a group of N + 1 heterogeneous MRS agents, consisting of N followers and a leader that moves in a 2-D plane; the general dynamics of each robot in the platoon can be expressed as where x i (k) ∈ R n is the state, u i (k) ∈ R m is the control input, and f i (k) ∈ R n is the signal indicating the occurrence of faults in the i-th follower. This means that f i (k) = 0 when node i is subject to a fault at k, and index i ∈ {0, 1, . . . , N } is the index of i-th robot in a network. We assume that the leader is fault-free. The introduced fault signal can be viewed as a system fault in the dynamics of (2), i.e., actuator faults which may be caused by physical effects or cyberattacks over the communication network [34]. Moreover, the system dynamics in (2) can be considered as those of a nonlinear system since the fault being introduced here is time-varying in nature.
To further elucidate the heterogeneity of the MRS system considered in this paper, without loss of generality, a particular basic structure of the dynamics of heterogeneous MRS agents is considered. Let position, (2) can also be written as a double integrator form wherebyṗ i = v i and m ivi = −v i + u i , where m i represents the mass of the i-th robot. Let the system and input matrices (A i , B i ) be described as The heterogeneity introduced in the i-th agent is different compared to the work in [34] because each robot in the network may exhibit different masses due to the variety of mobile platforms deployed in an autonomous mission. The inertial time lag in the differential acceleration or jerk in [30] may not be suitable for the smaller rigid body dynamics of the mobile robot exemplified in this paper.
Even though the considered MRS system is specific, the main concept of this paper is applicable to other types of MAS systems or other cooperative control problems since the proposed adaptive law is only dependent on neighbors and their local information, i.e., the agents' own dynamics and relative state information.
C. FAULT MODEL
Any type of fault at any level of magnitude may immediately or gradually degrade the overall MRS performance, which leads to instability and eventually collision among the members within the platoon. Therefore, fault compensation should be investigated in designing a practical consensus law. In a case where a fault with ''high'' severity occurred among the MRS agent and reaches a magnitude beyond the acceptable threshold, the mission is suspended if there is no change to the current robot coordination setting. Nevertheless, to ensure that the mission can continue and complete the objective, isolation of the faulty robot within the MRS and reconfiguration of the robot's coordination setting may be required, which leads to alteration of the current communication topology.
In this paper, the considered additive fault is represented by a sudden unintended acceleration or deacceleration of a robot, which often can be due to mechanical, electronic, or software-related problems. Furthermore, the fault could transpire momentarily or continuously as represented by Intermittent fault at time k = true if at the steady-state v xi = v x0 and v xi <v xi , Permanent fault at time k = true if at the steady-state, v xi = v x0 and v xi ≥v xi wherev xi is the isolation threshold. Each robot will observe its control input once per control step since the first consensus convergence is achieved during the mission. To ensure a stable, reliable, and robust MRS, an accurate measurement of the maximum allowable or tolerated fault magnitude should be quantified before the fault isolation and reconfiguration can be designed.
The main objective of the proposed consensus adaptive law is to minimize the fault strength produced by any follower robot(s) in the MRS. The magnitude of the adaptive parameters in the consensus law increases or decreases to reduce the fault magnitude at every step. In the proposed consensus law, two distinct adaptive coupling gains are employed to provide better consensus convergence for the MRS. For a continuous or permanent fault signal, the isolation threshold could be initially specified, which leads to exclusion of the faulty robot(s) and reconfiguration of the MRS coordination setting for the remaining healthy and semi-healthy robots to continue and complete the assigned mission. The semihealthy robot is the robot with a fault magnitude below the isolation threshold value within a particular interval.
The communication graph G among the N + 1 agents is assumed to satisfy the following assumptions.
Assumption 1: The graph G contains a directed spanning tree with the leader as the root node. The graph G is connected and there exists at least one path from the leader to the follower.
The stated assumption here is to highlight that a directed tree communication graph is assumed [32], [33]. A platoon of heterogeneous robots is aligned in a queue form as illustrated in Fig. 1.
This assumption is necessary for the state feedback control design and sufficient for the existence of a positive definite matrix, P.
Assumption 3: The desired trajectory is the Lipschitz condition and bounded, which exists a positive real constant κ such that, for all real x 1 and x 2 , This assumption is required to ensure that the trajectory for all robots is continuously differentiable for (4) to function [35]. [40]): Under Assumption 1, the matrix L + G is symmetric and positive definite.
Lemma 1 ([36]-
Lemma 2 ( [28], [41]): If a and b are nonnegative real numbers and p and q are positive real numbers such that 1 p + 1 q = 1, then ab ≤ a p p + b q q, and the equality holds if and only if a p = b q . Lemma 3 [42]: The Cauchy-Schwartz inequality states that the absolute value of the vector dot product is always less than or equal to the product of the vector norms a T b ≤ a b .
III. DISTRIBUTED ADAPTIVE CONSENSUS DESIGN
The proposed control objective is to ensure that all follower robots maintain the same velocity as the leader while keeping a constant distance to avoid collision during and after the unexpected fault occurrence at any follower robot. Fig. 2 illustrates the framework of the proposed adaptive scheme. Taking the relative states of neighboring agents, the cooperative control objectives of the heterogeneous MRS in (2) and (3) are achieved when the following adaptive control law is applied to the i-th follower robot for all i.
Let the consensus error indicates the desired static formation vector, c i and β i denotes the adaptive coupling gains associated with c i (0), β i (0) ≥ 1, K i and i are the feedback gain matrices, w i is the smooth function and r is a small positive constant that needs to be determined later. The control law at agent i is calculated using the most recently received states for the position and velocity of itself and its neighbors. Two distinct adaptive gains are employed in control input, u i to further improve the consensus convergence and tracking. This protocol aims to ensure all robots reach consensus in position and velocity and that The closed-loop dynamics of the heterogeneous MRS can be obtained by substituting (4) into (2) as follows: Based on (5), the closed-loop consensus error dynamics,ξ i can be expressed aṡ whereḋ i = [ 0 0 0 0 ] T and A i d i = 0, which yields the network-based error dynamicṡ Remark 1: Note that the consensus law is based solely on the dynamics of the agent itself and the information of the neighboring agent. The formulation of agent consensus law, u i is inspired by the adaptive strategy in [28] and [30]. In comparison to the adaptive law in [28] and [30], the novel adaptive law has two distinct features. First, unlike the adaptive protocol in [28], which employs a single term and an adaptive coupling gain, the adaptive protocol in (4) introduces two terms and two distinct adaptive coupling gains, c i and β i . As a consequence, the errors in the synchronization and control input are effectively suppressed, thus improving the convergence. Second, contrary to the adaptive protocol in [30], which is dependent on the leader's dynamics and uses a combination of constant and adaptive gains in two separate terms to further attenuate the heterogeneity of the agents, the proposed adaptive strategy (4) introduces a law that is independent of the leader's dynamics and has two distinct adaptive gains in two separate terms, allowing the MRS to be robust against time-varying and ''high'' severity faults while also improving the execution characteristics of the distributed MRS.
The following theorem presents a result on the design of the robust adaptive consensus law.
Theorem 1 For a graph satisfying Assumption 1, the N robots in (2) and (3) for i = 1, . . . , N reach consensus under a leader-follower based protocol (4) with two distinct adaptive gains, c i and β i , and gain matrices,
which is a solution to the algebraic Riccati equation (ARE),ĀP
whereP diag(P 1 , . . . , P N ),Q diag(Q 1 , . . . , Q N ), and Q i is a symmetric positive definite matrix and the coupling gains,c i and β i converges to finite values for k → ∞.
Proof: Consider the following Lyapunov function where γ i1 and γ i2 are positive scalars to be determined later. From (9), since P i > 0, it can be seen that V is positive definite with respect to ξ i , c i , and β i for i = 1, 2, . . . , N . The time derivative of V along the trajectory of (6) can be obtained aṡ By substituting K i = −B T i P i , with the adaptive law,ċ i anḋ β i defined in (4), it is obvious to arrive at the following: Define i = P i B i B T i P i ; then, it follows from (11) thaṫ Let¯ =PBB TP where¯ diag( 1 , . . . , N ) for i = 1, . . . , N . Then, (12) can be rewritten into a compact form as followṡ diag(γ 12 , . . . , γ N 2 ) ∈ R N ×N ,β = diag(β 1 , β i , . . . , β N ) ∈ R N ×N , and r are the positive scalars.
Invoking Lemma 1, (L + G) > 0 and taking the upper bound of the solution to ARE in (8) with careful selection of γ 1 such that 0 < (c +w) holds, then, Taking the triangular inequality as in Lemma 3, yields the following upper bound for (15), where tr(•) is the trace operation of a matrix and ϕ diag(ϕ 1 , . . . , ϕ N ) ∈ R N ×N . Then, proceeding from (16), where • F indicates the Frobenius norm. Further applying the triangular inequality to (17) yields, Completing the square of the term in (18) further concludes the upper boundṡ where λ min = (•) represents the minimum singular values of the matrix in question andF = 2 ξ Pf represents the faults occurring within the network.
From (17), r is chosen such that the following equation holds r (L + G) ≥ (I N ⊗ 1). Eventually, To guarantee the consensus convergence, it is important to have λ min (Q) > 2(Please see Remark 2). By choosing γ 2 such that R ≥ 0 +F, we can obtain thatV ≤ 0, and thus V is bounded. From (20), according to LaSalle's Invariance principle [35], it follows that the consensus error ξ asymptotically converges to zero within the compact set of which the size of the invariance depends on the fault magnitude, the boundedness of the leader trajectory x 0 and the leader's control input. The adaptive gains, c i and β i are ultimately bounded. Thus, the proof is completed.
Remark 2: The simulation parameters were selected by design based on the aforementioned Theorem 1. Suppose and the condition λ min (Q) > 2 holds; then, the term in (20): −λ min Q ξ 2 should be bounded asV 1 ≤ −ηV 1 , where η = λ min (Q)/λ max (P) denotes the convergence rate of the consensus. We should choose r appropriately so that R ≥ 0 +F holds. In addition, the adaptive coupling gain condition, i.e.,c > γ 1 should be adhered to guarantee the overall performance of the MRS during the adversity of time-varying faults while driving the consensus convergence error close to zero.
IV. SIMULATION RESULTS
Consider a heterogeneous MRS that moves along the x-axis of a two-dimensional coordinate frame and is connected by a directed communication topology, as shown in Fig. 3. Let where ω is the white noise, k is the time index and T is the sampling period, which is equal to 0.001 s. The fault parameters are set to a = 2, b = 1.5, and d = 1.5, which yields the specific fault signal depicted in Fig. 4. The fault signal is simulated either as a rectangular signal or a soft bias signal (slope) at a different instance. The fault magnitude in Fig. 4 is categorized as ''low'' severity. There is no fault for robot 3 and robot 5.
The parameters m i are arbitrarily selected and tabulated below in Table 1. Hence, the dynamics of the i-th mobile robot is characterized by the following matrices It is assumed that the robots are communicating with one another according to the information graph shown in Fig. 3. For the proposed adaptive law, the simulation parameters are designed as r = 5, c i (0) = 2, and β i (0) = 1 for i = 1, . . . , N . By solving the LMI in (8), the feasible solution matrix P i is obtained as The leader moves along the x-axis with a constant velocity. To show the effectiveness of the proposed adaptive law, the results are compared with [28] and [30]. Referring to Fig. 5, for the first 30 s of the simulation, all robots move synchronously together to achieve cruising velocity v x0 = 1m/s with a constant distance vector of 20 m.
However, since the robots were unable to achieve convergence within 30 s, fault occurrence further deteriorates both position and velocity signals, causing large oscillations. Following (20) in Theorem 1, without adaptive gains c i and β i , any increase in fault magnitude (F term) would causė V > 0, thus, V is no longer guaranteed to be bounded value. During the fault occurrence, the changes in trajectories are influenced not only by the fault signals but also by each particular robot's dynamics. In comparison to robot 1, robot 2 produced position and velocity signals that were unsmooth as the rate of acceleration slowed. Robot 4, however, followed the robot 2 trajectories since the output of robot 2 indirectly connected to robot 4 via robot 3. In the absence of the proposed adaptive law, the resulting trajectories of the agents are similar to those reported in [34], as shown in Fig. 5. The velocity curve shown in Fig. 7(b) indicates that the adaptive law proposed in [28] has a slower convergence performance than that of the proposed adaptive law in Fig. 8(b). The proposed adaptive law has good tracking properties; however, consensus results for [30] outweighed the proposed law as depicted in Fig. 6(b).
In comparison to [28], only one adaptive coupling gain is used in the control input. In [30], the control input contains two coupling gains, but only one of them is designed to be adaptable. Therefore, this paper introduces two distinct adaptive coupling gains in the consensus law to produce relatively rapid velocity convergence while ensuring robust stability of the mobile robot system, as shown in Fig. 8.
According to Theorem 1, adaptive gain β further attenuate the heterogeneity as long as R ≥ 0 +F is hold. This is the key advantage of the proposed adaptive compared to [28] and [30]. In addition, the r parameter is also part of the R, which VOLUME 9, 2021 simply acts as a tuneable parameter to increase or decrease the rate of the agents' response according to the user preference. For position tracking, all simulated adaptive algorithms are capable of minimizing fault strength, avoiding collisions, and allowing faulty robots to quickly revert to the desired position after the fault is removed. In Fig. 9, the coupling gains, c i and β i for the proposed law converged to a new finite value to counteract the occurrence of the fault.
To demonstrate the robustness of the algorithm, a further comparison is made between [30] and the proposed law, with the magnitude of f x2 and f y2 increased tenfold to signify ''high'' severity and with the remaining faults unchanged. The performance of the laws is illustrated in Figs. 10 and 11.
Figs. 10 and 11 show that with a larger magnitude of f x2 and f y2 , [30] produces a longer convergence time, more than twice that of the proposed adaptive law.
In addition, the proposed adaptive law produces the same convergence time as in the previous results in Fig. 8(b) demonstrating the robustness of the proposed consensus law. Furthermore, as shown by the relative velocity difference in Figs. 8(b) and 11(b), all robots are strongly connected during both normal and fault conditions. Remark 3: [30] and the proposed consensus law both have two coupling gains in two separate control input terms. Unlike [30], which used a combination of constant and timevarying gains, the adaptive design of the proposed consensus law employs two distinct adaptive coupling gains to enhance convergence.
The trajectories of the control input u xi for the three adaptive laws are depicted in Fig. 12. According to Fig. 12(a) and (c), it is observed that both Hu's law and the proposed law exhibit very high overshoot in the beginning instances. Referring to the inset images in Fig. 12(a-c), Hu's law produced high control effort at 50 s during the fault occurrence period, in contrast to the smooth and non-fluctuating control input u x2 obtained using Lv's law and the proposed law. When compared to Lv's law, the proposed law has a slightly shorter convergence time at the start of the mission and at t > 70s. With the application of two distinct adaptive gains, the proposed consensus law shows the capability to effectively suppress the MRS heterogeneity under both transient and steady-state conditions. Despite the slightly aggressive value of the control input u xi as shown in Fig. 12(c), the control effort produced is acceptable and satisfactory.
Remark 4: The results in Fig. 12 highlight that the introduced novel approach of two distinct adaptive gains did not incur exhaustive control efforts, while awarding a high degree of robustness against the time-varying faults.
As Remark 4 implies, low control effort, as evidently illustrated by Fig. 12, translates to light controller computation, which is amenable to a remote practical application when energy resources are scarce.
Moreover, to analytically compare the transient controllers' efforts for the three adaptive laws, ISE and IAE are utilized for the velocity error, |v xi (k) − v x0 (k)| as the controller performance indices. According to the performance indices in Table 2, the proposed adaptive law outperforms Hu's law [30] for relatively large fault magnitudes but outperforms Lv's law [28] for small fault magnitudes. The presented results, validating the effectiveness of the proposed adaptive law, demonstrate that it is more applicable than the existing adaptive laws.
In this paper, the general additive type of faults is explicitly considered. The fault is assumed to be intermittent, and the maximum magnitude that can be tolerated is bounded by f m . A fault magnitude that exceeds f m represents a condition where the entire MRS may become unstable, and the positioning of the robots can result in a collision. According to Fig. 11(b), as the magnitude of the fault increases, so does the magnitude of the follower robot's velocity. In practice, the limits on the actuator operation range should not be exceeded to prevent mechanical failure of the robots and to maintain optimal MRS operation. Hence, to complete the mission despite faulty teammates with f m or permanent faults, an active fault tolerance strategy can be designed to remove the faulty robot(s) from the team and allow the remaining healthy and semi-healthy (fault occurrence below the maximum limit) robots to automatically reconfigure themselves.
The exclusion of faulty robots from the team could be executed by employing fault isolation thresholds. In this case, all robots must observe their control inputs and isolate themselves if they exceed a certain threshold by withdrawing from the mission and cutting off communication so that the remaining healthy and semi-healthy robots can automatically adjust their adaptive coupling parameters in their consensus laws to account for changes in the communication topology. Since all the robots rely solely on the relative state difference with their neighbors to compute the consensus law, this isolation process can be achieved. There is, however, a clear limitation of the automatic isolation sequence using the current unidirectional topology. For instance, in Fig. 13, due to the presence of a fault above the threshold, robot 2 automatically initiates self-removal from the MRS and stops moving, while the remaining robots reorganize themselves to continue participating in the MRS to complete the assigned mission.
However, because all robots are unidirectionally connected to a single robot, the ejection of robot 2 from the MRS causes the immediate neighbor of robot 2 to adjust the adaptive parameters based on the position and velocity of robot 2. A cascading effect on the remaining robots that are indirectly connected to robot 2 leads to a failed mission. Therefore, alternatively, each robot can be connected to at least two neighbors to reduce the possibility of total failure.
Remark 5:
To ensure that the coordination of the MRS remains stable during isolation, a new restriction is implemented where all the followers must be connected to two or more neighbors to maintain global synchronization. However, the optimal topology should be investigated since more neighbors does not always guarantee better consensus convergence. For further discussion, please refer to [39].
For comparison, the topology in Fig. 3 is modified by adding a communication link between robots 1 and 3, as illustrated in Fig. 14 by the red dashed line. With the additional communication link, the isolated faulty robot 2 does not affect the remaining healthy robots in the MRS, as shown in Fig. 15. The immediate neighbors of robot 2 automatically recalculate their adaptive consensus law to cope with the changes in their relative state information with the remaining neighbors. In addition, fast convergence can be achieved immediately after the faulty robot is removed. Both adaptive gains converge to finite values. It is noted that since both robots 1 and 4 are relatively lightweight compared to robot 2, the ''low'' severity fault subjected to these robots, as depicted in Fig. 4, has a minimal effect on the agents' position and velocity.
It is worth mentioning that the MRS with a permanent fault required more information exchange to effectively isolate the fault. However, because the amount of information exchanged in the network is proportional to the number of communication links, communication demand can be minimized by limiting the number of neighbors with whom each agent is permitted to communicate and determine the optimal network topological design for a high probability of permanent fault. The results obtained are congruent with the analysis in Theorem 1, whereby as long as the condition of R ≥ 0 +F and Lemma 1 are fulfilled, multiple timevarying faults in the MRS can be accommodated by using the proposed adaptive law. Contrary to work in [31], the proposed law can be applied to the case of directed and undirected network topology.
The proposed adaptive law performance indices for each robot before and after isolation are tabulated in Table 3 .
The results presented in Table 3 suggest that the proposed adaptive law with modified topology is much more acceptable for efficient and robust fault-tolerant control, mainly for multiple time-varying faults. The simulation proved that MRS reconfiguration can be done adaptively without the use of a sophisticated control algorithm.
V. CONCLUSION
In this paper, a distributed leader-follower adaptive consensus law for a linear heterogeneous MRS is proposed. The proposed consensus law employs two distinct adaptive gains to improve tracking and convergence performance to ensure a safe separation between the robots in the presence of multiple additive time-varying fault occurrences. The proposed strategy allows for maintaining a limited communication burden; i.e., the unidirectional information exchanged among neighbors for relative state computations. Simulation results of the MRS verified the effectiveness of the proposed adaptive law. Future research may be devoted to an extension of the current work to a nonlinear MRS, switching topology, and communication delay. | 8,429.4 | 2021-01-01T00:00:00.000 | [
"Engineering"
] |
Widely tunable fiber optical parametric oscillator synchronized with a Ti:sapphire laser for stimulated Raman scattering microscopy
Stimulated Raman scattering (SRS) microscopy is a powerful vibrational imaging technique with high chemical specificity. However, the insufficient tuning range or speed of light sources limits the spectral range of SRS imaging and, hence, the ability to identify molecular species. Here, we present a widely tunable fiber optical parametric oscillator with a tuning range of 1470 cm−1, which can be synchronized with a Ti:sapphire laser. By using the synchronized light sources, we develop an SRS imaging system that covers the fingerprint and C–H stretching regions, without balanced detection. We validate its broadband imaging capability by visualizing a mixed polymer sample in multiple vibrational modes. We also demonstrate SRS imaging of HeLa cells, showing the applicability of our SRS microscope to biological samples.
Introduction
Vibrational imaging techniques based on stimulated Raman scattering (SRS) provide high chemical specificity and fast imaging speed [1][2][3][4].In SRS microscopy, two synchronized ultrashort pulses, referred to as pump and Stokes, coherently excite a molecular vibration that matches their frequency difference.Wide and fast tuning of this frequency difference is necessary to visualize distinct molecular species in diverse biomedical applications, such as cancer detection [5], monitoring drug delivery/interaction [6,7], and imaging cell metabolism [8][9][10][11][12][13].Ideally, the tunability from 500 to 3100 cm −1 is desired to cover the entire chemically informative molecular vibrational region [14,15].The high sensitivity of an imaging system is also crucial for acquiring clear images in a short time.The signal level is proportional to the power of pump and Stokes pulses, and the noise level is determined by the intensity noise of pump or Stokes pulses used for the lock-in detection [16,17].
One of the major challenges of current SRS imaging systems is the limited tuning range or speed of light sources.The most commonly used tunable light source for SRS is a synchronously pumped optical parametric oscillator (OPO) [18,19].While OPOs offer a wide tuning range over 4000 cm −1 , covering the entire Raman spectra, their tuning process can extend beyond one minute, especially when involving substantial wavelength changes.This lengthy time is attributed to the time-consuming temperature adjustments of the nonlinear crystal within the OPO.Recently, the spectral focusing method using a femtosecond OPO is becoming popular [20][21][22].This method uses two-color linearly chirped pulses, and tuning of the excitation wavenumber is realized by changing their temporal delay.Typically, the tuning range is limited to ∼300 cm −1 by the spectral widths of femtosecond pulses.Another commonly used tunable laser is a picosecond Yb-or Er-doped fiber laser that can tune the wavelength in the millisecond order or less [23][24][25].However, the tuning range of rare-earth-doped fiber lasers is, again, limited to ∼300 cm −1 by the gain bandwidth of the doped materials.
To overcome these limitations, tunable light sources using optical nonlinearities such as supercontinuum generation and four-wave mixing (FWM) have been proposed [26][27][28][29][30].Among them, a picosecond fiber optical parametric oscillator (FOPO) pumped by a tunable fiber laser is promising because of not only its remarkable tuning capability but also its high power spectral density [31][32][33].Wide and fast tuning with a FOPO was realized based on dispersion filtering and changing the repetition rate and the wavelength of a seed oscillator [31].Adjusting the repetition rate is advantageous over a previous tuning method that involved changing the cavity length by moving a delay stage, which hinders rapid wavelength tuning [34][35][36].For the application to SRS microscopy, however, the substantial noise associated with a FOPO and a fiber laser, which is a straightforward pump-Stokes combination, is a critical issue [32,37,38].Balanced detection can suppress the large excess noise, while it sacrifices at least 3 dB of signal-to-noise ratio (SNR) compared to shot-noise-limited detection with a single photodiode (PD).
In this work, we present a widely tunable picosecond FOPO with a tuning range from 819 to 931 nm, which corresponds to 1470 cm −1 .By synchronizing this FOPO and a tunable Yb fiber laser with a Ti:sapphire laser, we develop an SRS imaging system covering the fingerprint and C-H stretching regions.The wavelength of the FOPO can be tuned using an intracavity grating-based spectral filter while the pulse repetition rate is kept constant.This tuning method enables the FOPO to be synchronized with the external Ti:sapphire laser for the low-noise detection of SRS signals.To validate its broadband imaging capability, we demonstrate SRS imaging of a mixed polymer sample at several vibrational modes.We also demonstrate SRS imaging of HeLa cells to confirm the applicability of our SRS microscope to biomedical imaging.
Yb fiber laser for pumping the FOPO
Figure 1 shows the schematic of the SRS imaging system including the widely tunable FOPO and the Yb fiber laser for pumping it.Note that the Yb fiber laser produces pump pulses for the FOPO and Stokes pulses for SRS.The Yb fiber laser consists of an oscillator, a spectral broadening and filtering part, and amplifiers.The Yb fiber oscillator in a figure-nine configuration [14,39,40] generates seed pulses with a center wavelength of 1030 nm and a repetition rate of 38 MHz.The seed pulses are spectrally broadened via self-phase modulation (SPM) and spectrally filtered by a tunable filter [14,24], which consists of a diffraction grating and a galvanometer scanner.The spectral width of the filtered pulses is approximately 0.2 nm with a tuning range of more than 30 nm.The filtered seed pulses are amplified by cascaded two Yb-doped fiber amplifiers (YDFAs) and are divided into two branches.One branch is directed toward a FOPO port, and the other is toward a Stokes port for SRS.In the first branch, the pulses are further amplified by a double-clad YDFA to excite the following FOPO.The double-clad gain fiber (Yb1200-20/125DC-PM, Liekki) has a core diameter of 20 µm and is coiled for single-mode operation [41].This fiber has a low nonlinear coefficient and a large gain per length, both of which mitigate SPM-induced spectral broadening in the main amplifier.High power spectral density in the pump pulses is important for effectively pumping the FOPO.
The spectrum and intensity autocorrelation trace of the pump pulses generated from the Yb fiber laser are shown in Fig. 2. The spectral full-width at half-maximum (FWHM) is 0.48 nm when the average power is 870 mW at 1036 nm (Fig. 2(a)).Although the spectrum is broader than that before the power amplification, it does not exhibit the large SPM-induced spectral distortion, leading to the high power spectral density of the pump pulses compared with our previous work [33].The average power can exceed 2 W by increasing the pump power of the double-clad
FOPO
The setup of the FOPO presented here is based on that in our previous work [33].The pump pulses produced by the Yb fiber laser pass through an isolator and are focused into a 48 cm photonic crystal fiber (PCF, SUP-5-125-PM, Photonics Bretagne).The isolator prevents the pump pulses reflected on a PCF facet from returning to the fiber amplifier.The pulse power at the lens entrance is 840 mW, and the coupling efficiency to the PCF is approximately 75%.
Via FWM in the PCF, the pump pulses generate frequency up-shifted signal and down-shifted idler pulses, whose frequencies are determined by the phase-matching condition.The residual pump pulses and the idler pulses are blocked by a short-pass filter.This short-pass filter plays an important role in preventing pump pulses from entering the microscope or causing unwanted scattering within the free-space optics.At a polarizing beam splitter, most of the power of the YDFA.At such a high power, a spectral dip due to SPM appeared.The pulse duration at the same wavelength and at 870 mW is 7.2 ps under the assumption of a Gaussian waveform (Fig. 2(b)).The absence of oscillation in the spectrum and pedestal in the intensity autocorrelation trace indicates the high quality of the pulses.
FOPO
The setup of the FOPO presented here is based on that in our previous work [33].The pump pulses produced by the Yb fiber laser pass through an isolator and are focused into a 48 cm photonic crystal fiber (PCF, SUP-5-125-PM, Photonics Bretagne).The isolator prevents the pump pulses reflected on a PCF facet from returning to the fiber amplifier.The pulse power at the lens entrance is 840 mW, and the coupling efficiency to the PCF is approximately 75%.
Via FWM in the PCF, the pump pulses generate frequency up-shifted signal and down-shifted idler pulses, whose frequencies are determined by the phase-matching condition.The residual pump pulses and the idler pulses are blocked by a short-pass filter.This short-pass filter plays an important role in preventing pump pulses from entering the microscope or causing unwanted scattering within the free-space optics.At a polarizing beam splitter, most of the power of the
FOPO
The setup of the FOPO presented here is based on that in our previous work [33].The pump pulses produced by the Yb fiber laser pass through an isolator and are focused into a 48 cm photonic crystal fiber (PCF, SUP-5-125-PM, Photonics Bretagne).The isolator prevents the pump pulses reflected on a PCF facet from returning to the fiber amplifier.The pulse power at the lens entrance is 840 mW, and the coupling efficiency to the PCF is approximately 75%.
Via FWM in the PCF, the pump pulses generate frequency up-shifted signal and down-shifted idler pulses, whose frequencies are determined by the phase-matching condition.The residual pump pulses and the idler pulses are blocked by a short-pass filter.This short-pass filter plays an important role in preventing pump pulses from entering the microscope or causing unwanted scattering within the free-space optics.At a polarizing beam splitter, most of the power of the signal pulses is coupled out for the Stokes pulses of SRS, and the small remaining power is sent into the cavity.The output coupling ratio is adjusted by a half-wave plate (HWP).The signal pulses in the cavity pass through an automatic delay stage adjusting the cavity length, which is dependent on the wavelength due to the group velocity dispersion (GVD) of the PCF.They also pass through an intracavity spectral filter that consists of a galvanometer scanner, a 4f optical system, and a diffraction grating.The use of the spectral filter is essential to stably generate picosecond pulses; without it, the spectral width could exceed 10 nm.The passband of the spectral filter is controlled by changing the angle of the galvanometer scanner.After the delay stage and spectral filter, the signal pulses meet with the next pump pulses at a dichroic mirror.These pulses overlap in time and space, and the signal pulses are amplified via FWM in the PCF.Tuning of the pump wavelength shifts the resonant signal wavelength according to the phase-matching condition, leading to wide wavelength tuning of the FOPO.Depending on the signal wavelength, the delay stage and the spectral filter are adjusted.In contrast to dispersion tuning, the use of the intracavity filter allows us to keep the repetition rates unchanged during wavelength tuning.
To characterize the FOPO, we measured the spectrum, power, and intensity autocorrelation when tuning the wavelength.We controlled the automatic delay stage and two spectral filters in the Yb fiber laser and the FOPO.The HWP was also adjusted to maximize the output power.There was no need for alignment of other components.signal pulses is coupled out for the Stokes pulses of SRS, and the small remaining power is sent into the cavity.The output coupling ratio is adjusted by a half-wave plate (HWP).The signal pulses in the cavity pass through an automatic delay stage adjusting the cavity length, which is dependent on the wavelength due to the group velocity dispersion (GVD) of the PCF.
They also pass through an intracavity spectral filter that consists of a galvanometer scanner, a 4f optical system, and a diffraction grating.The use of the spectral filter is essential to stably generate picosecond pulses; without it, the spectral width could exceed 10 nm.The passband of the spectral filter is controlled by changing the angle of the galvanometer scanner.After the delay stage and spectral filter, the signal pulses meet with the next pump pulses at a dichroic mirror.These pulses overlap in time and space, and the signal pulses are amplified via FWM in the PCF.Tuning of the pump wavelength shifts the resonant signal wavelength according to the phase-matching condition, leading to wide wavelength tuning of the FOPO.Depending on the The measured spectrum and power are shown in Figs.3(a) and 3(b), respectively.The tuning range defined by the FWHM of powers is from 819 to 931 nm, which corresponds to 1470 cm −1 .This wide tuning range spans across the entire fingerprint region or multiple Raman regions.The FWHM of the spectrum is between 0.8 and 1.6 nm (10-19 cm −1 ).This spectral width is comparable to a typical Raman linewidth (<20 cm −1 ) and is narrow enough to distinguish Raman peaks [42].The output power reaches up to 81 mW.Considering the transmittance of our imaging system in the FOPO's tuning range, we can achieve an average power of tens of mW on the sample plane, which is sufficient for SRS. Figure 3(c) shows the intensity autocorrelation trace when the center wavelength is 885 nm.The pulse duration is 2.7 ps under the assumption of a Gaussian waveform, and it differs from 1.4 to 3.4 ps depending on the center wavelength.
Imaging system
The light sources of the SRS imaging system are the Ti:sapphire laser (Mira900D, Coherent) for the pump and the Yb fiber laser and the FOPO for the Stokes, as shown in Fig. 1.The pump pulses have a repetition rate of 76 MHz, a pulse duration of 3.5 ps, and a fixed wavelength of 789 nm.The Stokes pulses have a repetition rate of 38 MHz, which is exactly half of the pump pulse repetition rate.The Yb fiber laser is subharmonically synchronized with the Ti:sapphire laser by active feedback control.Specifically, this synchronization mechanism relies on adjusting the cavity length of the Yb fiber oscillator using an intracavity electro-optic phase modulator and piezo stage, based on the time delay between pump and Stokes pulses detected by a two-photon absorption PD [14].The Yb fiber laser and the FOPO are passively synchronized through synchronous pumping.These synchronizations result in the synchronization of the Ti:sapphire laser and the FOPO.The tuning range of the Yb fiber laser is from 1014 to 1047 nm (310 cm −1 ) [14,23], and that of the FOPO is from 819 to 931 nm (1470 cm −1 ).Considering the pump wavelength of 789 nm, the spectral region that can be accessed is 460-1930 cm −1 and 2810-3120 cm −1 , covering the fingerprint and C-H regions.
Two Stokes beams from the fiber laser and the FOPO are switched by a mirror.The pump beam and the selected Stokes beam are combined by a short-pass dichroic mirror (DMSP805R, Thorlabs), and a 2D galvanometer scanner scans the combined beam.The scanner plane is imaged to the pupil of a water immersion objective (60×, NA1.2, UPLSAPO60XW, Olympus) to focus the beam on the sample plane.The outgoing light transmitted through the sample is collected using another objective (UPLSAPO60XW, Olympus).The remaining Stokes beam is completely blocked by a short-pass filter (FESH0800, Thorlabs).After beam size reduction through a 4f optical system, only the pump beam is incident to a PD.The SRS signal is demodulated using a custom-made 38 MHz filter circuit and a homemade lock-in amplifier and is detected by a data acquisition system (USB-6363, National Instruments).All SRS images were acquired with an image size of 80 × 80 µm 2 , 500 × 500 pixels, 4 µs pixel dwell time, and no averaging.
HeLa cells used in this study were cultured in Dulbecco's modified Eagle's medium (12320-032, Gibco) supplemented with fetal bovine serum (SH30079.01,HyClone, GE Healthcare) and penicillin-streptomycin (15140148, Invitrogen) in an environment maintained at 37 • C with 5% CO 2 .The HeLa cells were seeded at a density of 4 × 10 4 cells in 500 µL of the medium onto a coverslip (C012001, Matsunami) in a 4-well dish (Thermo Scientific) and incubated for 3 days.The cell medium was replaced with 250 µL of fixation buffer (420801, BioLegend) and incubated for 20 minutes, followed by washing with phosphate-buffered saline (PBS), (166-23555, FUJIFILM Wako).For SRS imaging, the fixed cells were enclosed between two coverslips in PBS using an imaging spacer.
Results
To verify the broadband imaging capability of our SRS microscope, we first performed SRS imaging of a mixture of PMMA and PS beads.In the C-H region, PMMA and PS have Raman peaks at ∼2950 and ∼3050 cm −1 , respectively.They also have several vibrational modes in the fingerprint region [43].For example, the ring breathing mode of PS provides a strong Raman signal at ∼1000 cm −1 [44].We acquired a total of seven SRS images at 600, 813, 1000, 1452, and 1600 cm −1 in the fingerprint region as well as 2950 and 3050 cm −1 in the C-H region.SRS signals at 1000, 1600, and 3050 cm −1 targeted PS, while those at the other wavenumbers targeted PMMA.It took about one minute to tune the wavelength and to adjust the time delay between pump and Stokes pulses.The pump power was set to 5 mW on the sample plane to prevent signal saturation and minimize damage to the sample.The Stokes powers were 31-53 mW depending on the wavenumber.
Figure 4 shows the obtained SRS images of the mixed polymer sample.The images are arranged in two rows based on whether the targeted vibrational modes belong to PMMA or PS.Ring-like artifacts observed at 1600 cm −1 are attributed to the Stokes pulses that are generated by cascaded FWM and transmitted through the short-pass filter in front of the PD.These parasitic pulses can be blocked by placing a long-pass filter on the FOPO Stokes path.We can differentiate PMMA and PS beads from their signal levels in each SRS image.The SNR of an SRS image is defined by SNR = µ(S)/σ(N), where µ(S) is the signal mean measured in the area with the SRS signal, and σ(N) is the noise standard deviation measured in the background.The SNR of each SRS image is from 9 to 113.The difference comes mainly from the Raman cross section as well as from other factors such as pulse power and duration.The SRS image at 1000 cm −1 exhibits the highest SNR of all images owing to the strong PS signal.The relatively small difference between PMMA and PS at 813 and 1452 cm −1 is due to spectral overlap.Around these wavenumbers, while PS has very weak Raman signals compared to its own strong peaks, such as at ∼1000 cm −1 , Raman signal levels of PMMA and PS are not far apart [43].Next, we performed SRS imaging of fixed HeLa cells to demonstrate the applicability of our SRS microscope to biological imaging.Proteins and lipids are abundant and provide strong Raman signals in the C-H region.In the fingerprint region, a Raman peak at ∼1655 cm −1 is attributed to the amide I band of proteins and acyl C=C band of lipids [13,45].To visualize the distributions of proteins and lipids, we acquired SRS images at 1653, 2850, and 2940 cm −1 .The pump power was 62 mW on the sample plane.The Stokes powers were 36, 48, and 54 mW at 1653, 2850, and 2940 cm −1 , respectively.SRS images of HeLa cells are shown in Fig. 5.In contrast to the SRS image at 2850 cm −1 (Fig. 5(b)), which is associated with the CH 2 bond of lipids, the SRS image at 2940 cm −1 (Fig. 5(c)) exhibits the signal in nuclei, especially in nucleoli, whose signals are attributed to the CH 3 bond of proteins.Compared to the images acquired in the C-H regions, the SRS image at 1653 cm −1 (Fig. 5(a)) has a relatively low SNR because of the small Raman cross section in the fingerprint region.Nevertheless, we can see similar distributions of the SRS signal coming from proteins and lipids in Figs.5(a) and 5(c).These results validate the effectiveness of our SRS imaging system in biomedical applications.
Discussion
The present SRS imaging system has various advantages over previous ones.The FOPO has the potential of faster tuning capability than solid-state OPOs which require temperature adjustment of the nonlinear crystal [18,19].Compared with dispersion-tuning-based FOPOs [31,46], the present FOPO can tune the wavelength without changing the repetition rate, making it easier to synchronize the FOPO with the Ti:sapphire laser.Taking advantage of the low-noise property of the Ti:sapphire laser, SRS imaging can be accomplished without balanced detection, which is required to eliminate excess noise when noisy pulse sources are employed, at the expense of a 3 dB sensitivity drawback.As a result, SRS imaging of a polymer sample and HeLa cells was realized in a moderate pixel dwell time even in the fingerprint region, where the Raman cross section is small.Another notable advantage of our light source is its ability to access low vibrational wavenumbers such as 600 cm −1 or less, due to the independence of pump and Stokes wavelengths.
To improve this SRS imaging system, fast wavelength tuning can be realized by implementing automatic control of the spectral filters and delay stages.Its tuning rate is from a tenth of a second to a second depending on how far the wavelength is changed.It is possible to further reduce the tuning time to several milliseconds by passively compensating the GVD of the PCF to eliminate
Discussion
The present SRS imaging system has various advantages over previous ones.The FOPO has the potential of faster tuning capability than solid-state OPOs which require temperature adjustment of the nonlinear crystal [18,19].Compared with dispersion-tuning-based FOPOs [31,46], the present FOPO can tune the wavelength without changing the repetition rate, making it easier to synchronize the FOPO with the Ti:sapphire laser.Taking advantage of the low-noise property of the Ti:sapphire laser, SRS imaging can be accomplished without balanced detection, which is required to eliminate excess noise when noisy pulse sources are employed, at the expense of a 3 dB sensitivity drawback.As a result, SRS imaging of a polymer sample and HeLa cells was realized in a moderate pixel dwell time even in the fingerprint region, where the Raman cross section is small.Another notable advantage of our light source is its ability to access low vibrational wavenumbers such as 600 cm −1 or less, due to the independence of pump and Stokes wavelengths.
To improve this SRS imaging system, fast wavelength tuning can be realized by implementing automatic control of the spectral filters and delay stages.Its tuning rate is from a tenth of a second to a second depending on how far the wavelength is changed.It is possible to further reduce the tuning time to several milliseconds by passively compensating the GVD of the PCF to eliminate the need for delay stage movement.The tuning range of the FOPO can be widened by optimizing the Yb fiber laser pumping the FOPO so that the laser power is kept high and relatively constant across the tuning range of the fiber laser.The output power of the FOPO can also be improved in two ways.First, suppressing the SPM of FOPO pump pulses by shortening the double-clad gain fiber increases FWM energy conversion efficiency.Second, the output power becomes higher by increasing the pump power incident to the PCF.This power was set much lower than the maximum power (>2 W) in order to ensure a large margin for damage to a PCF facet.
Conclusion
We have developed the SRS imaging system using the widely tunable FOPO.The FOPO provided a maximum output power of 81 mW and a tuning range as broad as 1470 cm −1 .By synchronizing this FOPO and another tunable Yb fiber laser with the Ti:sapphire laser, we realized the SRS imaging system that covers the range from 460 to 1930 cm −1 in the fingerprint region and from 2810 to 3120 cm −1 in the C-H stretching region, without balanced detection.Its broadband imaging capability was verified by SRS imaging at multiple vibrational modes in the two regions.Furthermore, SRS imaging of HeLa cells shows the applicability of our SRS microscope to biological imaging.We expect that this imaging system with the FOPO will expand applications of SRS microscopy in various biomedical fields.
Fig. 1 .
Fig. 1.Schematic of the SRS imaging system.The light sources are a Ti:sapphire laser for the pump pulses and an Yb fiber laser and a FOPO for the Stokes pulses.The Stokes light sources can tune the wavelength across the C-H stretching and fingerprint regions, respectively.The three light sources are all synchronized through an active synchronization of the Yb fiber laser and the Ti:sapphire laser and a passive synchronization of the Yb fiber laser and the FOPO by synchronous pumping.TBPF: tunable bandpass filter, YDFA: Yb-doped fiber amplifier, DC: double clad, DM: dichroic mirror, PCF: photonic crystal fiber, SPF: short-pass filter, HWP: half-wave plate, PBS: polarizing beam splitter, OB: objective, PD: photodiode.
Fig. 2 .
Fig. 2. Characteristics of the Yb fiber laser in the FOPO port.(a) Spectrum at 1036 nm.The spectral width is 0.48 nm.(b) Corresponding intensity autocorrelation trace.The pulse duration is 7.2 ps under the assumption of a Gaussian waveform.
Fig. 3 .
Fig. 3. Characteristics of the FOPO when tuning the wavelength.(a) Spectrum and spectral FWHM, and (b) power.The tuning range is 819-931 nm, which corresponds to 1470 cm −1 .The power is up to 81 mW.(c) Intensity autocorrelation trace at 885 nm.The pulse duration is 2.7 ps under the assumption of a Gaussian waveform.
Fig. 5 .
Fig. 5. SRS imaging of fixed HeLa cells in the fingerprint and C-H regions.SRS images at (a) 1653 cm −1 in the amide I band of proteins and the acyl C=C band of lipids, (b) 2850 cm −1 in the CH 2 band of lipids, and (c) 2940 cm −1 in the CH 3 band of proteins and lipids.Scale bar: 10 µm.Pump power: 62 mW, Stokes power: 35-54 mW.Pixel dwell time: 4 µs.No averaging. | 5,951.4 | 2024-04-09T00:00:00.000 | [
"Engineering",
"Physics",
"Medicine",
"Materials Science"
] |
Quasielastic Charged-Current Neutrino-Nucleus Scattering with Nonrelativistic Nuclear Energy Density Functionals
Charged-current neutrino-nucleus scattering is studied in the quasielastic region with the KIDS (Korea-IBS-Daegu-SKKU) nuclear energy density functional. We focus on the uncertainties stemming from the axial mass and the in-medium effective mass of the nucleon. Comparing the result of theory to the state-of-the-art data from MiniBooNE, T2K, and MINER$\nu$A, we constrain the axial mass and the effective mass that are compatible with the data. We find that the total cross section is insensitive to the effective mass, so the axial mass could be determined independently of the uncertainty in the effective mass. Differential cross sections at different kinematics are, on the other hand, sensitive to the effective mass as well as the axial mass. Within the uncertainty of the axial mass constrained from the total cross section, dependence on the effective mass is examined. As a result we obtain the axial mass and the effective mass that are consistent with the experimental data.
I. INTRODUCTION
Measurement of the neutrino-nucleus (ν −A) scattering cross section in the last decade at MiniBooNE [1][2][3][4][5], MINERνA [6][7][8][9] and T2K [10,11] has improved the accuracy of the data dramatically, so the era of precision neutrino physics is dawning. One major purpose of the experiments is to resolve long-standing puzzles such as the neutrino mass, flavor oscillation, and CP violation in the leptonic sector. Success of the forthcoming experiments is expected to identify the limit of the standard model more stringently and lead to a new physics beyond the standard model. Interaction of the neutrino with nuclei plays a crucial role in understanding the result of the experiment. For a precise measurement of the standard model physics, uncertainties stemming from both hadronic and nuclear structures should be understood correctly, and should be reduced as much as possible. Those uncertainties also play a critical role in the interaction of neutrinos with nuclear matter at finite density and temperature, which has an essential consequence in the explosion of supernovae, and thermal evolution of the neutron star.
KIDS (Korea-IBS-Daegu-SKKU) nuclear energy density functional (EDF) was initiated with a prospect to construct a nuclear model in which finite nuclei and infinite nuclear matter can be described to desired accuracy within a single framework. A series of works applied the model to nuclear matter and nuclei [12][13][14]. The results showed that a unified description of nuclei and nuclear matter is feasible by expanding nuclear EDF in the power of the Fermi momentum. Combining the nuclear data and the neutron star observations, parameters in the symmetry energy could be constrained within narrow ranges [15][16][17].
Extending the range of application, we considered quasielastic electron scattering off nuclei with the nuclear wave functions obtained with KIDS EDF [18,19]. Without any adjustment of the model parameters to scattering data, KIDS EDF reproduces the experimental data accurately. Uncertainties in the nuclear structure arising from the nucleon effective mass in nuclear medium and the symmetry energy have been explored in detail. Some results turn out to depend on the effective mass sensitively, so it is demonstrated that the electron scattering could be a tool to constrain the effective mass of the nucleon in nuclear medium.
Stimulated by the success in the electron scattering, we apply the KIDS EDF to the ν −A scattering, and explore the uncertainty due to the in-medium effective mass of the nucleon and the axial mass, simultaneously. Nuclear wave functions are obtained by solving Hartree-Fock equations in which nonrelativistic nuclear potentials are imported from the KIDS EDF.
Role of the effective mass is examined by using four models KIDS0, KIDS0-m*77, KIDS0m*99 and SLy4, in which isoscalar and isovector effective masses at the saturation density are (µ s , µ v ) = (1.0, 0.8), (0.7, 0.7), (0.9, 0.9) and (0.7, 0.8), respectively in the unit of free nucleon mass. Axial mass is defined in terms of the form factor slope at four-momentum denotes the axial form factor of the nucleon. Dependence on the axial mass is considered by employing a standard value M A = 1.032 GeV, and a large value M A = 1.30 GeV.
In the result we find that in several kinematic conditions, the effect of the effective mass appears to be clear, and the result agrees with data better when the isoscalar effective mass at the saturation density is close to the free mass. On the other hand, when the difference due to the effective mass is small, theoretical results agree well with the data regardless of the effective mass. Contribution of the axial mass is discriminated well in the total cross section of the neutrino. Large axial mass M A = 1.30 GeV reproduces the MiniBooNE data better than the standard value M A = 1.032 GeV. Interestingly the total cross section is insensitive to the effective mass, so the role of the axial mass can be singled out and probed without being interfered by other uncertainties. In the comparison of the differential cross section, large axial mass combined with large effective mass gives better agreement to data on average. However more accurate measurements are demanded to constrain the axial mass with the differential cross sections.
In the present paper, the formalism of the charged-current (CC) ν − A scattering is briefly introduced in Sec. II, and Section III presents the results and discussion. Finally, we summarize the work in Sec. IV.
II. FORMALISM
The ν(ν) − A scattering is described by the connection of the electromagnetic interaction and weak interaction. In order to calculate the ν(ν) − A scattering, we choose that the target nucleus is seated at the origin of the coordinate system.
, and p µ = (E N , p) represent the four-momenta of the incident neutrino, outgoing neutrino, target nucleus, the residual nucleus, and the knockedout nucleon, respectively. For the CC reaction in the laboratory frame, the inclusive cross section is given by the contraction between lepton and hadron tensors: (1) where M N is the nucleon mass in free space, θ l denotes the scattering angle of the lepton, θ N is the polar angle of knocked-out nucleons, T N is the kinetic energy of the knocked-out nucleon, and h = −1 (h = +1) corresponds to the intrinsic helicity of the incident neutrino (antineutrino). The R L , R T and R ′ T are longitudinal, transverse, and transverse interference response functions, respectively. Detailed forms for the kinematical coefficients v and the corresponding response functions R are given in Refs. [20,21]. The squared four-momentum transfer is given by For the CC reaction, the kinematic factor σ W ± M is defined by where M W is the rest mass of W -boson, and M l is the mass of an outgoing lepton. θ C represents the Cabibbo angle given by cos 2 θ C ≃ 0.9749. G F denotes the Fermi constant.
The recoil factor f rec is written as The nucleon current J µ represents the Fourier transform of the nucleon current density written as whereĴ µ is a free weak nucleon current operator, and ψ p and ψ b are wave functions of the knocked-out and the bound state nucleons, respectively. The wave functions are generated with the same approach as the previous work [19]. For a free nucleon, the current operator of the CC reaction consists of the weak vector and the axial vector form factors: By the conservation of the vector current (CVC) hypothesis, the vector form factors for the , are expressed as The axial form factors for the CC reaction are given by with g A = 1.262 and two values 1.032 GeV and 1.30 GeV are assumed for M A .
The induced pseudoscalar form factor is parameterized by the Goldberger-Treimann re- where m π is the pion mass. But the contribution of the pseudoscalar form factor vanishes for the neutral-current reaction because of the negligible final lepton mass participating in this reaction.
III. RESULT
With the KIDS EDF model, we calculate the various differential cross sections and total cross sections in the quasielastic CC ν − A scattering off 12 C, and compare the result with MiniBooNE, MINERνA, and T2K data. In order to obtain the wave functions of bound and final nucleons from nonrelativistic nuclear model, the relativistic wave functions are generated by using the nonunitary transformation [18,19,22,23]. For the Coulomb distortion of the final lepton, the same approximation exploited by the Ohio group [24] is used. In these neutrino experiments, the energy of the incident neutrino cannot be fixed but has an energy spectrum, so the cross sections have to be averaged over the flux of the incoming neutrino beam. of the effective mass appears to be clear, and the results of the large effective mass agree with the data better than the small ones.
In Fig. 3, the flux-averaged double-differential cross sections are shown for the outgoing muon antineutrino in terms of p T , where p T and p represent the transverse and longitudinal component of the muon momentum with respect to the incident antineutrino beam, respectively. According to Ref. [9], this kinematics was exploited to include the nuclear effects in the ν − A scattering like the final state interaction, meson production, and so on.
In this work, we show the results of low momenta because the inelastic processes like meson production are excluded and the numerical difficulty is avoided due to partial-wave expan- Recently, a new experiment was performed at MiniBooNE [5] with monoenergetic muon neutrinos at 236 MeV, which are created when a positive kaon at rest decays, called kaondecays-at-rest (KDAR). We calculate the differential cross sections in terms of the kinetic energy of the outgoing muon and compare the result with the data in Fig. 6. The red solidus part is with shape-only 1σ error band, where σ denotes total cross section and yields σ = (2.7 ± 1.2) × 10 −39 cm 2 /neutron. The legend of the curve is the same as Fig. 1. In the low incident neutrino energies, the effect of the M A is small, so gives a difference less than 10 %. The cross sections of small effective mass are out of the data around the peak for both M A =1.032 GeV and 1.30 GeV.
IV. SUMMARY
Charged-current quasielastic scattering of the neutrino and antineutrino with 12 C target has been considered within a nonrelativistic nuclear density functional theory. Both the wave functions of bound nucleon in 12 C nucleus and final state interactions of the outgoing nucleons are obtained by using the effective nuclear potentials obtained from KIDS EDF. Parameters of the KIDS EDF have been fixed to satisfy well-defined nuclear matter properties and nuclear data, and there is no calibration of the model to scattering data. We found that the model reproduces experimental data at various kinematics very well.
At the same time, dependence on the in-medium effective mass of the nucleon and its axial mass is identified clearly. Dependence on the effective mass is probed by using two groups of models, one group with isoscalar effective mass close to the free mass (µ s ≃ 1), and the other group with µ s ≃ 0.7. Comparisons with the data from T2K and MINERνA Collaborations are crucial in diagnosing the effect of the effective mass. Results of the KIDS EDF are in good agreement with the T2K and MINERνA data with µ s ≃ 1. It is also confirmed that the dependence on the effective mass is dominated by the isoscalar effective mass, and the role of the isovector effective mass can be neglected. We observed the same behavior in the quasielastic electron scattering, in which µ s ≃ 1 models agree with the data better than the µ s ≃ 0.7 models.
In the comparison with the MiniBooNE data, role of the effective mass becomes less dominant compared to the T2K and MINERνA data, but the effect of the axial mass becomes crucial. A highlighting result is the total cross section of the neutrino, where the standard value of the axial mass M A = 1.032 GeV fails to reproduce the neutrino data. With M A = 1.30 GeV, theory results reside within the experimental uncertainty. It is notable that the total cross section of the antineutrino is insensitive to M A . More importantly, total cross sections depends on the effective mass very weakly, so they provide a unique opportunity to constrain the uncertainty of the axial mass. The effect of the axial mass M A is small at low incident energies of the neutrino, but the effect increases with higher neutrino energies. It is argued that the axial mass could be interpreted to play a role to subsume the higher-order contributions such as the multi-meson-exchange currents or multi-particlemulti-hole processes. We assumed impulse approximation in the calculation. It seems that large M A values are favorable if the transition matrix elements are evaluated in the impulse approximation. | 3,014.2 | 2022-03-31T00:00:00.000 | [
"Physics"
] |
Adaptation of Postural Sway in a Standing Position during Tilted Video Viewing Using Virtual Reality: A Comparison between Younger and Older Adults
This study aimed to investigate the effects of wearing virtual reality (VR) with a head-mounted display (HMD) on body sway in younger and older adults. A standing posture with eyes open without an HMD constituted the control condition. Wearing an HMD and viewing a 30°-tilt image and a 60°-tilt image in a resting standing position were the experimental conditions. Measurements were made using a force plate. All conditions were performed three times each and included the X-axis trajectory length (mm), Y-axis trajectory length (mm), total trajectory length (mm), trajectory length per unit time (mm/s), outer peripheral area (mm2), and rectangular area (mm2). The results showed a significant interaction between generation and condition in Y-axis trajectory length (mm) and total trajectory length (mm), with an increased body center-of-gravity sway during the viewing of tilted VR images in older adults than in younger adults in both sexes. The results of this study show that body sway can be induced by visual stimulation alone with VR without movement, suggesting the possibility of providing safe and simple balance training to older adults.
Introduction
Fractures due to falls in older adults can significantly limit their daily lives, making it a global challenge [1].Balance improvement is an important factor in fall prevention [2].Exercises using traditional balance discs or balls have demonstrated improvements in balance capabilities.Such training activates the postural control responses required to maintain stability on an unstable surface, where the body's center of gravity sways forward, backward, and sideways on the stable board.Consequently, the body's ability to balance external interference is enhanced [3].However, for many older adults, maintaining a static standing posture may be challenging, and balance exercises that are too difficult may diminish the learning effect [4].In contrast, if the difficulty level of balance exercises is too easy, the advantages of learning are insufficient [5].Therefore, there is a need to develop postural control exercises that allow for gradual adjustment of difficulty levels and maintain participant engagement and enjoyment.
The virtual reality (VR) technology uses a head-mounted display (HMD) to present 360 • images by visually placing users in a virtual space.One of the attractive features of VR is that it can provide motor-related stimulation to the brain with relative safety [6].Promoting physical activity (PA) through exercise or interactive virtual games using VR has emerged as a potential technology for improving balance, posture, gait, and overall health in older adults [7,8].A systematic review has also shown that training in VR using an HMD could be useful for fall prevention and postural control in older people; however, there are concerns that the research in older people is of poor quality due to challenges in ensuring safety and a high risk of bias [9].Urabe et al. used an HMD to demonstrate fluctuations in the center-of-gravity sway during a static standing posture in younger adults [10].This study was a milestone compared to conventional balance training in terms of safety, which induces center-of-gravity sway during stationary standing without physical movement but was limited to a young cohort.Therefore, it is necessary to examine whether similar results apply to older adults who are at high risk of falling in their daily lives.The 2020 Scoping Review reported that attempts to use HMD to improve balance in older adults have great potential [11]; however, there are still few reports on their impact.
In addition, whether changes in postural balance are influenced by sex has been debated.According to recent sex-difference analyses in older adults, the decline in static balance capacity appears to be greater in men than in women [12,13].For example, older men demonstrate decreased accuracy in center-of-gravity positioning, especially with somatosensory and visual deprivations, which are associated with altered postural control strategies [14].However, to the best of our knowledge, the effects of VR interventions on sex differences in postural control have not yet been examined.Previous studies focusing on similar scenarios to improve ADLs in older people through the application of pioneering devices have increased in recent years [15,16].Our research has the potential to serve as a basic database for providing balance training that is more familiar and easier to use for older people.
The purpose of this study was to investigate the effects of viewing tilted VR images while wearing HMD on the displacement of the body's center of gravity while standing in younger and older adults.A scene without an HMD (real world) was set as a control condition to confirm the effect of wearing an HMD on the displacement of the body's center of gravity.A sex comparison was also conducted, with the hypothesis that men would show a more pronounced center of gravity sway than women.
Study Design and Participants
This was an observational cross-sectional study.Here, 20 younger adults (10 women) aged 18-29 years and 34 older adults (24 women) aged >65 years participated in the study.Younger adults were recruited from among those who had read research posters at Hiroshima University and were willing to participate in the study.To recruit older participants, we reached out to representatives of welfare facilities in Hiroshima Prefecture to request research cooperation.We identified exercise communities in different regions of Hiroshima.Among them, those who expressed a willingness to cooperate were included in the study.All older participants belonging to these exercise communities were attending a weekly gymnastics class.The exclusion criteria were as follows: (1) visual impairment (total blindness and low vision); (2) diseases that may affect balance function (Parkinson's disease, vertigo, Meniere's disease, and dysfunction of the semicircular canal or inner ear); (3) difficulty viewing VR images due to poor health or physical pain on the day of measurement; (4) sensitivity to light stimuli; and (5) history of image sickness.
The sample size was calculated using G* power 3.1, with a one-way repeated-measures (ANOVA) test (effect size = 0.25 [medium], alpha error = 0.05, power = 0.80, number of groups = 6, number of measurements = 3, Corr among rep measures = 0.5, Nonsphericity correction ε = 1) [17].Due to its novelty and lack of precedent, average figures were applied to the power analysis.Based on this test, a minimum of 54 participants was required for the study.
This study conformed to the guidelines of the Declaration of Helsinki and was approved by the Ethics Committee for Epidemiology of Hiroshima University (E-2299).Informed consent was obtained from all study subjects.
Assessment of Physical Activity
Physical activity and sedentary time were assessed using the International Physical Activity Questionnaire-Short Form (IPAQ-SF) [18].We assessed vigorous PA, moderate PA, and average walking time per week, and total PA was subsequently calculated (Mets*mins/week).The participants were also questioned about their sedentary time on weekdays.
Study Setting
All assessments were conducted under three conditions: a quiet environment where no additional external information such as visual or auditory stimuli interferes; a resting standing posture with eyes open without an HMD (Oculus Quest 2, Meta Inc., California, United States) (control condition); a resting standing posture while wearing an HMD and viewing a VR image tilted by 30 • (VR30 • ) gradually; and a resting standing posture while wearing an HMD and viewing a VR image that gradually tilts 60 • (VR60 • ) (Figure 1).For the tilt of the VR image, an image of an outdoor landscape seen from a room on the 9th floor of Hiroshima University was captured beforehand with a 360 • camera (Key Mission 360, Nikon, Tokyo, Japan).In the control condition, the participants were instructed to stand facing a blank wall and stare at a target placed 2 m in front of them at eye level.In the VR30 • and VR60 • conditions, visual input was provided only from the glasses, without any other peripheral visual input, so that the gaze was maintained on the VR screen.The speed of image tilt was 3 • /s for VR30 • and 6 • /s for VR60 • .Each condition was performed every alternate day in a randomly assigned order using a computerized random number.
Assessment of Physical Activity
Physical activity and sedentary time were assessed using the International Physical Activity Questionnaire-Short Form (IPAQ-SF) [18].We assessed vigorous PA, moderate PA, and average walking time per week, and total PA was subsequently calculated (Mets*mins/week).The participants were also questioned about their sedentary time on weekdays.
Study Setting
All assessments were conducted under three conditions: a quiet environment where no additional external information such as visual or auditory stimuli interferes; a resting standing posture with eyes open without an HMD (Oculus Quest 2, Meta Inc., California, United States) (control condition); a resting standing posture while wearing an HMD and viewing a VR image tilted by 30° (VR30°) gradually; and a resting standing posture while wearing an HMD and viewing a VR image that gradually tilts 60° (VR60°) (Figure 1).For the tilt of the VR image, an image of an outdoor landscape seen from a room on the 9th floor of Hiroshima University was captured beforehand with a 360° camera (Key Mission 360, Nikon, Tokyo, Japan).In the control condition, the participants were instructed to stand facing a blank wall and stare at a target placed 2 m in front of them at eye level.In the VR30° and VR60° conditions, visual input was provided only from the glasses, without any other peripheral visual input, so that the gaze was maintained on the VR screen.The speed of image tilt was 3°/s for VR30° and 6°/s for VR60°.Each condition was performed every alternate day in a randomly assigned order using a computerized random number.
Assessment of Center of Gravity
A force plate (T.K.K. 5810, Takei Measuring Instruments, Inc., Niigata, Japan) was used to assess the center of gravity.The force plate sampling frequency was 100 Hz.The center of gravity was assessed simultaneously during the implementation of each condition.The participants maintained a standing posture on a force plate with both feet shoulder-width apart and both arms on the side of the body.To ensure safety, two assistants were placed on either side of the participant during the measurement.
Assessment of Center of Gravity
A force plate (T.K.K. 5810, Takei Measuring Instruments, Inc., Niigata, Japan) was used to assess the center of gravity.The force plate sampling frequency was 100 Hz.The center of gravity was assessed simultaneously during the implementation of each condition.The participants maintained a standing posture on a force plate with both feet shoulder-width apart and both arms on the side of the body.To ensure safety, two assistants were placed on either side of the participant during the measurement.
On the force plate, the following six parameters were measured: X-axis trajectory length (mm), Y-axis trajectory length (mm), total trajectory length (mm), trajectory length per unit time (mm/s), outer peripheral area (mm 2 ), and rectangular area (mm 2 ) [19].Participants performed three trials for each condition.The X-axis trajectory length (in mm) denotes the distance covered horizontally, whereas the Y-axis trajectory length (in mm) represents the vertical distance covered by a point or object.The total trajectory length (mm) combines both the X-and Y-axis movements.The trajectory length per unit time (mm/s) was used to measure the speed of movement.The outer peripheral area (mm 2 ) refers to the region enclosed by the outermost points, and the rectangular area (mm 2 ) signifies the space enclosed by the outer boundary of an object or trajectory.
Statistical Analysis
Statistical analysis was performed using SPSS software (version 27.0; SPSS Japan Inc., Tokyo, Japan).The normality of all variables was confirmed using the Shapiro-Wilk test.To compare the physical characteristics and physical activities between the younger and older adults, either an unpaired t-test or Mann-Whitney U test was employed.To investigate the intersection effect of COP movement, a two-way repeated-measures analysis of variance was conducted with the age group (younger and older adults) as the between-subject factor and condition (control, VR30 • , VR60 • ) as the within-subject factor.If interaction effects were observed, post-hoc tests were conducted using unpaired t-tests for Generation and the Bonferroni test for condition.A Mann-Whitney U test was performed to compare sex differences in the COP movement.The significance level was set at 5%.
Results
In this study, none of the participants withdrew from the VR intervention because of simulator sickness.Regarding the participants' descriptive statistics (Table 1), significant differences existed in age, height, and weight between the younger and older adult groups.In terms of PA levels, significant between-group differences were observed in total PA (Mets*mins/week), vigorous PA (Mets*mins/week), moderate PA (Mets*mins/week), walking (Mets*mins/week), and sedentary time (min/day).Regarding the results of center of pressure (COP) movement (Table 2), significant interaction effects of generation and condition were observed for the Y-axis trajectory length (mm) (F = 3.436, p = 0.036) and total trajectory length (mm) (F = 7.878, p < 0.001).The participant generation significantly influenced the Y-axis and total trajectory lengths (p = 0.045 and p < 0.001, respectively).This condition also had a significant effect on the Y-axis and total trajectory length (p < 0.001).Post-hoc tests revealed that the Y-axis trajectory length was significantly greater in younger adults at −5.58 [−7.12 to −1.14] mm for VR60 • than the 2.05 [−5.20 to 5.33] mm in the control (p < 0.05).In the VR60 • condition, there was a significant difference in the Y-axis trajectory length between −5.58 [−7.12 to −1.14] in younger adults and 1.08 [−4.08 to 3.91] in older adults (p < 0.05).In terms of total trajectory length, the values of 157.45 [125.53-249.02]for the VR30 • condition and 188.85 [112.75-283.36]for the VR60 • condition were both significantly greater than those for the control in older adults, whereas no differences between conditions were observed in younger adults (p < 0.05, respectively).In generational comparisons, the total trajectory length was greater for the older adults than for the younger adults under all conditions (p < 0.001, respectively).A comparison of COP movements between men and women is presented in Table 3.For the X-axis trajectory length, women had significantly larger values than men in the VR30 • and VR60 • conditions in younger adults, whereas men had larger values than women among older adults (p = 0.043 and p = 0.015, respectively).Focusing on women, significant differences were observed in the X-axis trajectory length between younger and older adults in the VR30 • and VR60 • conditions (p = 0.002 and p = 0.001, respectively).In terms of the Y-axis trajectory length, men showed significantly greater values than women at VR60 • (p = 0.010).In women, there was a significant difference in the Y-axis trajectory length between younger and older adults in the VR60 • condition (p = 0.023).For total trajectory length, older adults showed significantly greater results than younger adults in the control, VR30 • , and VR60 • conditions in men (p = 0.035, p < 0.001, and p = 0.002, respectively) and women (p = 0.010, p < 0.001, and p < 0.001, respectively).The trajectory length per unit time in older men (p = 0.035, p < 0.001, and p = 0.002, respectively) and women (p = 0.006, p = 0.004, and p < 0.001, respectively) under the control, VR30 • , and VR60 • conditions were significantly greater than those in younger adults.When examining the outer peripheral area, older adults showed larger values than younger adults in women under the VR30 • and VR60 • conditions (p = 0.046 and p = 0.013, respectively).
Discussion
As the world population ages, preventing the negative consequences of falls has become a critical public health mission [1].This study aimed to investigate how the tilt of images displayed through HMD affected the displacement of the center of mass in both younger and older individuals.The results revealed that older adults experienced greater displacement of their center of mass due to the tilt of the VR images than younger individuals.Furthermore, the tilt of the VR images at 60 • had a greater impact on the COP than that at 30 • .To the best of our knowledge, this is the first study to capture the displacement of the center of mass solely through the tilt of VR images without accompanying physical movements using HMD technology.
The most important finding of this study was that the total trajectory length was greater in the VR condition for older adults than for younger adults.Balance impairments are known to be proportional to aging, and a similar trend was observed in our validation using VR.A cross-sectional validation study by Imaoka et al. also reported that postural movements were greater in older adults than in younger adults after viewing VR [20].There are two main explanations for the differences in postural control between younger and older adults.One explanation for this is the effect of aging on physical function.A 2015 systematic review showed a relationship between age-related muscle weakness and a reduced ability to control posture [21].Our results also showed that in the control condition without VR, older adults had a greater total trajectory length than younger adults.Another possible explanation is the relationship between aging and visual feedback.A previous study using optic flow stimulation found that adaptation to visual stimuli in the standing position was longer in older adults than in younger adults [22].Another study on optic flow stimulation during treadmill walking found that the visual attention required to process visual flow during walking was delayed in older adults than in younger adults [23].Thus, these and other declines in perceptual inhibition in older adults could contribute to an increased body center-of-gravity sway in response to slowly changing VR image tilts.
According to previous studies, men have been reported to exhibit greater oscillations in their body's center of mass, which may stem from differences in anthropometric characteristics, such as height, between the sexes [12,24].With increased height, the center of mass of the body tends to increase, thereby necessitating more postural control strategies in men than in women.Although men tend to be taller than women from puberty onward, contrasting results were obtained in our study's younger cohort, in which women showed greater oscillations in their body's center of mass.An individual's sense of immersion when viewing a tilted VR image is notable.The Simulator Sickness Questionnaire 2 (SSQ) score, which is often used in VR research to assess sickness when using HMD, has shown inconsistent results for sex differences.In some studies, women are more susceptible to VR sickness than men, reporting higher SSQ scores [25][26][27].However, Lawson (2014), based on a review of 46 previous studies, found that claims of sex differences in VR sickness were inconclusive [28].These may explain the discrepancy between men and women in the X-axis trajectory lengths obtained in this study.
The failure to measure SSQ to assess VR immersion is a limitation of this study.Various factors influence sex differences between immersions when using HMD, such as women's hormone levels [29] and history of motion sickness [30], which need to be considered to examine sex differences in future VR-induced sway of the body center of gravity.
The second limitation is that the data did not encompass a wide range of age groups.Third, an assessment of immersiveness in the VR intervention was not included.Finally, regarding the interpretation of postural sway through the HMD, evaluations of parameters such as neuromuscular, sensory, and proprioceptive receptors were not performed.
The present study indicates that watching a tilting VR movie is sufficient to induce center-of-gravity sway in elderly people.Therefore, we suggest that viewing a tilting VR movie can be used in the future as a balance exercise that makes it easy to adjust the level of difficulty.However, to make it a more effective training method, future studies should consider adopting other latest technologies.For example, in the field of artificial intelligence, it is possible to estimate human posture online [31,32], and balance information during VR movie viewing can be feedbacked to make posture control exercises for the elderly by viewing tilting VR movies even more effective.
Conclusions
In conclusion, wearing VR head-mounted goggles and viewing tilting VR images increased body center-of-gravity sway more in older adults than in younger adults during standing posture in both sexes.Our study suggests that the use of HMD may be a safe and convenient way to provide balance training to older adults in the future, without physical movements.
Figure 1 .
Figure 1.Three conditions were used in this study: (a) control condition without a head-mounted display; (b) VR30° condition while viewing a 30°-tilted image with a head-mounted display; and (c) VR30° condition while viewing a 60°-tilted image with a head-mounted display.
Figure 1 .
Figure 1.Three conditions were used in this study: (a) control condition without a head-mounted display; (b) VR30 • condition while viewing a 30 • -tilted image with a head-mounted display; and (c) VR30 • condition while viewing a 60 • -tilted image with a head-mounted display.
Table 1 .
Descriptive statistics of participants.
Table 1 .
Cont.Data are expressed as means ± standard deviation or medians [interquartile range], a means p-value for paired t-test, b means p-value for Mann-Whitney U test, IPAQ-SF International Physical Activity Questionnaire.
Table 2 .
Results of two-way analysis of the center of pressure movement.Medians [interquartile range], η 2 : Eta-squared, VR Virtual Reality, * post-hoc test (p < 0.05) compared to control in younger adults, † post-hoc test (p < 0.05) compared to control in older adults, ‡ post-hoc test (p < 0.05) between younger and older adults.
Table 3 .
Results of COP movement in younger and older adult men and women. | 4,956 | 2024-04-24T00:00:00.000 | [
"Medicine",
"Engineering",
"Computer Science"
] |
PERFORMANCE ANALYSES AND EVALUATION OF CO2 AND N2 AS COOLANTS IN A RECUPERATED BRAYTON GAS TURBINE CYCLE FOR A GENERATION IV NUCLEAR REACTOR POWER PLANT
As demands for clean and sustainable energy renew interests in nuclear power to meet future energy demands, Generation IV nuclear reactors are seen as having the potential to provide the improvements required for nuclear power generation. However, for their benefits to be fully realised, it is important to explore the performance of the reactors when coupled to different configurations of closed-cycle gas turbine power conversion systems. The configurations provide variation in performance due to different working fluids over a range of operating pressures and temperatures. The objective of this paper is to undertake analyses at the design and off-design conditions in combination with a recuperated closed-cycle gas turbine and comparing the influence of carbon dioxide and nitrogen as the working fluid in the cycle. The analysis is demonstrated using an in-house tool, which was developed by the authors. The results show that the choice of working fluid controls the range of cycle operating pressures, temperatures and overall performance of the power plant due to the thermodynamic and heat properties of the fluids. The performance results favored the nitrogen working fluid over CO 2 due to the behavior CO 2 below its critical conditions. The analyses intend to aid the development of cycles for Generation IV
INTRODUCTION
Nuclear energy is plays an important role in providing clean energy to mitigate the increasing world energy demand [1], with over 400 units of nuclear reactors in service around the world.In addition, various development projects are currently running [2,3] as part of efforts to improve on the limitations of currently deployed nuclear power plants.The on-going research into Generation IV (Gen IV) aims to improve the design and performance of the next generation nuclear reactor technologies [4].However, for the benefits to be fully realised, the design and performance has to be explored.This is achieved using different closed gas turbine cycles, which utilise different working fluids over a range of operating pressures and temperatures.Hence, the foremost consideration as an initial step in to the successful development and deployment of this technology is performance simulations.
Performance simulation is a necessary step in the planning, execution, analyses and evaluation of operations specific to nuclear power plant designs.The purpose is to minimise risks and cost of development.
The Gen-IV systems applicable to this analysis are the Very High-Temperature Reactors (VHTR) and Gas-cooled Fast Reactors (GFR) concepts.Both reactors are high-temperature gas cooled, with core outlet temperatures between 750 0 C and 950 0 C. The GFRs uses a fast-spectrum core, while the VHTRs utilize graphite moderation in the solid state.With regards to a coolant such as helium, it brings several benefits to plant operations such as chemical inertness, single phase cooling and neutronic transparency [5][6][7].However, adopting other working fluids and mixtures for reactor cooling such as carbon dioxide, nitrogen, argon have been proposed in different studies [8,9].There are planned and on-going developmental projects for GFR and VHTR which focus on testing the basic concepts and performance phase validation [4,10,11].
The objective of this paper is to undertake performance analyses at the design and off-design conditions for a Generation IV nuclear-powered reactor with a recuperated closed-cycle gas turbine configuration.The effects of carbon dioxide and nitrogen as working fluid will also be analysed in the recuperated cycle loop.The analyses is carried out using an in-house modelling and simulation tool, which was developed by the authors for closed-cycle gas turbine simulations [12].The results suggests that the choice of working fluid greatly influences the range of cycle operating pressures, temperatures and overall performance of the power plant due to the thermodynamic and heat properties of the fluids.However, the choice of working fluid for the proposed Gen IV system is dictated by availability, material compatibility, and thermal stability [13][14][15].
BRIEF DESCRIPTION OF THE POWER PLANT CASE STUDY
The Gen-IV system in this study utilises an indirect heat source configuration with the recuperated closed-cycle gas turbine as shown in Fig. 1.Using an indirect configuration provides flexibility that allows the same working fluid or a different fluid from that of the reactor to be used. .A growing research interest into the use carbon dioxide (CO2) and nitrogen (N2) [18][19][20] is prompted by the research whereby helium is utilised as working fluid for the closed-cycle gas turbine as noted in [10,11,16,17].These studies show that the low molecular weight of helium affects the size and number of stages in the gas turbine turbomachinery set [6,15].Furthermore, the aerodynamic and sealing design of its turbomachinery component presents challenges.Nonetheless, these have been mitigated as described in the referenced literature.The use of other working fluid alternatives provided additional mitigation and provided the justification to carrying out performance studies on both fluids selected.Concerns relating to the safety and operation of the plant using a working fluid that is different to the reactor coolant such as the chemistry and compatibility have been discussed in [21].
The recuperated closed-cycle that is shown in Fig. 1, utilises some of the heat from the turbine exhaust to preheat the working fluid prior to entering the gas heater.Thus, this allows more working fluid to pass through, thereby increasing the overall efficiency at every pressure ratio whereby recuperation is possible.The reference design point variables that were chosen for the plant system is listed in Table 1.The studies assumed that the heat source transfers a fixed heatrate to the working fluids at some specified temperature.Overall system pressure loss of 7% was assumed (recuperation (ReX) 3%, pre-cooler (PC) 2% and gas heater (GH) 2%.The mechanical efficiency was taken as 98%, and the heat sink temperature was assumed to be 21 0 C.For consistency purposes, the same values for the turbomachinery and heat exchangers component efficiencies have been assumed for each working fluid.The overall performance is a function of the individual components of the Generation IV nuclear power plant [22].The performance parameters were determined at design point.The off-design condition was simulated by changing operating design point variables such as compressor inlet temperature and pressures, turbine entry temperature or core outlet temperature, and turbine exit pressure.An estimation of the properties of the working was modeled using empirical correlations and coefficients which were compared with NASA SP-273 [23].The thermodynamic equations implemented in within the tool for the assessment of the recuperated closed-cycle case study are given as follow: Turbo-set: This includes the compressor and the turbine.The behavior of the turbo-set is described with dimensionless parameters such as corrected mass flow, corrected speed, pressure ratio, component efficiencies and work functions.These parameters are plotted on graphs with lines of pressure ratio against corrected mass flow for different corrected speed lines and contour lines of constant efficiency.It is essential when expressing these parameters that the properties of the working fluid are taken into consideration, which is expressed as: Where,
= , =
The compressor exit temperature is given by the expression The compressor exit pressure is derived from the given pressure ratio as: The compressor work (CW), is a product of the mass flow, specific heat capacity at constant pressure and the overall temperature rise in the compressor.This is given as: Similarly, turbine exit temperature is given by: And turbine work (TW) is expressed as: The turbine discharge pressure ratio is calculated using Eq (7) Heat Exchangers: The heat exchangers which include the recuperator, gas heater and pre-cooler were modeled using the ɛ-NTU method and a counter-flow shell and tube configuration was assumed.The ɛ-NTU method was used since the inlet condition (temperature and pressure) of the fluid stream can be easily obtained and simplifies the iteration involved in predicting the performance of the flow arrangement.This method is fully described in references [24,25].The approach also assumes that the heat exchanger effectiveness is known and the pressure losses are given.Therefore, the effectiveness of the heat exchanger is the ratio of the actual heat transfer rate to the thermodynamically limited maximum heat transfer rate available in a counter flow arrangement. Where, For counter flow shell and tube heat exchangers, number of transfer unit (NTU) is given by: Where, * = = (12) The inlet and out pressures of the heat exchangers were calculated from the relative pressure losses given by: Where ∆ is the percentage (%) pressure loss Reactor Model: The reactor was modeled as a heat source supplying reactor thermal power at a specified temperature and efficiency.The heat gained is given by: The heat source pressure loss is calculated in a similar way as shown in Eq. ( 14).The power plant thermodynamic states of temperature and pressure at all components were obtained by solving Eqs. ( 1) - (15) Cycle Performance Calculation: The overall plant cycle assessment is represented as shaft output power (SOP), specific output power (SP), and cycle thermal efficiency.These are given by the following equations: The capacity of the plant is represented as specific power (SP), given by: The cycle thermal efficiency (%) is given by: Component Matching: Component matching refers to the interactions between the gas turbine components which satisfies the engine matching conditions of mass and energy conservation to produce the system operating line.To be able to predict an accurate design and off-design point performance of the closedcycle gas turbine would require matching of both the turbomachinery and heat exchangers.The following relationships in equations ( 19) -( 23) are realized in order to obtain maximum matching in the recuperated closed-cycle system shown in fig.1.The matching process is comprehensively discussed in references [9,26,27] Map Scaling: The maps for different components were obtained using multi-fluid scaling methods which multiplies scaling factors derived at design point to the original component map at the off-design point.The following equations were used to obtain the scaling factor for off-design assessment. Where, Similarly, the pressure ratio scaling factor is obtained as: Component efficiency scaling factor is given by:
RESULTS AND DISCUSSION Optimum Pressure ratio:
It can be observed from Fig.
( 2) that up to a certain point, there is a positive benefit in terms of cycle efficiency due to recuperation.After the limit is reached, a drop in cycle efficiency is observed regardless of increases in pressure ratio.The optimum pressure ratios for which the cycle efficiencies are maximum for both cycles are different for a given overall temperature ratio.The curves also show that the maximum feasible pressure ratio occurs when the compressor exit temperature equals the turbine inlet temperature.The optimum pressure ratios for the maximum cycle efficiency occur at 3.0 for N2 and 4 for CO2.Similarly, the optimum pressure ratios for a maximum specific power shown in fig. 3 occur at 6.5 for N2 and 7.5 for CO2.The reason for this can be explained by considering the ratio of their heat capacities (gamma).N2 with a higher ratio of heat capacity tend to have better performance at lower pressure ratio compared with CO2.It can also be noted that the optimum pressure ratios when considering efficiency versus plant capacity (specific power) are different.This is because the recuperator improves the efficiency greatly meaning that less power is required to raise the temperature of the reactor coolant.Furthermore, the specific power or capacity of the plant is dependent on increasing the reactor thermal power.With regard to the pressure ratio, higher pressure ratio will pose higher design challenges.Thus, the pressure ratios obtained for each working fluid used in this study will require an advanced turbomachinery design to achieve optimal performance at design conditions.A compromise between the cycle efficiency, turbomachinery design challenges, size of plant and plant cost is required to meet the Gen-IV expectations.In reality, a slightly lower pressure ratio has been proven to be easier in terms of aerodynamic design, mechanical stresses and satisfactory level of efficiency for closed-cycle gas turbine [15,26,[28][29][30].
Impact on efficiency and specific work
Fig.
( 2) and ( 3) show graphs of efficiency and specific work against pressure ratio respectively.From this analysis, the working fluid cycle efficiencies and specific power seem to peak at 38% and 33%, 171MWs/kg and 101MWs/kg for N2 and CO2 respectively at TET of 750 0 C. The cycle efficiency of N2 appears to be higher than CO2 at lower pressure ratios due to its ratio of heat capacities.In addition, the system pressure and temperature at these points are higher than the critical temperature and pressure of N2, hence, above this points its thermodynamic properties are usually stable at 3.35 MPa and 126.2K.CO2 undergoes rapid changes in its thermodynamic properties due to variations of the system pressures and temperatures, which negatively influence the cycle efficiency and specific power, especially below its critical conditions.The design for an optimal CO2 performance will mean that it operates above its critical points and the use of recompression within the cycle will be an added advantage for its selection [8,19].
Looking at the trend of existing nuclear power in operation and theoretical concepts, cycle efficiency above 40% will seem to be at a competitive advantage for future development and deployment.Hence, increasing the TET will be desirable to achieve a competitive efficiency and compact system.
Impact of Turbine entry temperature
The turbine entry temperature was increased to 850 0 C, and 950 0 C in repeated simulations performed.This was based on the limitation of material technology level and the nuclear reactor capability.The effects of the temperature increase on the turbine entry temperature and the impact on efficiency are illustrated in Fig. ( 4) and ( 5).From an ideal thermodynamic stance, the overall cycle efficiency is independent of the turbine entry temperature; however, for this case, the gas properties were modeled as nonideal, hence, changes in temperature and pressure have an effect on the working fluid properties and the cycle performance is impacted.Generally, a TET increase results in a corresponding increase in the cycle efficiency and specific power.Notwithstanding the benefits, operation at high TET always requires trade-offs, typically between capital cost and operational cost.This impact was observed to be more prevalent on CO2 than on nitrogen.The specific power of CO2 increased by 42% as TET moves from 750 0 C to 950 0 C, while N2 increased by 36% at their respective optimum pressure ratios.Similarly, their cycle efficiencies increased by an average of 15% respectively.
Impact of compressor inlet temperature
The compressor inlet temperature (CIT) of the power plant is dictated by the environment in which the cycle waste heat is rejected.The effect of the CIT on the cycle efficiency and shaft power, in the temperature range of 27 o C to 67 0 C is presented in Fig ( 6) and (7).The compressor pressure ratio was fixed at a design TET of 750 0 C. The general trend from the results indicates that the cycle efficiency and power decreases as compressor entry temperature increases.This is due to increases in compressor work, meaning that the increase in temperature puts more demand on the turbine to able to drive the compressor.On average, a drop of 1% in efficiency was observed with corresponding increase in the entry temperature.These changes on the compressor inlet temperature can have a direct impact on the operational cost of the system.The preferred design criterion for closed-cycle gas turbine compressor inlet temperature is to ensure that the cycle is designed to peak at the fluid critical temperature in order to achieve the optimum performance from the system due to the stability of the thermodynamic properties above its critical temperature.
Impact of Compressor inlet pressure
Typically, the use of a high compressor pressure minimizes the system weight.Since the working fluids are nonideal and its properties depend on the pressure and temperature prescribed for the system, changes in the compressor inlet pressure will have a slight impact on the cycle performance.In view of the results in Fig. (8), an increase in the compressor inlet pressure suggests a 0.1% increase in the overall cycle efficiency for nitrogen although it is expected to that this increase would have a significant impact on the structural integrity of its components.Similarly, the same trend is noted for CO2; the efficiency gained due to increase in pressure is approximately averaged at 0.2%.This is because as the inlet pressure increases, it approaches the critical pressure of CO2, and its thermodynamic properties are newar stability.For working fluids that behave like ideal fluid such as helium, changes in compressor inlet pressure do not have any significant impact on the cycle performance.
Fig. 8 Influence of compressor inlet pressure on the cycle efficiency
As the case is with the (CIT), designing for a closedcycle gas turbine requires high compressor inlet pressure to achieve the working fluid critical properties.For CO2, the objective is to achieve above its critical pressure of 7.38 MPa, while for N2 the design should aim at achieving above 3.35MPa.
CONCLUSION
This paper presents a thermodynamic performance comparison between nitrogen and CO2 as potential working fluids proposed for a Gen-IV nuclear reactor, which indirectly coupled to a recuperated closed-cycle gas turbine.The main findings are summarized below: • Recuperation improves cycle efficiency at the optimised pressure ratio by utilizing exhaust gas at exit to the turbine and returning it back into the cycle.• Selecting an optimum pressure ratio by design is based on reasonable compromise between cycle efficiency, component design constraints, and cost.• In comparison, the results indicate that N2 outperforms CO2 at lower pressure ratio.This is due to the stable thermodynamic properties as it first approaches its critical point.However, the introduction of recompression for a CO2 cycle could enable better performance.• The cycle pressure ratios between 2 and 3 seem to be within the design constraints to achieve optimum performance for both fluids.• The gas turbine Gen-IV cycle efficiency is greatest for working fluid with higher gamma (γ) at lower pressure ratio.• The choice of working fluid for Gen-IV design considers the availability of the working fluid, safety measures and the impact its chemical properties on the system and environment.
• Increasing the TET has a significant influence on the cycle efficiency and specific power.However, as one of the major design constraints, the limit to which this is achieved is dependent on the material technology.• Both compressor inlet temperature and pressure impact the performance of the working fluid since changes in these parameters have a slight impact on their thermodynamic properties.As a design constraint, the level of pressurisation within the cycle is dependent on the mechanical structural integrity of the system.• Validation is recommended for tools such as the one developed for this study.This will enable optimisation to improve the applicability and accuracy, thereby encouraging it use and reducing the costs associated with extensive test activities.
Fig. 1 Schematic
Fig. 1 Schematic Representation of the Gen-IV Reactor indirectly coupled with a recuperated closed-cycle gas turbine
Fig. 4
Fig. 4 Cycle efficiency against pressure ratio at different TETs
Fig. 6
Fig. 6 Variation of cycle efficiency at different compressor inlet temperature
Fig. 7
Fig. 7 Variation of specific power at different compressor inlet temperature | 4,375 | 2020-01-29T00:00:00.000 | [
"Engineering",
"Physics"
] |
Gender Aspects in Driving Style and Its Impact on Battery Ageing
: The long and tiring discussion of who are the best drivers, men or women, is not answered in this article. This article, though, sheds some light on the actual differences that can be seen in how men and women drive. In this study, GPS-recorded driving dynamics data from 123 drivers, 48 women and 75 men, are analysed and drivers are categorised as aggressive, normal or gentle. A total of 10% of the drivers was categorised as aggressive, with an even distribution between the genders. For the gentle drivers, 11% of the drivers, the men dominated. The driving style investigation was extended to utilise machine learning, confirming the results from statistical tools. As driving style highly impacts a vehicle’s fuel consumption, while switching over to battery electric vehicles it is important to investigate how the different driving styles impact battery utilisation. Two Li-ion battery cell types were tested utilising the same load cycle with three levels of current amplitude, to represent accelerations for the three drive categories. While one cell type was insensitive to the current amplitude, the highly energy-optimised cell proved to be sensitive to higher current amplitudes, corresponding to a more aggressive driving style. Thus, the amplitude of the dynamic current can for some cells be a factor that needs to be considered for lifetime predictions, while it can be neglected for other cells.
Introduction
Do women drive better than men?If only judging by the statistics, then yes.Men stand for the larger part of the traffic law violations, e.g., in Sweden, 87% of traffic violations during 2020 were conducted by men [1].Men are also over-represented as drivers involved in traffic accidents [2].An often heard argument for the skewed numbers is that men drive more and longer distances compared to females, which is true.However, when the number of traffic law violations are normalised over driven distance, men are still over-represented in traffic violations and accidents [3,4].A British study showed that men have two times higher risk per driven km to be involved in a fatal accident than female drivers [5].The main reason for this has been attributed to higher risk taking and overestimation of one's driving skill reported for men and especially for younger men [6,7].
The statistics clearly show that there is a difference in the risks taken by men and women while driving, where men tend to drive faster than women [1,[3][4][5]8,9].It is also well established that driving style depends on the physical and emotional state of the driver.Driving style studies thus typically include characteristics such as the somatic, behavioural and emotional conditions of the driver as complements to the recorded drive data [10].Aggressive driving is often attributed to be unsafe, including behaviour such as speeding, tailgating, cutting in front of another driver and then slowing down, running red lights, weaving in and out of traffic, changing lanes without signalling and blocking cars attempting to pass or change lanes [11].When only GPS-driving-dynamic data are available, several of the indications of an aggressive driving style cannot be used.The usable entities are speed, acceleration, deceleration, road type and their distribution in time and given journey.Eboli et al. [11] used GPS-logged acceleration and speed data to classify safe and unsafe driving based on the friction coefficients of the car tires on dry road pavement.
Driving style is not only linked to safety; it also impacts fuel consumption [12].High speeds and high accelerations result in higher fuel consumption.Other aspects that affect fuel consumption are the drive-train efficiency, vehicle weight and engine [13,14].A calmer driving style is less energy-and power-demanding [10,12].As battery electric vehicles (BEVs) are becoming more and more popular, the impact of different driver behaviour and styles on electric vehicle (EV) has also become of high interest.Several studies have compared moderate versus aggressive driver behaviour and have shown that moderate driving behaviour can reduce the energy consumption by as much as ∼30% [15,16].An aggressive driving style results in higher energy and power consumption, i.e., higher average discharge current and larger current fluctuations due to higher average vehicle speed and higher acceleration and deceleration.
Energy consumption is not the only aspect that is important when considering BEVs; the ageing of the battery pack is also highly dependent on the usage.It is well known that the type of load profile heavily impacts Li-ion battery (LIB) ageing and, thus, the cycle lifetime.The main aspects that have proven to accelerate ageing are high state of charge (SOC), large depth of discharge (DOD), high currents and high ambient temperature [17][18][19][20][21][22].However, there have only been a few ageing studies conducted on how different current frequencies and pulse amplitudes impact battery ageing [23][24][25][26] and even fewer related to how dynamic drive behaviour impacts LIB ageing [27,28].
The purpose of this article is to present results from a study investigating different driver styles based on GPS recordings, to answer how gender influences the selection of vehicle and driver style and how this impacts battery degradation.The drivers are categorised based on speeding, acceleration, deceleration and relative positive acceleration (RPA).Driving style and how this relates to gender is investigated by using GPS-recorded drive pattern data: acceleration, road type and speed.This research is conducted using low-dimensional statistical analysis as well as machine learning (ML).Subsequently, the driving style analysis is used to experimentally investigate how driving style impacts battery degradation and how different levels of acceleration and deceleration (regenerative breaking) impact battery ageing.
Materials and Methods
This work includes two studies, a driving style analysis based on GPS-recorded driving dynamics data and lifetime testing of two different 18650 Li-ion battery cells.The driving style analysis included gender aspects where the gender of the driver was determined from questionnaire answers; all participants took part in a questionnaire before the recording of the GPS data started.The cells were tested with driving cycles developed to represent three different driving styles.The development of the driving cycles that represent the different driving styles was based on the driving data analysis.Additionally, a support vector machine (SVM)-based ML approach was scripted to discern driver behaviour using the dynamics of the driving.
Driving Data Analysis
The data analysis is based on GPS-recorded driving dynamics data, collected during 2010-2012, in the Swedish car movement data project [29].The project recorded data from 700 vehicles and 123 of these vehicles were owned by single households who reported themselves belonging to one of the binary genders in the questionnaire sent out to the participants before the start of the data recording [29].The selection of participants was randomised; vehicles with a home address in the region of Västra Götaland were randomly selected from the Swedish vehicle registry.Participation was voluntary and a GPS tracker was sent to be installed by the driver, if they accepted to participate and answer the accompanying questionnaire.A thorough description of the GPS equipment and data acquisition procedure can be found in [29].All vehicles in this study were of internal combustion engine (ICE) type.
Descriptive Statistics of the Driving Data
When analysing and characterising the driving patterns, level measures, distribution measures and oscillatory measures were used.The level measures used were maximum, average and standard deviation of speed (v), acceleration (a) and deceleration (r).The distribution measures were percentage of time in different speed intervals, road types and speeding.The oscillation measure used was the RPA.Important parameters for fuel consumption were found to be acceleration with high power demand, speed oscillation, extreme acceleration/deceleration and number of stops [12].
Inferential Statistics of the Driving Data
In addition to the descriptive statistical analysis of the driving style, a SVM-based algorithm from SKlearn [30] was used to further evaluate gendered driving style differences.The algorithm was trained with a gender-balanced training data set consisting of 14 randomly selected drivers, 7 females and 7 males.
In the proposed approach, the algorithm first chooses a feature randomly, quantifying its maximum and minimum values.By partitioning randomly the selected feature between these extremities, the algorithm subsequently tries to isolate an observation by testing several splitting schemes while assigning an integer value for the number of partitions.The overall structure of the partitioning is thus a forest of tree structures with branching into smaller partitions.The registered number of partitions needed to isolate a sample is the travelled distance from the root to the leaves of the recursive tree structure.After averaging this distance for each tree over the complete forest of partitions, the algorithm reaches a measure of anomalies.The decision function depends on the average distance to the investigated samples and the samples reached by the shortest paths are predicted as outliers [30].As the approach does presume a Gaussian behaviour, it appropriately allows for a more tailored analysis of the data at hand.
Battery Lifetime Testing
The battery lifetime testing was conducted on two different 18650 cylindrical cells with Lithium Nickel Manganese Cobalt oxide (NMC) positive electrodes.The LG INR18650 MJ1 (MJ1) highly energy-optimised NMC811-Graphite with 3.5 wt% Si (3.5 Ah, 240 Wh/kg) [31,32] and the Samsung ICR18650-22P (22P) NMC111-Graphite (2.15 Ah, 175 Wh/kg) [33,34].Both cell types have 0.2 C in standard discharge current and a maximum of 10 A in discharge current.The standard charge current is 0.5 C. The voltage interval for the MJ1 is 2.5-4.2V and for 22P it is 2.75-4.2V.
Test Setup
The cells were tested using the Neware BTS4000 system, 5V20A and 5V6A testers with CA-4008 temperature thermistor auxiliary equipment in room temperature (Figure 1).The 5V6A testers have inbuilt cell-holders, for the 5V20A tester cell-holders, shown in Figure 1, were used to ensure good connections.
Load Cycles
The tests were conducted in two separate groups, one utilising constant current (CC) cycles and one using dynamic current profiles.This was done in order to separate the dynamic and CC impact on cell ageing.All tests had the same charge rate of 0.5 C with CC and constant voltage (CV) for both the MJ1 and 22P cells.The synthetic CC load cycles were only tested for the MJ1 cells.The discharge was conducted at three current levels, 0.2 C, 0.4 C and 0.6 C. The dynamic current profile was derived from an aggressive drive pattern containing 50% urban (v < 50 km/h), 30% rural (50 < v < 90 km/h) and 20% highway (>90 km/h) driving time.To be able to analyse the contribution from acceleration and deceleration from different driving styles, the selected driving pattern was adjusted to gentle and normal driving styles.Different driving styles result in different speeds and accelerations/decelerations of a vehicle, which for a BEV results in different current drawn from the vehicle battery.Higher vehicle speed draws a higher average current from the battery and high accelerations result in high peak current.To be able to separate the transient behaviour, acceleration and deceleration, some calculations on and adjustments of the data were needed.The current was calculated by using a simple force balance, summarising all forces acting on the vehicle, where the resistive force is composed of aerodynamic drag, rolling resistance and grading force, To study the difference in transient behaviour, the same speed profile was assumed for all cases.This resulted in the same resistive forces in all cases, leaving the only difference to be the size of F acc .After these simplifications, the current was calculated from i = (P acc + P resistive )/V nom (3) and scaled to meet the maximum short time discharge current of 10 A for the test cells.
The standard charge level for the MJ1 (0.5 C, 1.7 A) was used as a limit for regenerative breaking, thus limiting all charge pulses to 1.7 A.
The power levels required for normal and gentle drivers' accelerations and decelerations are lower compared to those of the accelerations and decelerations recorded for the aggressive drivers.Thus, the aggressive drivers' accelerations needed to be scaled to represent the accelerations for normal and gentle drivers.Six representative drivers, two from each category, were used, where the mean of the maximum acceleration for each trip was used to calculate the scaling factors.Based on these, the normal drivers' acceleration was 85% of the aggressive drivers' acceleration.The gentle drivers' acceleration was found to be merely 32% of the aggressive drivers' acceleration.The power required for the acceleration was scaled according to these values, and the maximum discharge current for the normal driver was 8.6 A and for the gentle driver 3.7 A. Thus, the reader should note that it is not possible to achieve the speed profile in Figure 2a using the current profiles for normal and gentle driving (Figure 2b).However, in this study it was the impact of the transient current that was investigated and as the same speed profile was used for all drivers the mean discharge current would be similar.Thus, the impact of acceleration and deceleration could be studied.As there was a need to limit the regenerative charge pulses, which is often the case in real applications, the mean discharge current was slightly larger for the aggressive and normal case.For the MJ1, the mean discharge current corresponded to 0.2 C for the gentle case, 0.205 C for the normal case and 0.208 C for the aggressive case.For the 22P, the mean discharge current corresponded to 0.332 C, 0.340 C and 0.345 C, respectively.
Driving Data Analysis and Gender Aspects
To be able to conduct an investigation into the gender-related characteristics of driving, the gender of the driver needed to be known.The gender of the driver could only be defined for single households.This study was thus conducted on 123 vehicles registered to single households, 75 male and 48 female drivers.A deeper study was made into the outlier driver behaviour, especially the gentle and aggressive drivers, studying the difference in speed distribution, acceleration, inclination to follow speed limits and time driven on different road types.
Vehicle Selection
Based on the questionnaire sent out to the drivers with the invitation to participate in the project, additional information and user perspectives could be recorded.The drivers were asked to estimate the amount of the vehicle's yearly distance driven by them.While 91% of the male drivers estimated that they drove 100% of the yearly distance themselves, only 81% of the female drivers gave the same estimation.Hence, the females were more prone to lend their cars compared to the males.
Studying the choice of vehicles, additional clear trends could be seen.In 2010, diesel vehicles were still marketed as an environmentally friendly choice.Among the 123 vehicles, most of the men owned diesel vehicles, 67%.However, for the women, 94% drove a diesel vehicle, while only 6% drove a petrol vehicle.This indicates that women were considering more environmentally friendly vehicles to a larger extent than their male peers.This is supported by several studies showing women to have a greater interest in sustainability and sustainable choices in their vehicles [35][36][37].
An additional aspect in line with women considering more environmentally friendly vehicles is the size of the vehicle, especially considering the weight and power.In general, a car with a low power-to-weight ratio (PWR) is more environmentally friendly, as its motor will work in, or closer to, its optimum operation window for a larger part of the driven time.A low curb weight in combination with a small motor will in most cases result in a car with lower fuel consumption [14].
Figure 3 shows the PWR distribution normalised over the number of vehicles in each gender group.The most common PWR range was 63-71 W/kg, with 35% of the male-and 33% of the female-owned vehicles.However, the majority of female-owned cars had a PWR lower than this and the group had a mean value of 59 W/kg, compared to 68 W/kg for the male-owned vehicles.This is also confirmed when looking at the choice of vehicle model: more men than women owned performance and high-end vehicle models.
Trips and Distance
The number of trips and driven distance recorded for each vehicle varied widely.However, the difference on individual level was equalised on the group level.The overall difference in driven distance followed the number of participants for each gender group, where 39% were women and 61% were men, i.e., the women drove 39% of the recorded total distance and the men 61%.
Vehicle Speed
Speed and time distributions were generated for the two genders by separating data into male and female drivers (Figure 4a).The distributions show that the male drivers' speed distribution is skewed to higher speeds compared to the female drivers'.For the women, the distribution peaks at 70 km/h, while for the men, the peak is at 80 km/h.
Relative Positive Acceleration
The RPA was used as an indication for accelerations that demanded high power.It is a measure of a drive pattern's level of acceleration with strong power demand.The RPA factor is high for drive patterns with a great amount of high power demand accelerations and low for patterns with fewer and less power-demanding sequences.The RPA is the integration of the vehicle speed and positive acceleration over the total driven distance [12]: The RPA value was calculated for each trip and the distribution of the RPA for the normalised number of trips is shown in Figure 4b.As can be seen, there is not a big difference in the RPA distributions for the two gender groups.However, the distribution for the women is slightly higher compared to the male drivers.
Acceleration and Deceleration
For the acceleration and deceleration, the time distributions for normalised time are fairly similar between men and women (Figure 4c).However, as could be seen in the RPA, the mean acceleration for female drivers was 0.336 m/s 2 , while it was slightly lower, 0.322 m/s 2 , for the men.In addition, for the deceleration the same trend is visible: women had a mean of −0.368 m/s 2 and men −0.336 m/s 2 .
Defining aggressive acceleration and deceleration can be considered subjective, as it is related to what is considered to be normal acceleration and deceleration.This is also related to the speed of the vehicle.The definition of aggressive acceleration and deceleration used for this work is based on Eboli et al. 's [11] definition of unsafe driving.The criterion to evaluate safe or unsafe driving as a function of speed is based on the maximum friction value in the longitudinal direction between the road surface and tire for dry pavement conditions for rural roads: By applying the definitions of safe and unsafe acceleration, the same trend seen in mean acceleration and deceleration can also be seen here.More female drivers triggered the criterion than men.Even so, the men that were triggering the aggressive acceleration/deceleration criteria were doing it 1.4 times more often, compared to the female drivers.In addition, when looking at the acceleration at different speeds, Figure 4d, the male drivers not only drove at higher speeds, but they also had higher accelerations at higher speeds compared to the female drivers.Of the 20 drivers most often triggering the aggressive acceleration and deceleration criteria, 14 were male drivers and six were female drivers.Interestingly, the same trend can be seen when looking at the 20 drivers with lowest numbers of times triggering the criteria (or not at al), where five were females and 15 were male drivers.
Inclination to Follow Speed Limits
Two investigations were conducted to evaluate the time driven at different road types.Based on the GPS position, the road type and speed limit of the road were extracted from road grid data from the Swedish Transport Administration [38].The first investigation only took into account the actual vehicle speed and evaluated the percentage of time driven below 50 km/h (urban), in 50-90 km/h (rural) and above 90 km/h (highway).The second investigation instead considered the road classification and speed limit for the road.Fascinatingly, the investigations resulted in two rather different time distributions.
In Figure 5a, the result from the first investigation, percentage of time driven at different speeds shows that female drivers spent more time at lower speeds compared to the male drivers, as shown in the speed distribution analysis.The female drivers spent more than half of the driven time at speeds lower than 50 km/h.The male drivers instead spent more than half of the driving time at speeds above 50 km/h.When investigating time distribution based on the road speed limit, Figure 5b, the difference in the percentage of time driven in the different road categories is however minor.
Based on road speed limit, Figure 5b, a third of the time was spent driving on roads with speeds less than 50 km/h and around 27% of the time on highway, for both men and women.However, as seen in Figure 5a the vehicle speed data did not capture this.The discrepancy is larger for the female drivers, where 18% extra time was spent at speeds below 50 km/h.To further evaluate the inclination to follow the speed limits, the speed breach and time spent speed breaching was investigated.In Table 1, a compilation of six different speed breach levels are summarised with the resulting percentage of drivers in each gender group violating the different speed breach levels (one driver corresponds to women 2.08% and to men 1.33%).
The results show that in this study, a large part of the drivers violated the speed limit on at least one occasion.A total of 83% of the women and 75% of the men mildly over-sped (more than 5 km/h over the speed limit) at least 5% of the driven time.However, the percentage of women speeding reduced notably with increasing percentages of time speed breaching.Additionally, when speeding, the men did it for a larger percentage of the time driven and at higher speeding levels.Looking at drivers who were speed breaching more than 10 km/h, the men are over-represented.Interestingly, both gender groups had a few drivers that stood out as more extreme speeders.
Looking at drivers with no or less than 5% driving time with speed breaching, Table 2, we find that the remainder of the drivers, 25% of the men and 17% women, fulfil this criterion.The percentage of men with very few occasions of speed breach was consistently higher than the percentage of women drivers.Drivers with less than 1% of time speeding or never speeding consist of five men and only one woman.By combining the above criteria, speeding, acceleration, deceleration and RPA, the drivers were categorised into aggressive, normal or gentle drivers.An aggressive driver was assumed to have a higher measure than the mean value plus the standard deviation, and a gentle driver was assumed to have a measure lower than the mean value minus the standard deviation.All criteria were then weighted together, for an overall evaluation of the driver.Again, the mean and standard deviation were used as criteria for aggressive and gentle drivers.No driver was considered aggressive in all four criteria; however, five drivers were considered gentle drivers in all four criteria: four men and one woman.In total, of the 14 drivers considered gentle drivers, only three were female drivers, which was not expected beforehand.For the aggressive drivers, the distribution between the genders was more even, with 13 aggressive drivers, five women and eight men, corresponding to 10% of the drivers for each gender group.
Inferential Quantification of Driving Style by ML
A further investigation of aggressive and gentle driving styles was conducted using ML, an isolated forest algorithm from the SVM family of classifiers [30].The SVM was run on high-performance computing (HPC) nodes with limited computational resources.Due to this, the data set had to be reduced to a sub-group from the 123 drivers.As the initial data set was unbalanced in the disadvantage in the number of female drivers, care was taken in the choice of the ML subgroup on which the Isolation Forest outlier analysis was applied.Accordingly, an arbitrarily chosen subset of 14 drivers with a 50/50 balanced gender distribution was used.The unsafe driving criteria used in the descriptive statistical analysis, Equations ( 5) and ( 6), were used for the ML algorithm.The ML algorithm classified 72.8% of the outlier behaviour as male drivers and the remainder outliers as female drivers.
A visualisation of the results of the algorithm can be seen in Figure 6.For pedagogical reasons, the outlier plot is in a two-dimensional space and the total dimension of the data set is a five-feature hyperplane.The reported ML approach is intrinsically scalable to higher-dimensional data.The contour background is generated by the algorithm and gives an estimate of the driving style: the lighter the colour, the gentler the driving style is; the darker the colour, the more aggressive the driving style is.As can be seen in Figure 6, the zones identified by the SVM algorithm managed well to capture the driving style.The yellow circles represent a registered driving point, to the left an aggressive and to the right a gentle driving style.
Battery Cell Lifetime
The lifetime testing was separated into two test batches, where all tests were charged with 0.5 C CC-CV.The first group tested with CC discharge cycles from 0.2 C to 0.6 C, representing different power demanding driving cycles, was only applied to the MJ1 cells.For the CC tests, the expected results of increased ageing for increased C-rate were observed (Figure 7).However, the large spread between the duplicate cells was unexpected.It is not uncommon to see a small spread in performance for mass-produced cells; however, the spread seen for the MJ1 cell was surprisingly large.Spread in cell performance has been reported by other researchers [39][40][41][42].The large spread made the analysis more difficult; however, for tests with increasing C-rate, the expected increase in cell ageing could be seen.In the second test batch, with the dynamic current profiles described in Section 2.2.2, both MJ1 and 22P cells were included.This test was designed to investigate how the transient current amplitude, due to acceleration and deceleration, impact the battery ageing.Again, MJ1 cells displayed a large spread between duplicates (Figure 8).Still, cells tested with the aggressive driving cycle showed a larger loss of capacity than those with the normal and gentle driving cycles.Surprisingly, cells tested with the gentle driving cycle had a larger resistance increase (Figure 8b).For 22P cells, much more uniform ageing behaviour between the duplicates was seen.Contrary to MJ1 cells, there could not be seen any difference in the ageing trends due to the transient current amplitude (Figure 8a).If any, the normal driver case introduced a marginal resistance increase; however, the difference was too small to draw any conclusions from.
Discussion
This study is based on a rather small number of vehicles from single households where the gender of the driver could be determined.The analysed data are GPS-recorded driver data and questionnaire replies from the drivers as well as vehicle brand, type and specification.Of the 123 drivers, 75 were men and 48 were women.The unbalanced number of drivers introduces an error that has not been calculated.Despite the unbalanced number of men and women in this study, the two groups drove almost the same amount of time on the different road types, 33% in urban, 40% in rural and 27% in highway.Thus, the road type distribution is surprisingly similar for the two groups.When it comes to vehicle selection, there was a clear difference between the gender groups.The findings in this study support previous studies which concluded that women tend to select more environmentally friendly vehicles of smaller size, with lower power capabilities, i.e., low PWR.
When looking at the recorded speed, similarities could be seen.Speeds were often lower than the allowed speed limit, very likely due to traffic congestion during rush hours.However, there are some interesting differences between the two groups.Women spent more than 50% of the time at speeds <50 km/h, 18% more than expected based on road type, while men spent 47% of the time at speeds <50 km/h, only 14% more than expected based on road type.Men also had a 10 km/h higher shift in the speed distribution.The higher speeds recorded for men were also reflected in speed limit violations.Men were speeding for longer parts of the driven time and at higher levels of speed breach compared to women.However, more than 80% of the women and 75% of the men were speeding at some point.
For the RPA, women had slightly higher values compared to men.However, when looking at acceleration, men were over-represented as drivers with aggressive or unsafe acceleration, 20% compared to only 6% for women.Interestingly, for deceleration, the trend was reversed.One reason for lower peak acceleration values for women could be the power capability of their vehicles.Women had to a larger extent vehicles with lower PWR and thus vehicles with limited acceleration capability.
Combining the four criteria, 10% of the drivers were labelled aggressive drivers with 38% women and 62% men.Despite the high speed breach levels and accelerations seen for several of the men, normalising for each gender group, the contribution of aggressive drivers was even from both gender groups.An additional fascinating result was that out of the gentle drivers, 11% of the drivers, only three were women.The analysis has shown that there is a larger spread in driver style within the male gender group, while the female drivers cluster as a group which exhibits less variance in driving dynamics.Thus, for this set of drivers, men could be concluded to be the gentler drivers but also the most aggressive.
For the inferential quantification, the computational resources were limited.The SVM was applied to a sub-group of 14 drivers total.However, the initial data set was unbalanced, while the ML subgroup for the isolation forest outlier analysis was applied to an arbitrarily chosen subset of drivers with a 50/50 balanced gender distribution.Despite the limited number of drivers, the inferential statistical analysis results conform with the initial descriptive statistical analysis results.Drivers which were outliers comprised to 70% by individuals who classed themselves as the gender man.The introduction of ML therefore enables automatised classification with higher numbers of features.The reported proof of concept using real driving data enables the introduction of more tailored products and supports sustainable resource usage.
Driver style is closely correlated with fuel consumption.A high RPA indicates high fuel use, and similarly does speed oscillations, high acceleration/deceleration and number of stops.Women had higher RPA and larger decelerations, while men had the more extreme accelerations and higher speeds for longer times.
In a BEV, energy and power are provided from the battery.It is well established that higher C-rates increase battery degradation, which was also confirmed for the MJ1 cells.An aggressive driving style, defined by higher speed and high acceleration, is more energy-and power-demanding and will result in higher average discharge current and peak transient currents.The experimental results on the cell level was inconclusive, yet gave some important indications.The MJ1 is a highly energy-optimised cell, including small amounts of silicon in the graphite electrode; for this cell, the amplitude of the transient current seems to be important.However, for the 22P, this seems not to impact ageing negatively.
An important note is that the 22P has been on the market for several years and has a well-established chemistry, while the MJ1 is one of the first highly energy-optimised cells on the market with silicon containing negative electrodes.Thus, silicon-containing electrodes are still in the early development stages.This can also be seen in the larger spread of the duplicate cells.Still, the results from this small study show that the current amplitude of the transient current can for some cells impact the ageing noticeably.
Conclusions
So, do women drive better than men?This study cannot answer that; however, it has concluded that there is a difference between how women and men drive.The average female driver drives at lower speeds compared to the average male driver.When separating the two gender groups' drive behaviours, it can be seen that the male drivers have a much broader driver distribution compared to female drivers, which are a more homogeneous group.Hence, the most aggressive but also the gentlest drivers can be found among the men.
There is a large number of male drivers that drive at higher speeds, use higher accelerations and spend more time speeding and at higher speed limit breach.However, a majority of the female drivers violate the speed limit as well, though with a lower speed limit breach.Interestingly, the average female driver also tends to have higher acceleration and deceleration compared to the average male driver.This may be attributed to the gearing ratio practices by original equipment manufacturer, applied for smaller PWR vehicles.
Another clear trend seen is that men and women choose different types of vehicles.Women tend to select smaller and lighter vehicles with lower PWR compared to men.Performance and high-end vehicle models were more common among the vehicles owned by men.
For electric vehicles, the difference in driving style impacts battery ageing.The main impact for battery ageing will be from average discharge current.The amplitude of the dynamic part of the current only influences the ageing to a smaller extent.However, different battery chemistries show different levels of sensitivity to the amplitude of the dynamic current.The highly energy-optimised cell, MJ1, proved to lose more capacity for higher amplitudes of the dynamic current, though the lower amplitude generated a larger resistance increase.In contradiction, the P22 cell showed no sensitivity to the amplitude of the dynamic current.Thus, the amplitude of the dynamic current can for some cells be a factor that needs to be considered for lifetime predictions, while it can be neglected for other cells.
Figure 1 .
Figure 1.Test setup used for the lifetime testing of the MJ1 and 22P cells.
Figure 2 .
Figure 2. (a) The speed profile for the driving cycles used for the lifetime testing.(b) The calculated current after scaling the acceleration according to the analysis of the different driving styles.
<44 45 -Figure 3 .
Figure 3. Distribution of vehicle PWR for the vehicles owned by women and men.
Figure 4 .
Figure 4. Distribution of (a) time driven at different speeds and (b) RPA calculated for each trip by women and men.(c) Distribution of time driven with different acceleration and deceleration for the vehicles.(d) Recorded speed and corresponding acceleration/deceleration for the drivers compared to the unsafe driving criteria for aggressive acceleration/deceleration (red dashed line).
Figure 5 .
Figure 5. Percentage of time driven on urban, rural and highway road type based on (a) vehicle speed and (b) road speed limit.
Figure 6 .
Figure 6. Isolation forest quantification for driving style: the darker the colour of the contour, the more aggressive the driving style is.The yellow circles represent a registered driving point for two different drivers, to the left an aggressive and to the right a gentle.
Figure 7 .
Figure 7. (a) C/10 capacity degradation and (b) resistance increase for the MJ1 cells tested with CC with three different C-rates.
Figure 8 .
Figure 8.(a) C/10 capacity degradation and (b) resistance increase for the two cell types when tested with the driving cycles corresponding to the three different driving styles.
Table 1 .
Comparison of drivers speeding and the time spent speeding at different levels of speed breach in each gender group in percentage.
Table 2 .
Comparison of the drivers with least time spent speeding, in percentage for each gender group. | 8,089.2 | 2022-09-16T00:00:00.000 | [
"Engineering"
] |
Strategies for Improving the Quality of Polling Service in Wireless Metropolitan Area Network
. Abstract. Four kinds of service types are defined in IEEE 802.16. In order to provide the Quality of Service (Qos) for different services, the system must use a reasonable resource allocation method and scheduling algorithm to efficiently and fairly allocate bandwidth resources. Although in the IEEE 802.16 MAC, for the uplink real-time polling service (rtPS) and non-real-time polling service (nrtPS) business type of data transmission are used to polling, but do not provide business-based services. In this paper, the distinction between priority service of rtPS and nrtPS is carried out, and the simulation experiment is used to analyze the performance characteristics of the protocol under the distinction of high and low priority. Wherein the theoretical values of the average delay and the information packets are compared with the experimental values. The average query cycle and throughput are also evaluated. Which proves the validity of the improved service strategy and improves the service characteristics of the system.
Introduction
In the fields of industrial control, computer time division multiplexing, communication system protocol and computer network protocol, the control mode of polling system has been widely used because of its fairness and usability.The analysis and research of the polling system are also developing continuously.With the deepening of the research, the application of the polling system has been further expanded.The strategy of the polling service can be divided into three categories: exhaustive service, gated service and limited-k service.In practice, prioritybased services have a wide range of needs, so the distinction priority service is very necessary.In the broadband network, it is necessary to provide end-to-end service quality assurance for different types of service flows to satisfy the requirements of the traffic flow for broadband, delay jitter and packet loss rate.IEEE802.16 [1,2] , also known as the IEEE Wireless MAN air interface standard, it regulates the underlying standards (including the physical layer and the media access control layer) of the wireless access system in the 2-66 GHz frequency range, and analyzes the system coexistence background, gives the system design, configuration and frequency use scheme.
With the rapid growth of wireless data services and multimedia applications, providing QoS support in the system business has become a basic requirement that must be met.The MAC protocol in IEEE 802.16 defines four types of services, specifying different QoS parameters and basic bandwidth allocation mechanisms for them.
However, the standard does not specify the principles of resource allocation and scheduling between services and similar services.Therefore, how to make effective use of system resources and ensure that different QoS requirements of various services become important and challenging research work.In this paper, we focus on the scheduling problem of real-time polling service (rtPS) (3) Safety Sub layer (SS), responsible for authentication and encryption.
IEEE 802.16 MAC layer implementation of the principle of QoS [3] is to map the MAC packet transmission to the business flow, and map to the connection that has ߦ (n+1)}.The state of the system can be described by the Markov chain, which is aperiodic and experienced., , , , , , , Where F(z) is the probability generation function of the transmission time distribution for the information packets arriving in any time slot according to the exhaustive service rule.And it is assumed that the memory capacity of each site is large enough and don't cause loss of information packets.
Mean queue length
The derivatives of Eqs ( 1) and ( 2) give the mean queue length as: The mean queue length of the low priority service is: (3) The mean queue length of the high priority service is: (4)
The average delay
The average delay of the information packet is the time from the arrival of the information packet into the site memory until it was sent.
The average delay for each site's low priority business information packet is: (5) The average delay for each site's high priority business information packet is: (6)
The average query cycle and throughput
The average query cycle for N sites is: The throughput of the system is the number of , , , , lim , , packets transmitted by the system per unit of time.
The throughput of the system is: (8)
Method of improving QoS guarantee
Based on the traffic priority of the polling mechanism: in the original IEEE 802.16 MAC, the IP head of the Taxonomy of Service (ToS) field that has 3bit priority sub field mapping into the above four different Qos level, see Table 2.The same level of business uses the same type of scheduling, no longer distinguish priority, here we will further subdivide the rtPS business and nrtPS business to distinguish priority services."011" of rtPS and "001" of nrtPS in the ToS field are the high priority service identifier in the same service, and "100" of rtPS and "010" of nrtPS are used as low priority service identifier in the same business.In the IEEE 802.16 MAC polling mode, the BS periodically queries the different business queues of the SS site.First, the higher priority [8,9] business queue is authorized to send the information in the exhaustive service mode, that is, it sends the full information packets of the queue and the packets of information to be reached during the transmission, until the class of business queues on the query SS site is empty.When the SS high priority service queue is empty, the BS starts to query the service queue with the lower priority and sends the information according to the policy of the gated service.That is, it sends the information packets that arrive before the transmission start.The information packets arriving during transmission will wait for the next chance to send.
In this way, the service of the polling mode in the IEEE 802.16 MAC is distinguished by the different service policies of the exhaustive service and the gated service, which makes it have better flexibility and QoS guarantee.ToS sub field settings, providing services to the SS site in business priority.Table 3 shows that the average information packets increases with the increase of the arrival rate, the information packet arrival rate is from 0.0005 to 0.003, and the information packets stored in the high priority service and the low priority service is increased by more than 20 times, but the distinction between high-priority and low-priority services remains clear.Table 4 shows the same rule on the average delay of the information packet.High-priority and low-priority services are clearly differentiated as the arrival rate and system load increase.Table 5 verifies the consistency between theoretical and experimental analysis of query cycles and throughput.
Conclusions
This paper based on the four original types of service scheduling makes some improvements in wireless
is the arrival rate of the low priority service for each site.The generation function, mean value and variance of each station for low priority service arrival process are A(z), ߣ ଶ (i=1,2,…,N)=λ=ܣ ᇱ (1) and ߪ ఒ ଶ ܣ= ᇱᇱ (1)+ ߣ-ߣ ଶ .The low priority queues of the site i (i = 1, 2, ..., N) obtain the transmission right at time ݐ and send the information packets arriving before time ݐ by the gated service rule, the transmission service time of an information packet in the queue follows the identical independent probability distribution with the generation function, mean value and variance are B(z),ߚ=ܤ ᇱ (1) and ߪ ఉ ଶ ܤ= ᇱ (1)+ ߚ-ߚ ଶ .When the site i completes the low-priority service information packet transmission before the station time ݐ arrives, after a switch time ߛ [its generation function, the mean value and variance are R(z), ߛ=ܴ ᇱ (1) and ߪ ఉ ଶ ܤ= ᇱᇱ (1)+ ߚ -ߚ ଶ ], AP(BS or Master) through the multi-point scheduling method to check whether the high priority queue is empty, and query i + 1 site.If the high priority service queue is not empty, the high priority service of each station is unified according to the exhaustive service rule at time ݐ * to send information packets, and the generation function, the mean value and variance are ܤ (z) ߚ ܤ= ᇱ (1) and ߪ ఉ ଶ ܤ= ᇱᇱ (1)+ ߚ -ߚ ଶ .After the high priority business of each station is empty, the i + 1 site starts the low priority service transmission at time ݐ ାଵ .The transmission of the high priority service and the low priority service of each site are alternately accepted service by the AP (BS or Master) and are accessed according to the rules of the exhaustive service and the gated service respectively.At the time of ݐ , ݐ * and ݐ ାଵ ݐ( ݐ< * ݐ< ାଵ )can be expressed by the variables {ߦ
metropolitan area network IEEE 802.16 MAC protocol, and further provides polling scheduling strategies based on service priority.Then, the theoretical value of the average number of information packets and the average delay of the information packet is compared with the experimental value, in the case of high priority business and low priority business.And the theoretical value and the experimental value of the query cycle and throughput are basically consistent.So, they are proved that the improved scheduling strategy improves the flexibility of protocol scheduling and better guarantees the QoS of different service flows.Compared with the traditional wired network, although the existing wireless network service quality and performance there are still some gaps.However, with the popularity of wireless networks and the development of a variety of business applications, the research and improvement of MAC layer QoS mechanism in wireless metropolitan area network has clear practical significance and application value.In the future work, we will continue to study the priority of the polling service strategies, the use of new hybrid services, such as gatedexhaustive service, gated-gated service, exhaustiveexhaustive service to guarantee the service quality of data service, and improve the utilization rate of system resources.
4 Traffic service strategy based on service priority for wireless metropolitan area networks 4.1 Protocol model analysis
U
Table 2 .
Correspondence table for ToS field and IEEE
Table 3 .
Average information packets stored
Table 4 .
Information packets average delay
Table 5 .
Average query cycle and throughput | 2,354.8 | 2017-01-01T00:00:00.000 | [
"Computer Science"
] |
Exploring deep learning techniques for wild animal behaviour classification using animal‐borne accelerometers
Machine learning‐based behaviour classification using acceleration data is a powerful tool in bio‐logging research. Deep learning architectures such as convolutional neural networks (CNN), long short‐term memory (LSTM) and self‐attention mechanism as well as related training techniques have been extensively studied in human activity recognition. However, they have rarely been used in wild animal studies. The main challenges of acceleration‐based wild animal behaviour classification include data shortages, class imbalance problems, various types of noise in data due to differences in individual behaviour and where the loggers were attached and complexity in data due to complex animal‐specific behaviours, which may have limited the application of deep learning techniques in this area. To overcome these challenges, we explored the effectiveness of techniques for efficient model training: data augmentation, manifold mixup and pre‐training of deep learning models with unlabelled data, using datasets from two species of wild seabirds and state‐of‐the‐art deep learning model architectures. Data augmentation improved the overall model performance when one of the various techniques (none, scaling, jittering, permutation, time‐warping and rotation) was randomly applied to each data during mini‐batch training. Manifold mixup also improved model performance, but not as much as random data augmentation. Pre‐training with unlabelled data did not improve model performance. The state‐of‐the‐art deep learning models, including a model consisting of four CNN layers, an LSTM layer and a multi‐head attention layer, as well as its modified version with shortcut connection, showed better performance among other comparative models. Using only raw acceleration data as inputs, these models outperformed classic machine learning approaches that used 119 handcrafted features. Our experiments showed that deep learning techniques are promising for acceleration‐based behaviour classification of wild animals and highlighted some challenges (e.g. effective use of unlabelled data). There is scope for greater exploration of deep learning techniques in wild animal studies (e.g. advanced data augmentation, multimodal sensor data use, transfer learning and self‐supervised learning). We hope that this study will stimulate the development of deep learning techniques for wild animal behaviour classification using time‐series sensor data.
| Behaviour classification of wild animals using time-series sensor data
Knowing when, where and what an animal is doing is fundamental to understanding animal behaviour.Bio-logging is a modern research technique that employs animal-borne data loggers to record a variety of time-series sensor data such as acceleration, temperature, water depth and location data (Fehlmann & King, 2016;Yoda, 2019).Among available sensors, acceleration sensors are commonly used to reconstruct animal behaviours, because many behaviours are characterised by unique patterns of acceleration signals (Yoda et al., 1999).Once the relationship between acceleration signals and behaviours is confirmed through video or direct observation (i.e.labelling or annotation), one can develop a 'behaviour classifier' through supervised learning.Then, it is possible to calculate behavioural time allocation (Yoda et al., 2001) and identify specific behaviours such as prey capture (Watanabe & Takahashi, 2013) from acceleration signals using these classifiers.
Numerous techniques have been proposed to classify animal behaviours, including rule-based methods and machine learning.
Recently, the classic machine learning approach, that is, a nondeep learning, machine learning approach that usually requires feature engineering (see Table S1 for explanations of terms in this study), has succeeded in classifying animal behaviour.Previous studies have used various machine learning models with acceleration data to classify the behaviour of various animals, including birds and mammals (Fehlmann et al., 2017;Nathan et al., 2012;Yu et al., 2021).
For instance, Nathan et al. (2012) tested the effectiveness of five classic machine learning models for behaviour classification of griffon vultures: linear discriminant analysis (LDA), support vector machine (SVM), decision tree (DT), random forest (RF) and artificial neural network (ANN).Yu et al. (2021) tested XGBoost in addition to LDA, DT, SVM, RF and ANN for five species.Although they mainly focused on seeking a suitable model for onboard behaviour classification, they demonstrated that SVM, RF, ANN and XGBoost generally performed better in terms of the F1-score or overall accuracy.
Other methods have been employed, such as the k-nearest neighbour (Sur et al., 2017) and the hidden Markov model (Leos-Barajas et al., 2017).
Only a few studies have leveraged deep learning for wild animal behaviour classification using time-series sensor data.Although not an acceleration-based behaviour classification, Browning et al. (2018) used a multi-layer perceptron to predict diving behaviour in three seabird species using GPS data.Roy et al. (2022) extended their work by using convolutional neural networks (CNNs) and U-Net to predict seabird diving.Recently, Hoffman et al. (2023) applied deep learning models such as CNN and gated recurrent unit to datasets of nine species.As such, there are several examples of deep learning applications on time-series sensor data in recent biologging research; however, this area is still in the early stages of development.The effectiveness of more advanced architectures, such as long short-term memory (LSTM) and self-attention mechanism, as well as various training techniques, such as data augmentation, have not yet been extensively tested on acceleration data from wild animals.
| Behaviour classification techniques for domestic animals and humans
Deep learning-based behaviour classification techniques have been employed extensively in domestic animal and human studies (e.g.Pan et al., 2023;Singh et al., 2021).In the acceleration-based behaviour classification of domestic animals including horses and lactating sows, deep learning models such as CNN have been developed as a technique for automatically monitoring behaviours and obtaining information about animal health and welfare (e.g.Eerdekens et al., 2020;Pan et al., 2023).Although these techniques successfully classified multiple behaviour classes (e.g. six or seven classes), collecting data from domestic animals appeared to be easier than for wild animals.
In human activity recognition (HAR), Ordóñez and Roggen (2016) demonstrated that DeepConvLSTM (DCL), which combines CNN and LSTM, achieved high performance on datasets of daily activity and assembly-line workers' activity.Singh et al. (2021) proposed a model with an additional self-attention layer after the DCL architecture (called DeepConvLSTMSelfAttn (DCLSA) in this study) that could outperform DCL in various human activity datasets.More recently, HAR studies have been conducted in the industrial domain, with a focus on more specific and complex tasks.Xia et al. (2022) proposed attention-based neural networks to identify the skills of highand low-performing workers.Yoshimura, Maekawa, et al. (2022) proposed a model for recognising complex, ordered and repetitive augmentation, multimodal sensor data use, transfer learning and self-supervised learning).We hope that this study will stimulate the development of deep learning techniques for wild animal behaviour classification using time-series sensor data.
K E Y W O R D S
acceleration sensor, animal behaviour classification, bio-logging, data augmentation, deep learning, machine learning activities during line production systems and packaging tasks in the logistics domain.As such, the application of deep learning techniques in HAR is more varied and advanced than that in wild animals.
| Challenges and our approach
The following key challenges may have prevented the use of deep learning models in acceleration-based behaviour classification of wild animals.First, although deep learning models generally benefit from more training data, it is difficult to collect ground truth data for supervised learning, such as annotations acquired from video data, from wild animals.Second, the data are often imbalanced in terms of behaviour class.For example, the proportion of foraging behaviours in our target animals (streaked shearwaters and black-tailed gulls) is much lower than that of flying or stationary behaviour (Figures S1-S3).Third, there may be various types of noise in acceleration data due to differences in individual behaviour and where the loggers were attached.These three problems are also common in domestic animals and humans but may be more severe in wildlife.Fourth, acceleration data have complexity due to difficult animal-specific behaviours, such as those consisting of micro-actions (e.g.prey capture) and those likely requiring consideration of temporal dependencies for classification (e.g.foraging dive of streaked shearwaters, which consists of a sequence of actions such as diving underwater, following a school of fish and ascending to the sea surface (Tanigaki et al., 2024)).In this study, we explored the effectiveness of stateof-the-art deep learning architectures and related techniques for acceleration-based behaviour classification of wild animals, which may overcome the above-mentioned challenges, using datasets from two wild seabird species.Thus, LSTM and multi-head attention layers could overcome the fourth challenge.We expected that this comparison will provide a better understanding of the performance of each of these components and/or their combinations.We also compared these deep learning models with classic machine learning approaches such as XGBoost, which achieved high performance in a previous study but required feature engineering.
| Datasets
Since 2018, our research team has developed custom-made biologgers with AI that perform real-time behaviour classification using low-power sensors and start camera recording, thus enabling the efficient recording of videos of target behaviours, such as seabird foraging (Korpela et al., 2020).Through this project, we collected acceleration, GPS and water pressure data as well as more than 20 h of video data (excluding those labelled as unknown) from two seabird species in the wild: streaked shearwaters (Calonectris leucomelas) and black-tailed gulls (Larus crassirostris).Data from 28 streaked shearwaters were collected on Awashima Island, Japan, from 2018 to 2022, and data from 27 black-tailed gulls were collected on Kabushima Island, Japan, in 2018, 2019 and 2022 (Table S2; Figures S1-S3).For streaked shearwaters, all the loggers were attached to the animals' backs (Figure S2), whereas for black-tailed gulls, 18 were attached to the animals' abdomens and the remainder were attached to their backs (Figure S3).
The fieldwork on streaked shearwaters was carried out with the permission of the Animal Experimental Committee of Nagoya University (GSES2018-2022) and the Ministry of the Environment, Japan.The fieldwork on black-tailed gulls was carried out with the permission of the Hachinohe City Board of Education (2018Education ( -237, 2019Education ( -329, 2022-301) -301) and Aomori Prefecture (2018-4036, 2019-3033, 2022-3050) as well as from the Ministry of the Environment, Japan, to instal the structure (1803201,1804042,1903281) with approval from the Nagoya University Animal Experiment Committee (GSES2018, 2019 and 2022).
Using video data, we defined six behaviour classes (stationary, bathing, take-off, cruising flight, foraging dive and dipping) for streaked shearwaters and six behaviour classes (stationary, ground active, bathing, active flight, passive flight and foraging) for blacktailed gulls (Figures S1-S5).See Table S3 for more descriptions of each behaviour.
Acceleration data were recorded at 25 or 31 Hz.Those at 31 Hz were first up-sampled to 1000 Hz using the linear interpolation method and then down-sampled to 25 Hz because 31 Hz is not a multiple of 25 Hz, making it difficult to directly employ down-sampling while preserving the shape of the original signal.The time windows were extracted using a sliding window size of 50 samples (2 s) and an overlap rate of 50%.We labelled the data primarily using video data from animal-borne cameras, but also using GPS and water pressure data when the video footage was not very clear.Labelling was performed in consultation with ecologists who studied each target species.To avoid complexity, windows containing two or more unique behaviour class labels were discarded.In addition, we did not use windows with many missing data.We obtained 42,526 labelled windows from 28 streaked shearwaters and 32,391 from 27 black-tailed gulls.The number of labelled windows for each class was heavily imbalanced (Figures S1-S3).Figures S4 and S5 show examples of typical windows for each behaviour class in streaked shearwaters and black-tailed gulls, respectively.Acceleration values greater than +8G or smaller than −8G were clipped to address measurement errors.We did not perform other data pre-processing such as standardisation because pipelines and hyperparameters of pre-processing heavily rely on domain-specific knowledge and we wanted to eliminate the effect of it on our experiments.
| Model architectures and hyperparameters
We implemented the CNN, LSTM, DCL, DCLSA, DCLSA-RN, Transformer and CNN-AE, as shown in Figure 1.See the fourth paragraph of Section 1.3 for the reasons why we used these models in this study.
• CNN: CNN has four convolution layers, the number of convolution filters is 128, the kernel size is 5, the stride length is 1, and the amount of padding is 2. Batch normalisation and ReLU layers followed each convolution layer.
• LSTM: LSTM has one LSTM layer and one dropout layer; the number of LSTM hidden units is 128, and the dropout rate is 0.5.
• DCL: The original DCL has two LSTM layers after four convolution layers (Ordóñez & Roggen, 2016), but our DCL has one LSTM layer, following Singh et al. (2021) and Yoshimura, Morales, et al. (2022).Our DCL is a combination of the above CNN and LSTM, and the parameters are the same as above.
• DCLSA: The original DCLSA has an additional self-attention layer after the LSTM layer (Singh et al., 2021), but our DCLSA has a multi-head attention layer with four heads after the above DCL architecture.
• DCLSA-RN: DCLSA-RN is a modified version of DCLSA, with the latter three convolution layers replaced by four residual blocks with shortcut connections (He et al., 2016).The kernel size is 5, and the numbers of convolution filters of the first and second convolution layers in a residual block are 64 and 128, respectively.
• Transformer: Transformer has four transformer encoder blocks, each consisting of layer normalisation, multi-head attention and feedforward neural network layers.
• CNN-AE: CNN-AE mainly consists of three convolution layers as an encoder block and three transposed convolution layers as a decoder block.The kernel size is 5 in all convolution and transposed convolution layers.The number of convolution filters is 128 in the convolution layers and the first two transposed convolution layers, and 3 in the last transposed convolution layer.The time dimension of the data is gradually down-sampled in the encoder block using the max-pooling layer (from 50 to 26, 14 and 8), and up-sampled in the decoder block using the max-unpooling layer (from 8 to 14, 26 and 50).
The raw acceleration data of the three axes (x, y and z) were used as inputs to the deep learning models.Note that the 'features' in Figure 1 were fed into the flatten and linear layers to output an estimate per behaviour class for each window, but they were fed into the linear (for adjusting the data shape), dropout, flatten and linear layers for CNN-AE.We then applied a softmax function to obtain the prediction probability of each class and obtained one predicted class label with the maximum probability, resulting in one prediction label per window.Given that our datasets were imbalanced, we used the WeightedRandomSampler in Pytorch to obtain a class balance within each training batch.The batch size was 128.We used the cross-entropy loss as the loss function.We used Adam as the optimiser, set the initial learning rate to 0.001 and the weight decay to 0.0001, and gradually decreased the learning rate using the CosineLRScheduler in the 'timm' library.Unless otherwise stated, the minimum and maximum number of training epochs were 70 and 100, respectively.The patience parameter for early stopping was 10.
| Evaluation methods
We evaluated model performance by conducting leave-one-ID-out cross-validation (LOIO-CV).In each LOIO-CV fold, only one bird was excluded as a test individual and the model was trained on the remaining individuals.In each fold, the remaining data were divided into training and validation datasets (8:2 random split).The validation dataset was only used for early stopping.We used the macro and class F1-score as performance metrics because our datasets were imbalanced.The F1-score is a harmonic mean of precision and recall.The precision, recall and F1-score are calculated as below: where TP is the number of true positives, FP is the number of false positives, and FN is the number of false negatives.The class F1score is an F1-score calculated for each behaviour class, and the macro F1-score is the mean of the class F1-scores for all behaviour classes.
Note that because many individuals do not have data windows from some behaviour classes, F1-scores for such missing classes become zero when we calculate them for each of the individuals.
Therefore, we calculated F1-scores by aggregating the prediction results of all the folds.To ensure robustness, we repeated LOIO-CV 10 times by changing the random seeds (seed = 0, 1, …, 9).The F1-score was presented as the mean and the standard deviation.
| Experiment 1: Data augmentation and manifold mixup
In the following experiments, we used only DCL or DCLSA and fewer test individuals because our focus was to better understand how and to what extent each data augmentation technique and manifold mixup affected the prediction performance.For both species, we selected six individuals to cover all classes and reflect the differences in year and attachment position (OM1807, OM1901, OM2003, OM2102, OM2212 and OM2213 for streaked shearwaters, and UM1803, UM1807, UM1901, UM1908, UM1913 and UM2203 for black-tailed gulls).We performed LOIO-CV on six test birds and calculated the F1-scores as described above.This was repeated 10 times with 10 random seeds for each of the conditions described below.
Data augmentation is a technique that transforms data to increase its quantity and variation.Data augmentation techniques We also performed a grid search experiment to understand how hyperparameters of data augmentation techniques (e.g. standard deviation parameter for scaling) influence the performance of DCL.
See Supplementary Experiment S1 in the supporting information for more details.
where ∈ 0, ∞ (mixup alpha hereafter) is a hyperparameter that we explored its impact in this study.The distribution of will be skewed near zero or one when mixup alpha is 0.1, while it will be uniform distribution when mixup alpha is 1.0 (Figure S6).Please refer to the original papers (Verma et al., 2019;Zhang et al., 2018) for more details.
We expected that manifold mixup to regularise the model, and smooth the decision boundaries between behaviour classes, and improve the classification accuracy of minor behaviour classes.To test the effects of manifold mixup, we implemented manifold mixup before the LSTM layer in the DCL and compared the following six conditions: no mixup and with mixup (mixup alpha = 0.1, 0.2, 0.5, 1.0 and 2.0) for both species in the same manner as described in the first paragraph of this section.Usually, the reweighted class probabilities are used; however, we applied the argmax function to the reweighted probabilities and subsequently fed the output into the cross-entropy loss function.This was done because the latter approach showed superior performance in our preliminary ∼ Beta( , ),
| Experiment 3: Model comparison
We compared the performance of seven deep learning models: CNN, LSTM, DCL, DCLSA, DCLSA-RN, Transformer and CNN-AE w/o (see Sections 2.2 and 2.5), following the evaluation methods described in Section 2.3.In Experiment 3, LOIO-CV was repeated for all individuals (i.e.28-fold for streaked shearwaters and 27-fold for black-tailed gulls).We applied random data augmentation and did not perform manifold mixup.
To compare deep learning models with classic machine learning models that require feature engineering, we implemented LightGBM (Ke et al., 2017) and XGBoost (Chen & Guestrin, 2016).Tree-based ensemble models, such as Random Forest and XGBoost, often outperform other classic machine learning models such as LDA or DT in various species (Nathan et al., 2012;Yu et al., 2021).LightGBM and XGBoost were implemented using lightgbm (version 3.3.3),xgboost (version 1.7.1) and scikit-learn (version 1.2.1).XGBoost models were trained on GPUs for fast training.The inputs of these models were 119 handcrafted features extracted from raw data.These features were designed based on previous studies (Fehlmann et al., 2017;Nathan et al., 2012;Yu et al., 2021).These features included the statistics (e.g.mean and variance) of the raw data, static components and dynamic components of each axis.They also included statistics of pitch, roll, ODBA, and main frequencies and their amplitudes.
Note that calculation methods for some features are not exactly identical to previous studies.See the source code and list of features (Table S6) for further details.We used the synthetic minority over-sampling technique (SMOTE) (Chawla et al., 2002) to obtain class-balanced training data.The parameters for both models are as follows: number of estimators was 10,000, 10 early stopping rounds and a learning rate of 0.01.
We also performed a grid search experiment to understand how hyperparameters associated with model architectures, such as the number of convolution layers or the number of attention heads, influence the model performance of DCLSA and CNN-AE w/o, using a smaller number of test individuals (same as Experiment 1) and three random seeds.See Supplementary Experiment S2 in the supporting information for more details.
| Experiment 1: Data augmentation and manifold mixup
We first examined the impact of data augmentation techniques on DCL (Figure 3).For streaked shearwaters, permutation and random data augmentation improved the macro F1-score (Figure 3a).
Random data augmentation improved the macro F1-score by an average of 4.7% compared with those without data augmentation.
Improvements by random data augmentation were observed in the class F1-scores for stationary, bathing, cruising flight, foraging dive and dipping (Figure 3b), while scaling, permutation, t-warp and rotation decreased the class F1-score for take-off, as did random augmentation which included these four types (Figure 3b).Example t-SNE visualisation of features for streaked shearwaters is shown in For black-tailed gulls, rotation and random data augmentation improved the macro F1-score (Figure 3c).Rotation may be useful for learning feature representations that are independent of device attachment positions.This is crucial when the dataset contains data from different attachment positions (e.g.abdomen and back).
Although the impacts of other data augmentation techniques except for rotation were smaller, random data augmentation contributed to the improvement of the macro F1-score by 12.8% in DCL and 12.3% in DCLSA (Figure S8c) on average.Improvements by random data augmentation were observed in the class F1-scores for stationary, ground active, bathing, passive flight and foraging (Figure 3d; When no data augmentation was applied, DCLSA outperformed DCL by 1.0% and 0.7% in terms of the macro F1-score for streaked shearwaters and black-tailed gulls, respectively.However, DCL achieved performance almost equivalent to or even better than DCLSA when random data augmentation was used (Figure 3; In addition, random data augmentation that used the top-ranked parameters slightly outperformed random data augmentation that used the default parameters.The results also showed that there were clear relationships between the parameters of some data augmentation types and the class F1-scores of several specific behaviour classes (e.g. the larger jittering parameters decreased the class F1-score of stationary behaviour).For more detailed results, see Experiment S1.
Figure 4 shows the effect of manifold mixup on DCL.
Manifold mixup improved the macro F1-scores by up to 2.5% (mixup alpha = 1.0) and 0.7% (mixup alpha = 0.2) for streaked shearwaters and black-tailed gulls, respectively.However, the combination of manifold mixup and random data augmentation did not further improve the performance.When random data augmentation was combined with manifold mixup, the models outperformed those with manifold mixup alone (Figure 4).These results indicated that the impact of random data augmentation was much higher than that of manifold mixup for our datasets.
Manifold mixup after the LSTM layer of DCL also improved the macro F1-scores by up to 2.0% (mixup alpha = 0.1) and 2.3% (mixup alpha = 0.2) for streaked shearwaters and black-tailed gulls, respectively; however, again, the improvements were smaller than those achieved with random data augmentation (Figure S9).
| Experiment 2: Pre-training of CNN-AE
Pre-training using unlabelled data did not improve model performance for either species.Rather, the condition 'w/o' (CNN-AE was trained with labelled data without pre-training) performed the best, and followed by 'w/', 'w/ soft', 'w/ hard' in decreasing order of performance (Figure 5).
| Experiment 3: Model comparison
A comparison of the macro and class F1-scores is shown in Figure 6.
| Experiment 1: Data augmentation and manifold mixup
Collecting and labelling large amounts of time-series sensor data is difficult; it is more difficult for humans, domestic animals and wildlife studies, in that order.Data augmentation techniques have been extensively studied for HAR (Um et al., 2017;Wen et al., 2021) and gradually for domestic animals (e.g.Eerdekens et al., 2020;Pan et al., 2023).This study explored and confirmed the effectiveness of data augmentation in wild animal behaviour classification using timeseries sensor data.
Experiment 1 indicated that each data augmentation type may have a positive or negative impact on each behaviour, and the impact may also vary depending on architecture; however, apply- training appears to improve overall performance.Combinations of data augmentation techniques can improve the model performance for HAR (Um et al., 2017).A recent bird-sound recognition study (Lauha et al., 2022) also demonstrated the effectiveness of random combinations of data augmentation techniques, although they were applied to spectrogram images.We believe that random data augmentation is effective against data shortages and imbalance problems in wild animal studies.
In addition to data shortages and class imbalance problems, devices, attachment positions and attachment procedures have an impact on acceleration data in bio-logging studies (Garde et al., 2022).
If a classification model is not robust to this noise, it may cause systematic biases that undermine the foundation of the research when biologists or ecologists utilise the models.Similar to the HAR study (Um et al., 2017), Experiment 1 also showed that differences in attachment position could be handled by data augmentation, rotation for black-tailed gulls in particular.
The results of Experiment S1 highlighted the importance of searching the better data augmentation parameters for different datasets, while implying that random data augmentation might be robust to parameter selection.The results also indicated that not only data augmentation types but also their parameter choices may have different effects depending on the nature of target behaviour class.See Experiment S1 for more discussion.
Although the model performance improved by manifold mixup for both species, the overall effects of manifold mixup were smaller than those of random data augmentation.These two techniques were expected to play common roles; however, the random data augmentation was more effective for our dataset, and their combi- However, some studies have advocated that it does not necessarily improve the generalisation performance of classification models in any case (Alberti et al., 2017;Le Paine et al., 2015).For instance, the effect of pre-training was significant when the ratio of unlabelled to labelled data was large (e.g.50:1), but the performance was poorer when the ratio was 1:1 (Le Paine et al., 2015).In our case, the amount of our unlabelled data was approximately 36 and 43 times larger than the labelled data for streaked shearwaters and black-tailed gulls, respectively; however, the pre-training of CNN-AE with unlabelled data did not improve performance, rather it degraded performance under some conditions.
One possible reason for this is the extreme imbalance in unlabelled data.Our labelled data were heavily class-imbalanced, but the unlabelled data could be even more imbalanced.This is because labelled data includes data collected by bio-loggers with AI, which can efficiently collect data on target behaviours (Korpela et al., 2020).
we could not use the WeightedRandomSampler of PyTorch in unsupervised pre-training as we did in supervised learning.
Therefore, the data in a training batch during pre-training are considered to be extremely imbalanced (e.g.mostly stationary and/or flying).This may also become a major problem when conducting self-supervised learning.In a recent HAR study (Yuan et al., 2022), for example, the acceleration data windows were sampled in proportion to their standard deviation during self-supervised learning.This approach would reduce the frequency of sampling small-amplitude acceleration data, which is prevalent in a large portion of real-world datasets.In our case, for example, reducing the sampling frequency of similar signals (e.g.stationary or flying) that exist in large numbers but are less informative may improve the results.
| Experiment 3: Model comparison
In Experiment 3, DCL slightly outperformed CNN and clearly outperformed LSTM for both species, indicating that adding an LSTM layer after CNN layers is also effective for wildlife behaviour classification, as shown for human datasets in Ordóñez and Roggen (2016).
DCLSA slightly outperformed DCL for black-tailed gulls, which is consistent with Singh et al. (2021), but not for streaked shearwa- ters.Yet, our data augmentation experiment (Experiment 1) on DCL and DCLSA revealed that adding a multi-head attention layer slightly improved the performance for both species, when no data augmentation was applied (Figure 3; Figure S8).This suggests that, for our datasets, both data augmentation and the additional multi-head attention layer have positive impacts, but the former may have a larger impact.Residual blocks with shortcut connection (He et al., 2016) in the DCLSA-RN may also slightly improve the performance, as shown in Figure 6.Transformer has achieved great success, especially in natural language processing (Vaswani et al., 2017), and is extensively used as the basis for well-known models.Although we used only the encoder block of transformer, it did not achieve a higher performance in this study or when used as a backbone network in contrastive learning in the HAR study (Qian et al., 2022).CNN-AE w/o performed comparably to CNN, probably because the encoder of CNN-AE w/o shares a very similar architecture with CNN, except for the max pooling layers that gradually compress the time dimension.
DCL, DCLSA and DCLSA-RN achieved slightly higher overall performance than the simple CNN, but did not show the great improvement in the class F1-score of complex behaviours, such as foraging of black-tailed gulls, that we had expected.Besides, these three models have more trainable parameters than CNN and the number of parameters is larger in this order (Table S4).When data augmentation is effective (e.g. to the extent that the performance difference be- than those previous studies (e.g.38 features in Nathan et al., 2012;25 in Fehlmann et al., 2017;78 in Yu et al., 2021).We also used SMOTE, which improved the macro F1-scores (Figure S14).The classic machine learning approach usually requires feature engineering, which often requires specialised knowledge and time.Our results indicate that deep learning may enable end-to-end classification of wildlife behaviour using time-series sensor data.
It should be noted that simply comparing the F1-scores in this study with those of previous studies is meaningless.This is because the target species, number and types of behaviours, data amount, evaluation methods, etc., have an impact on performance metrics.
If the target behaviours are basic, such as stationary, walking and running, the macro F1-score tends to be higher, even with a naive approach.In general, the greater the number of target behaviour classes and the greater the degree of class imbalance, the lower the macro F1-score would be.Regarding evaluation methods, some may use only the train/test or train/validation split (i.e. two datasets) rather than the train/validation/test split (i.e. three datasets); the former may tend to return a higher accuracy or F1-score if test or validation data are also used during training (e.g. for early stopping).
More importantly, if one randomly splits the time-series sensor data into training, validation and test data (e.g. a 7:2:1 random split), these three datasets will include data segments from the same individuals or the same behavioural sequences.To avoid the above problems, we recommend using LOIO-CV, which is stricter and more robust and thus tends to produce lower scores than the above evaluation methods.However, note that we calculated F1-scores by aggregating the prediction results of all the folds because calculating an F1score for each individual and behaviour class is not realistic when only a few individuals have completed sets of all target classes.
| Future directions
Finally, we discuss interesting future directions for the behaviour classification of wild animals using time-series sensor data.Although data augmentation is promising, searching for optimal data augmentation techniques and/or their combinations and parameters is time-consuming and requires considerable computational resources (see also Discussion of Experiment S1).Developing a method specifically for wildlife that automatically finds the optimal data augmentation techniques and their parameters would be interesting, as would other data augmentation approaches such as deep generative models (see Cubuk et al., 2020;Wen et al., 2021).
First
, we explored the effects of data augmentation and manifold mixup.Data augmentation refers to techniques that transform data to increase their quantity and variation.Manifold mixup (Verma et al., 2019) generates a new training instance (a set of new features and label) by mixing intermediate features and labels of randomly sampled two existing training instances in an intermediate layer (see Section 2.4 for more details).These techniques are considered to improve generalisation performance, robustness to various noises and recognition performance of minor classes, and are thus expected to overcome the above challenges.Second, we tested the effects of pre-training CNN-based Autoencoder (CNN-AE) with a large amount of unlabelled data, which is expected to be effective when using a small amount of labelled data.The CNN-AE can be either simply trained with labelled data or first pre-trained with unlabelled data and then fine-tuned with labelled data.Finally, we explored various deep learning model architectures: CNN, LSTM, DCL, DCLSA, ResNet version of DCLSA (DCLSA-RN), Transformer and CNN-AE.Convolution layers in CNN, CNN-AE and DCL-based models are good at extracting local, specific features or patterns.An LSTM layer in LSTM and DCL-based models can incorporate short-and long-term temporal dependencies, which seems essential for time-series sensor data.Multi-head attention layer (Vaswani et al., 2017) in DCLSA, DCLSA-RN and Transformer learns which parts of the data to prioritise, considering global information.
further details on model architectures, hyperparameters and the numbers of parameters.The implementations of the Transformer and CNN-AE were heavily based on those in Qian et al. (2022) but were slightly modified for this study.All deep learning models were implemented using Python (version 3.10.8)and PyTorch (version 1.13.1) on Ubuntu 18.04.6LTS.The deep learning models were trained using Docker (version 20.10.22),Kubernetes (version 1.26.0) and a GPU cluster (Table E 1 Deep learning model architectures: convolutional neural network (CNN), long short-term memory (LSTM), DeepConvLSTM (DCL), DeepConvLSTMSelfAttn (DCLSA), ResNet version of DCLSA (DCLSA-RN), Transformer and CNN-based Autoencoder (CNN-AE).Inputs were raw triaxial acceleration data.The features were fed into the flatten layer and the linear layer with the number of classes as the output dimension (the linear, dropout, flatten and linear layers for CNN-AE).are expected to help models avoid overfitting, make them robust to various types of noise in acceleration data and improve the classification accuracy of minor behaviour classes.We tested the impacts of the following data augmentation techniques: scaling, jittering, permutation, time-warping (t-warp) and rotation following Um et al. (2017).Scaling samples a scaling factor from a Gaussian distribution (mean = 1.0, standard deviation = 0.2) and multiplies the factor with input data, changing the scale of the acceleration signal.Jittering randomly samples noise signals from a Gaussian distribution (mean = 0, standard deviation = 0.05) and adds the noise to input data.Permutation randomly splits input data into short segments (maximum number of segments = 10) and changes their orders.T-warp stretches and warps the acceleration signal in the temporal dimension (see Supplementary Explanation S1).Rotation applies a rotation matrix to input data with a randomly selected angle ∈ [ − , ], around random axes in 3D space.An example visualisation of these data augmentation techniques is shown in Figure2and see the source code for more details.We implemented these data augmentation techniques followingQian et al. (2022) but modified the parameters of scaling and jittering for our data.We also implemented random data augmentation which randomly applies one of the six data augmentation types (i.e.none and the five data augmentation techniques) to each window in a training batch.We compared seven data augmentation scenarios (none, scaling, jittering, permutation, t-warp, rotation and random) using DCL and DCLSA.
is a data augmentation technique that generates a new training instance by mixing two existing training instances.Manifold mixup (Verma et al., 2019) performs mixup in an intermediate layer.Where x i , y i and x j , y j are intermediate features and labels of two example instances randomly sampled from a training batch, a set of new features and label x, ŷ are generated as below: Examples of data augmentation types (a) none, (b) scaling, (c) jittering, (d) permutation, (e) t-warp and (f) rotation on an 'active flight' window from a black-tailed gull.investigate whether the combination of data augmentation and manifold mixup can improve the model performance, we performed experiments with and without random data augmentation.To examine the impact of manifold mixup position in the DCL architecture on the prediction performance, we also implemented manifold mixup after the LSTM layer without data augmentation.2.5 | Experiment 2: Pre-training of CNN-AEWhen there is much more unlabelled data than labelled data, pretraining with unlabelled data (unsupervised pre-training) may be effective (e.g.LePaine et al., 2015).We tested the impact of unsupervised pre-training on CNN-AE using 1,546,440 and 1,398,580 instances from 33 streaked shearwaters and 29 black-tailed gulls, respectively (more than 36 and 43 times greater than the number of labelled data).We used the mean squared error to calculate the reconstruction loss during pre-training.We used the same optimiser and scheduler for supervised training.The extracted unlabelled windows were randomly shuffled for each individual.The batch size was 600.The maximum number of epochs was 100 and the patience parameter for early stopping was 10, but the median value of the actual number of epochs for unsupervised pre-training was 22.0 and 25.5 for streaked shearwaters and black-tailed gulls, respectively.We compared the following four conditions: 'w/o', 'w/', 'w/ soft' and 'w/ hard'.The 'w/o' indicates that the model encoder was trained using only labelled data and cross-entropy loss function without pretraining.The 'w/' indicates that the model was pre-trained with unlabelled data, and then simply fine-tuned.The 'w/ soft' or 'w/ hard' indicates that the learning rate for the encoder parameters during the fine-tuning phase was a smaller value (0.00001) or frozen, respectively.For all conditions in Experiment 2, we applied random data augmentation and did not perform manifold mixup during unsupervised pre-training or supervised training.
Figure
Figure 3e,f (for one test individual) and Figure S7 (for six test individuals).Similarly, random data augmentation was effective for DCLSA (Figure S8a,b), improving the macro F1-score by an average of 3.3%, but data augmentation types except for jittering decreased the class F1-score of take-off, as did random data augmentation.For both DCL and DCLSA, rotation had a negative effect on the class F1score for foraging dive (Figure 3b; Figure S8b), indicating that postural information was critical for this behaviour and may be obscured by rotation.
Figure
Figure S8d).Example t-SNE visualisation of features for black-tailed gulls is shown in Figure 3g,h (for one test individual) and Figure S7 (for six test individuals).
Figure
Figure S8).Experiment S1 showed that data augmentation parameters influence the model performance and the top-ranked parameters were different between the two species except for t-warp and rotation.
Figures S10 and S11 show comparisons of the confusion matrix of each model for streaked shearwaters and black-tailed gulls, respectively.For the feature importance of XGBoost, see Figure S12.The impact of the number of features and SMOTE on XGBoost is shown in Figures S13 and S14, respectively.
ing random data augmentation to each sample during mini-batch F I G U R E 5 Impacts of unsupervised pre-training on CNN-based Autoencoder (CNN-AE) for streaked shearwaters (a, b) and black-tailed gulls (c, d).The following four conditions were compared: 'w/o' (pre-training), 'w/', 'w/ soft' (smaller learning rate for encoder parameters) and 'w/ hard' (with encoder parameters frozen).Comparison of model performance (mean and standard deviation of macro and class F1-score) for streaked shearwaters (a, b) and black-tailed gulls (c, d).Confusion matrix of ResNet version of DeepConvLSTMSelfAttn (DCLSA-RN) for streaked shearwaters (a) and black-tailed gulls (b) when the random seed was 0.
nation did not contribute to further improvement.The effects may vary depending on the dataset and model architecture, and manifold mixup is worth trying in different settings.4.2 | Experiment 2: Pre-training of CNN-AEUnsupervised pre-training has been generally considered to improve the model performance in image classification (LePaine et al., 2015).
tween DCL and DCLSA almost disappears), simpler models may be a better choice in terms of the balance between the performance and training time and/or computational cost.Experiment S2 suggested the importance of performing model hyperparameter tuning for different datasets.However, hyperparameter tuning requires enormous time and computational resources, and see Experiment S2 for more discussion on this point.Using only raw triaxial acceleration data as inputs, deep learning architectures, such as CNN, DCL, DCLSA and DCLSA-RN, outperformed LightGBM and XGBoost, which used 119 handcrafted features.Note that our feature list covers most features used in the previous studies we referenced, and the number of features is larger Domain adaptation techniques such as domain adversarial neural networks(Ganin et al., 2016) can be explored to further reduce F1-score variations between individuals.The development of a new model architecture for more specific tasks(Xia et al., 2022; Yoshimura, Maekawa, et al., 2022), the use of a specific loss function to deal with class imbalance (e.g.Park et al., 2021) and the use of multimodal sensor data (e.g.acceleration, gyroscope, magnetometer, GPS and depth) are also exciting approaches.We trained our deep learning models using relatively large datasets; however, such a situation may be rare in wild animal research.In addition, labelling enormous amounts of sensor data is labour intensive and time-consuming.In data-scarce scenarios, transfer learning and self-supervised learning may be promising, in addition to data augmentation.For example, in transfer learning, a model can be pre-trained on a large dataset of different individuals from different study sites or different but similar species and fine-tuned on the target data.Self-supervised learning, such as contrastive learning(Chen et al., 2020;Qian et al., 2022), uses unlabelled data to train the feature extractor, and then, the classifier or whole network can be finetuned with fewer labelled data.Contrastive learning such as SimCLR with ResNet-50 as the backbone network has succeeded in image classification task(Chen et al., 2020) and an exploratory study on contrastive learning has already been conducted in HAR using acceleration data(Qian et al., 2022).These approaches have the potential to be not only effective against data shortages and class imbalance problems but also robust against various types of noise.If established, researchers can easily use behaviour classification techniques for various animals without much effort to collect and label the data.5 | CON CLUS IONSAcceleration-based behaviour classification using deep learning models has only been extensively studied in humans and domestic animals and has rarely been applied to wildlife research.Challenges include data shortages, class-imbalanced problems, various types of noise due to differences in individual behaviour and where the loggers were attached, and complexity in acceleration data due to difficult animal-specific behaviours.This study explored the effectiveness of data augmentation and manifold mixup, pre-training of CNN-AE with unlabelled data and state-of-the-art deep learning model architectures to overcome these challenges.We demonstrated that data augmentation is effective and that deep learning models such as DCL, DCLSA and DCLSA-RN are promising for wildlife behaviour classification using time-series sensor data.We believe that deep learning approaches have great potential for development, and we discussed their future directions.These include more advanced approaches for data augmentation, domain adaptation, model architectures and loss functions development, the use of multimodal sensor data, transfer learning and self-supervised learning.We hope that this study will fill the gap between accelerationbased behaviour classification studies of wild animals and humans or domestic animals and stimulate the development of deep learning techniques in behaviour classification using time-series sensor data for wild animals.
Figure S1 :
Figure S1: Behaviour class label distribution for streaked shearwaters and black-tailed gulls.
Figure S2 :
Figure S2: Behaviour class label count by individual for streaked shearwaters.
Figure S3 :
Figure S3: Behaviour class label count by individual for black-tailed gulls.
Figure S4 :
Figure S4: Visualisation of typical windows of six behaviour classes for streaked shearwaters.
Figure S5 :
Figure S5: Visualisation of typical windows of six behaviour classes for black-tailed gulls.
Figure S7 :
Figure S7: Example t-SNE visualisation of features (i.e.features before the output layer) when no or random data augmentation was applied (only when the random seed = 0) during the training of DeepConvLSTM (DCL) models, for streaked shearwaters (SS) and black-tailed gulls (BG).
Figure S12 :
Figure S12: Feature importance of top 30 features in XGBoost for streaked shearwaters (a) and black-tailed gulls (b).
Figure S13 :
Figure S13: Comparison of performance when different numbers of handcrafted features were given as inputs (25, 78 and 119) to XGBoost for streaked shearwaters (a, b) and black-tailed gulls (c, d).
Figure S14 :
Figure S14: Impacts of Synthetic Minority Over-sampling Technique (SMOTE) on XGBoost with 119 features as inputs for streaked shearwaters (a, b) and black-tailed gulls (c, d).
Figure ExS1- 6 :
Figure ExS1-6: Streaked shearwaters' individual differences in the mean of the maximum difference for each axis for all windows of each behaviour class.
Figure ExS1- 7 :
Figure ExS1-7: Black-tailed gulls' individual differences in the mean of the maximum difference for each axis for all windows of each behaviour class.
Table S6 :
A list of 119 features used in this study for LightGBM and XGBoost.
Table ExS2 -
1: Impacts of hyperparameters on the macro F1-scores of DCLSA for streaked shearwaters.
Table ExS2 - 2 :
Impacts of hyperparameters on the macro F1-scores of DCLSA for black-tailed gulls.
Table ExS2 - 3 :
Impacts of hyperparameters on the macro F1-scores of CNN-AE w/o for streaked shearwaters.
Table ExS2 - 4 :
Impacts of hyperparameters on the macro F1-scores of CNN-AE w/o for black-tailed gulls. | 10,308.2 | 2024-02-21T00:00:00.000 | [
"Environmental Science",
"Computer Science",
"Engineering"
] |
The Design and Inspection of the Thermocouple in the Breakout Prediction System
Using practical solutions and exuberant experience in installation and inspection as the background and basic theories, this essay has analyzed and demonstrated the optimal design and installation and inspection techniques of the thermocouple on the crystallizer in the breakout prediction system. The content of this essay is informative and specific, and it has a higher practicality in related field of casting breakout prediction techniques.
Introduction
In the breakout prediction system of casting steel, the method of measuring thermocouple Temperature is applicated, nearly by promoting the accuracy.the major technology is to install thermocouple in the wall of crystallization so as to collect, test, display and handle easy Temperature changing data of the local areas of the crystallization, and make the corresponding alarm of molten steel leakage by the seriousness (yellow or red card) to the operating personnel, finally finish the process of automatically control and guide worker to product and operate, which can decrease the (number) chance of molten steel leakage and avoid an accident.
The optimized design of planned spacing in the crystallization
K-thermocouple in the wall of Crystallization, its planned spacing, that is, the measuring spacing of thermocouple Temperature matches the spread speed at a mouthful of billet shell in molten steel leakage or not, will impact directly the accuracy and the response to speed of steel leakage forecast system.Therefore, first of all, the planned spacing should be calculated and optimized.
The spreading speed of the breakup of Casting billet shell is related to the casting speed of continue caster.When the casting speed of continue caster is v c , for the horizontal speed of the fracture billet shell is v x , the vertical speed is v y , the rate to the casting speed (experience value 0.55~0.90)has: A planned horizontal spacing of the crystallization of copper is w x , the vertical space is w y , is the angle to billet shell broken line with the horizontal line, as shown in figure 1.
For the time for broken line of billet shell extending a temperature spacing in horizontal and vertical is t x and t y , and the horizontal vertical spread time: The Vertical spread time: There are two ways to get value: (1) According to measured value remaining in billet shell line of crystallization, when the range 0.7~ 1.6 m /min, measured value is 30 0 ~45 0 ; (2) With experimental value t x and t y , the v x and v y in the following formula: is about 20 o ~45 o , for example, when the planned horizontal spacing of the crystallization copper w x is 220 mm, the vertical space w y is 220 mm, the horizontal and vertical spread time difference of the molten steel leakage: Horizontal spread time for a space: Vertical spread time for a space: As tan= t x /t y ,the relationship of horizontal and vertical spreading speed: V x (2.14~1)V y Thus, the speed of delivery of horizontal temperature is more than the vertical speed, when the plan time interval is the same, as forecasting the adhesion, the temperature of the delivery speed is quicker than the vertical speed.
To enhance Accuracy of the steel leakage forecasting system, The temperature should be detected on both directions to reduce failures in reporting or missing; being optimized, the spread time in both directions must be equal, that is, t x = t y , the relationship as shown in the flowing: W y =W x tan This is theoretical basis in calculating the planned spacing while the thermocouple is installed in crystallization.
The selection of thermocouple
Choosing a right thermocouple will not only get the exact temperature value and conformity products, but also may reduce material consumption to save money and to ensure the quality of products.in order to guarantee the thermocouple working Reliably and safe in bad circumstances (working environment temperature range -20 O C~200 O C, with an amount of water vapor and a small amount of oil fields, and space limitation, together with the vibration), our system uses non-standard K-thermocouple.
As in figure 2, it consists of eight parts with firm structure, magnesium insulation resistance is good and has not short circuit; the stainless steel casing can isolate fully harmful media, and compensates welding connection between the wire and free port of the thermocouple and processes with sweating; the front are screwed with the thread of more than two remaining teeth, and outer seal nut matches with PTEE seal and bolts of double head to ensure gathering the temperature data of the lower in crystallizer.
Technical parameters instance from all parts of the thermocouple is showed as table1~4.
Pay attention that length of bolts hole and aperture from different crystallization may vary, or is redesigned or inflexible.It should consider installing process of the thermocouple and calculate the length of thermocouple and spring accurately etc. the length allowance of thermocouple is less than 1mm, and that of the spring is less than 3 mm.
Install processes of thermocouple
To ensure that the Indicators of thermocouple meet the requirement, the crystallization must be reformed to ensure the quality of install process.
(1) Design features of crystallization the Crystallization sheet is panel veneer by roughly machining with 1.5 mm machining allowance, Shown in figure 3, it is a 46.5190900 section of copper sheet, on the central position there is a hole with diameter 4.5 mm and depth 7 mm and it is a gathering point of thermocouple temperature.
(2) Main process points of Crystallization installation Support the copper to the board by the expansion stud with stated tighten moment, through sealing test of 1Mpa water pressure and after conformity process it to stated size; all impurities caused by processing must be cleaned up (without water and oil )to ensure thermocouple accuracy.In addition, a hole is processed on the center of bolt to ensure thermocouple goes through it, and the thread linking with thermocouple is also processed in the back of it, and then hardened and tempered.
Install the thermocouple sensors into the bottom of 4.5 mm hole of the copper and tight it; and bolts are tighten and sealed.This seal can prevent the water steam goes into to influence measure temperatures of thermocouple.
After installing thermocouple in Crystallization, all the signal cables of thermocouple are connected with corresponding relay cases hanging up to the crystallized with compressed air of 3kg pressure protecting; in addition, the planned time, to guarantee installing thermocouple that there is a position of thermocouple in the water tank of the crystallization so as to weld stainless steel pipe with thermocouple going through.
Thermocouple compensate wires connect with the remote compensate wires with switching quickly device in the framework of the Crystallization.Before compensate wires connecting transfer box, they should installed the casing against high temperature for their being exposed outside of the Crystallization to prevent steel leakage and contacting steel water.
A vent should be set in the transfer box to blow wind for protecting cases and keep dry inside it.A shield tube (diameter 40 mm) should be set in the closure of compensate wires and then tight them with a setting or throat to avoid compensate wires nudity outside when it slips.We must pay attention to the model and polarity using Thermocouple compensate wires, the Temperature in the connection of Compensate wires and thermocouple could not more than 100 O C. It will create the major errors of thermocouple when thermocouple is set error, for example, installing position and inserting depth of thermocouple could not reflect the real temperature of copper sheet.Therefore, inserting depth of thermocouple should be at least 8 to 10 times as the diameter of the protection case; for there is no insulation material between protection tube of thermocouple and wall so that heat t gas overflows and cooling water invades it should be sealed with PTEE between protection tube of thermocouple and Crystallization hole.
At the same time, we should try to avoid strong magnetic and electric field and not to put thermocouple and power cable in one conduit to avoid the error caused by the disturbance.replacing thermocouple when it is damaged, we should install it in the hole and tighten it with a wrench bolt; we push PTEE sealing ring the gap between tapered bolt and compensation wire and tighten sealing nut properly without tighting seriously so as to damage PTEE sealing ring.
Detect way to thermocouple
In order to guarantee installing quality and eliminate the error signal during installation process, we must test all the thermocouple.Detection methods off-line: mark each position of n thermocouple of copper sheet in turn, and record the temperature that thermocouple shows at ambient temperature, for the number of thermocouple and the temperature are recorded as T 11 , T 12 ,…, T 1n .then adjust gas welding to the neutral flame to bake each mark point about 15 seconds; and record the temperature that thermocouple shows when baked, for they are T 21 , T 22 ,…, T 2n .
In addition, in order to reduce detecting influence among each mark point, the operation order of roasted point can test in odd or even sequence so that we can take test point with the greatest distance and not to miss detecting point with minimum errors.
We can determine temperature difference whether to meet the requirements of formula .inaccordance with the following three conditions to analysis data.
Reason for temperature difference, the first is manual operation; the second is device differences, and the third is system influence.Therefore, this is not only indicators of quality of design and installation, but also that of system.For T ij point being discontent to condition it is not difficult to identify accurately the corresponding thermocouple and line fault, before using the device, we can eliminate it to security steel leakage forecast system to display.
Inspection on-line: The testing error of temperature will be larger because of device and material being polluted, deformation, oxidation insulation resistance beyond production and it will be more serious in high temperature.This not only causes wastage of hot electric potential, but also introduces interference; therefore, the error of resulting temperature is sometimes up to 100 degrees.If we use computer forecast system of steel leakage by systems, it can scan and analysis point T ij timing, and can judge whether or not the value of point T ij changes and range ability is in the normal range.If there are exceptions, the operator will shield the corresponding monitoring point, or automatic shield corresponding monitoring point.At the same time, the system prompts running maintenance personnel spot checks in time in order to achieve inspection capabilities on-line automatic.
Conclusion
In forecasting system of casting steel leakage, making design, installation and testing of thermocouple well would greatly enhance reliability of system, reduce the maintenance work greatly, and make forecasting of system steel leakage productive.(Chapter 3).
Figure 1 .
Figure 1.Schematic diagram of fracture line extension bond
Table 1 .
The components
Table 4 .
Technical specifications of Connection (with socket device) and compensate wire | 2,594.2 | 2010-10-03T00:00:00.000 | [
"Materials Science"
] |
Heading Estimation for Pedestrian Dead Reckoning Based on Robust Adaptive Kalman Filtering
Pedestrian dead reckoning (PDR) using smart phone-embedded micro-electro-mechanical system (MEMS) sensors plays a key role in ubiquitous localization indoors and outdoors. However, as a relative localization method, it suffers from the problem of error accumulation which prevents it from long term independent running. Heading estimation error is one of the main location error sources, and therefore, in order to improve the location tracking performance of the PDR method in complex environments, an approach based on robust adaptive Kalman filtering (RAKF) for estimating accurate headings is proposed. In our approach, outputs from gyroscope, accelerometer, and magnetometer sensors are fused using the solution of Kalman filtering (KF) that the heading measurements derived from accelerations and magnetic field data are used to correct the states integrated from angular rates. In order to identify and control measurement outliers, a maximum likelihood-type estimator (M-estimator)-based model is used. Moreover, an adaptive factor is applied to resist the negative effects of state model disturbances. Extensive experiments under static and dynamic conditions were conducted in indoor environments. The experimental results demonstrate the proposed approach provides more accurate heading estimates and supports more robust and dynamic adaptive location tracking, compared with methods based on conventional KF.
Introduction
The expansion of location-based services (LBS) and applications has led to extensive interest in ubiquitous localization which may rely on widely used smart phones. The rich sensors embedded in smart phones support vary types of localization techniques, such as cellular localization, WiFi localization, vision-based location tracking, micro-electro-mechanical system (MEMS) sensors-based pedestrian dead reckoning (PDR), etc. However, a stand-alone technique cannot satisfy all positioning demands of LBS. For example, Global Navigation Satellite Systems (GNSS) cannot work in blocked regions, and WiFi localization systems (WLS) are limited by the coverage of WiFi signals, etc. Among these techniques, PDR is of great importance that it can flexibly link up different absolute positioning systems (such as GNSS, WLS, etc.) to achieve ubiquitous location provision for LBS. However, as a kind of relative localization method, it suffers from the problem of location error accumulation, and therefore cannot hold its performance continuously. Thus, it is essential to improve the tracking performance of the PDR method. Although external techniques, such as indoor graph matching [1][2][3], WiFi localization [3][4][5][6][7], visible light positioning [8] etc. have been considered to assist PDR, it is crucial to enhance its own performance in several aspects, such as speed estimation, heading determination and position calculation, the errors of which can propagate and can result in Actually, besides the above presented solutions, location accuracy of conventional filters can also be improved by using adaptive methods. For example, Ding et al. [18] proposed a process noise scaling algorithm for autonomously tuning the process noise covariance to the optimal magnitude. Hu et al. [19] investigated two adaptive algorithms which were based on fading memory and variance component estimation respectively, and found that both algorithms perform better than conventional KF, and the variance component estimation filter achieves the best positioning accuracy. Li et al. [20] proposed an effective adaptive Kalman filter for attitude and heading estimation. When filtering, the noise variance matrix R is tuned by a three-segment function that is constructed depending on the level of acceleration. Zheng et al. [21] proposed a robust adaptive UKF with a two-step adaptive scheme. First, an innovation-based statistic is used to identify model errors, and then if model errors exist, two adaptive factors are applied to control the noise covariance matrices Q and R by balancing the last noise covariance matrices and the estimated ones. In summary, there are three kinds of adaptive filtering algorithms [18,22], such as the covariance scaling-based adaptive filter [19,21,[23][24][25][26][27][28], the multi-model adaptive estimation-based filter [29], and adaptive stochastic modelling-based filter [18,20,30,31]. Moreover, in order to control the influence of measurement outliers, Yang et al. [32] combined robust estimation and adaptive filtering and proposed the theory of adaptively robust Kalman filtering for kinematic navigation and positioning. Yang also summarized the models and the judging statistics for constructing adaptive factors systematically, and explained the relations between their proposed algorithm and other filters in detail. Although these adaptive filtering algorithms have been widely applied in various fields, the application in PDR has been rarely investigated.
Considering pedestrians' complex walking patterns, it is challenging to track or position them with their smart devices (smart phone, smart watch, etc.). As a result, in order to improve the tracking performances of PDR, a method based on robust adaptive Kalman filtering (RAKF) is proposed for heading estimation. Outputs from gyroscope, accelerometer and magnetometer sensors are used. To resist the negative impacts from measurement outliers and state model disturbances, a maximum likelihood-type estimator (M-estimator)-based model is used in combination with an adaptive factor. Generally, the contributions of our work can be summarized as follows: • A heading estimation approach based on RAKF is proposed for PDR. Compared with the conventional KF-based approach, the proposed one uses an M-estimator-based model to control measurement outliers, and employs a state discrepancy statistic-based adaptive factor to resist the negative impacts of state model disturbances. • Static tests were conducted, and the results indicate the advantages of our proposed approach over the conventional KF-based approach are faster converging speed, and more accurate estimation. Dynamic tests were carried out, and results of PDR demonstrate that our proposed approach provides more accurate and robust estimates, compared with the conventional KF-based approach. • It is found that the proposed approach handles the issue of sudden turn in pedestrian location tracking quite well, and alleviates the problem of error accumulation effectively.
In the rest of this paper, we first present the procedure for heading estimation by using smart phone-embedded MEMS sensors in Section 2. After that, we explain the proposed RAKF algorithm in detail in Section 3. In Section 4, we show the experiments and results. At last, we draw conclusions and future work in Section 5.
Heading Estimation for PDR Based on Smart Phone-Embedded MEMS Sensors
Low cost MEMS sensors embedded in smart phones, such as accelerometers, magnetometers, and gyroscopes provide raw data for pedestrian speed estimation and heading estimation. In this paper, we assume that the heading of a pedestrian and that of his/her smart phone coincide. Thus, only the heading of the smart phone needs to be determined. As Figure 1 presents, raw data from the three sensors are used in two ways, the acceleration and magnetic field data are combined to calculate absolute headings, and the angular rate is used to integrate relative headings. The two kinds of headings are then fused using a filtering algorithm to obtain optimal values which may be used iteratively in the angular rate integration. headings are then fused using a filtering algorithm to obtain optimal values which may be used iteratively in the angular rate integration.
Heading Representation and Determination
Attitude and heading for a rigid body are always handled together. To represent the attitude heading, we define an orthogonal body frame (X-Y-Z) B in which Y and Z axes link up with the forward and up directions respectively, and X axis points to the right. Commonly, the attitude is determined by the rotation matrix with respect to the ENU (East (X)-North (Y)-Up (Z)) frame (also named navigation frame N). Define vector n X in frame N, and the corresponding vector b X in frame B, the mapping between the two vectors can be expressed as: where, b n C represents the rotation matrix from the frame N to the frame B. To be specific, suppose the frame N first rotate around Z axis with an angle ψ, and then rotate around X axis about an angle θ, and finally rotate around Y axis with an angle φ, the rotation matrix b n C will be calculated as: and it can be further written as: According to the definition of body frame, ψ, θ, and φ are called heading, pitch, and roll angles respectively. It is noticed that b n C will be different with respect to other rotational orders [33]. Since Euler angles have the problems of singularity and lower computation efficiency [14], quaternion is designed to replace it for attitude representation. A quaternion q is a 4-tuple: where 0 q is the scalar part, and 1 2 3 T q q q e denotes the vector part. In this paper, unit quaternion that is with the constraint of unity norm: is used. Likewise, a unit quaternion can also be used to represent the attitude of a rigid body. Consider the vectors defined above, the mapping can be expressed as [34,35]:
Heading Representation and Determination
Attitude and heading for a rigid body are always handled together. To represent the attitude heading, we define an orthogonal body frame (X-Y-Z) B in which Y and Z axes link up with the forward and up directions respectively, and X axis points to the right. Commonly, the attitude is determined by the rotation matrix with respect to the ENU (East (X)-North (Y)-Up (Z)) frame (also named navigation frame N). Define vector X n in frame N, and the corresponding vector X b in frame B, the mapping between the two vectors can be expressed as: where, C b n represents the rotation matrix from the frame N to the frame B. To be specific, suppose the frame N first rotate around Z axis with an angle ψ, and then rotate around X axis about an angle θ, and finally rotate around Y axis with an angle φ, the rotation matrix C b n will be calculated as: and it can be further written as: cos ϕ cos ψ − sin ψ sin θ sin ϕ cos ϕ sin ψ + cos ψ sin θ sin ϕ − cos θ sin ϕ − sin ψ cos θ cos θ cos ψ sin θ sin ϕ cos ψ + sin ψ sin θ cos ϕ sin ϕ sin ψ − cos ψ sin θ cos ϕ cos θ cos ϕ According to the definition of body frame, ψ, θ, and φ are called heading, pitch, and roll angles respectively. It is noticed that C b n will be different with respect to other rotational orders [33]. Since Euler angles have the problems of singularity and lower computation efficiency [14], quaternion is designed to replace it for attitude representation. A quaternion q is a 4-tuple: where q 0 is the scalar part, and e = q 1 q 2 q 3 T denotes the vector part. In this paper, unit quaternion that is with the constraint of unity norm: is used. Likewise, a unit quaternion can also be used to represent the attitude of a rigid body. Consider the vectors defined above, the mapping can be expressed as [34,35]: where ⊗ indicates the quaternion multiplication, q −1 is the inverse of the quaternion q: According to the matrix form of quaternion multiplication [34], (6) can be expanded as: where M(q) is the quaternion matrix function [34], and M(q) is its conjugate form. At last, we can get: A similar equation as (1) can be derived from (9): where C b n (q) is the rotation matrix formed by using quaternion: Inspection of (3) and (11) yields the calculation of the attitude heading:
Heading Estimation Using Acceleration and Magnetic Field
Having defined how to represent headings, accelerations and magnetic fields can be used to estimate meaningful headings for PDR.
Magnetometer Calibration
Magnetometers are essential for estimating absolute orientation, however, they often lack calibration, so that the outputs are easily contaminated by hard iron, soft iron, and scale factor errors. These errors can bias the magnetometer outputs, or be superimposed on the outputs. Methods for removing the negative impacts caused by these errors are needed. In this paper, we assume that the outputs of the magnetometer in a smart device are mainly corrupted by the hard iron and scale factor errors. Thus, a method that recovers the locus of error-free magnetic field measurements as Figure 2a presents from an altered locus as Figure 2b presents is applied. In general, the procedure can be summarized as follows: (1) Constructing an ellipsoid model We can see from Figure 2b that the magnetic field measurements at a given geographical location without calibration approximates an ellipsoid, thus an ellipsoid model that can adjust the bias and non-uniform scale is constructed: where mx, my, and mz denote the raw magnetometer measurements of a device in its body frame, sfx, sfy, and sfz denote the scale factors, (2) Estimating the parameters of the model To fit the best ellipsoid and to estimate the six parameters accurately, enough measurements that span the entire Euler angle space at a given location should be collected. With the collected data, a least square (LS) estimation algorithm can be used to approximate the model. Detailed implementation of the LS algorithm refers to [36].
Heading Calculation
Once the magnetometer is calibrated, absolute headings with better accuracy can be obtained. Quaternion heading qam can be directly derived from acceleration vector and calibrated magnetic field vector m by solving the Wahba's problem [37]. Valenti et al. [13] proposed to decompose qam into two quaternions, qa and qm which are determined by accelerations and magnetic field, respectively: In general, the procedure can be summarized as follows: (1) Constructing an ellipsoid model We can see from Figure 2b that the magnetic field measurements at a given geographical location without calibration approximates an ellipsoid, thus an ellipsoid model that can adjust the bias and non-uniform scale is constructed: where m x , m y , and m z denote the raw magnetometer measurements of a device in its body frame, sf x , sf y , and sf z denote the scale factors, ∆m x0 , ∆m y0 , and ∆m z0 denote the hard iron-caused biases, R denotes the ellipsoid radius.
(2) Estimating the parameters of the model To fit the best ellipsoid and to estimate the six parameters accurately, enough measurements that span the entire Euler angle space at a given location should be collected. With the collected data, a least square (LS) estimation algorithm can be used to approximate the model. Detailed implementation of the LS algorithm refers to [36].
(3) Correcting the magnetic field measurements With the estimated two tuples of parameters, magnetometer outputs can be calibrated. Raw measurements m (m x , m y , m z ) are first shifted according to a vector ∆m (∆m x0 , ∆m y0 , ∆m z0 ). Then, the measurements are scaled depending on a vector s (1 + sf x , 1 + sf y , 1 + sf z ). Finally, we will obtain calibrated measurementsm (m x ,m y ,m z ).
where C sf = s · I 3×3 denotes a scale transformation matrix.
Heading Calculation
Once the magnetometer is calibrated, absolute headings with better accuracy can be obtained. Quaternion heading q am can be directly derived from acceleration vector and calibrated magnetic field vectorm by solving the Wahba's problem [37]. Valenti et al. [13] proposed to decompose q am into two quaternions, q a and q m which are determined by accelerations and magnetic field, respectively: where a x , a y , and a z denote the accelerometer measurements of a device in its body frame, λ 1 = a z +1 2 , and λ 2 = 1−a z 2 . Then the calibrated magnetic field vector is rotated using the quaternion q a : where l is the rotated magnetic field vector. Then the quaternion q m is futher derived from l: where l x and l y denote X and Y components of l, Γ = l 2 x + l 2 y .
Heading Estimation Using Angular Rate
The angular rates output by the gyroscope can also be used to estimate quaternion attitude headings which represent the changed value relative to the initial quaternion, and the estimation is based on a differential equation: where s ω is constructed with the gyroscope output: where ω x , ω y , and ω z are the X, Y, and Z components of the gyroscope output in a device's body frame. In order to obtain the results at different time instants, the discrete form of (21) should be used: Using the quaternion matrix function, (23) can be further expanded as: where M(s ω,t ) is the quaternion matrix function of s ω,t . The above two methods for heading estimation can be combined to produce more robust and accurate results, and a frame of KF is applied in this paper. In the following, the process of RAKF is explained in detail.
State and Measuring Models for Heading Estimation
According to the above equations for calculating quaternion headings, the state and measuring models are designed as follows: Let X k represent the state at time k, and F k = I 4×4 + ∆t 2 M(s ω,k ), the state model can be formed as: where w k denotes the model noise.
Let Z k denote the measurement at time k: where q am,k is calculated using (17). To avoid the discontinuities of headings caused by (18) and (20), Z k has to be changed by checking the difference between current predicted state X k and itself: where d k = Z k · X k is the dot product of two quaternions. A linear function is enough to construct the measuring model: where H is an identity matrix I 4×4 , and v k denotes the model noise.
Since a unit quaternion is used in our designed algorithm, the initial state value X 0 = q ω,0 and all the measurements Z 1:k = {Z i , i = 1, · · · , k} should be normalized before they are inputted into the process of filtering. Given the models for heading estimation are designed above, a RAKF algorithm is used for obtaining optimal results. The algorithm consists of two procedures, predicting and updating which are presented in the subsequent sections.
•
Computing the predicted stateX k|k−1 According to the state model in (25), the predicted state can be calculated as: • Computing the predicted state error variance matrixP k|k−1 : where Q k is the state model noise covariance matrix.
Updating
• Computing the gain matrix K k In the RAKF, the computation of K k is different from conventional implementation in the KF. To control the outliers in the measurements, an M-estimator-based robust estimation of the equivalent weight matrix P k of the measurements is used. Among several formatting methods, we choose Huber's approach [32]. Then, the diagonal p k ii and non-diagonal p k ij elements of P k are determined as follows: where σ ii and σ ij are diagonal and non-diagonal elements of the measurement noise covariance matrix R k . c is a constant, and it is usually within the range of [1.3, 2.0]. r k i denotes the standard residual, and it is calculated by: where r k i is the residual of the measurement z k i , and σ r k i is the mean deviation of r k i : whereẐ k|k−1 is the predicted measurement which is calculated depending on the measuring model in (28): In order to control the influence of dynamic model error, an adaptive factor is applied for correcting the predicted state error variance matrixP k|k−1 . Before calculating the adaptive factor, a kind of state discrepancy statistic for judging the state model errors [26,32] is chosen as: where tr(·) stands for the trace of a matrix. X k is a least-square estimator of the state: where P k = R −1 k denotes the weight matrix. To avoid measurement outliers, equivalent weight matrix P k can be applied in (37).
With the chosen statistic ∆ X k , a two-segment function is applied for calculating the adaptive factor: where c0 is a constant which can be tuned depending on practical implementation. Having weaken the negative impacts from measurement outliers and state model errors, a proper gain matrix can be obtained: • Computing the corrected stateX k :X The state needs to be normalized further: • Updating the state error variance matrixP k : Using the equations from (29) to (42), headings can be estimated iteratively in the frame of RAKF.
Experimental Setup
To evaluate the proposed heading estimation approach, we conducted extensive tests in two situations, static and dynamic. In the static tests, a Xiaomi 5 smart phone was put still on a table in an office, collect data covering more than ten minutes. All the heading results calculated from accelerations and magnetic field were averaged to obtain the reference heading value. Additionally, dynamic tests were performed in the corridors on the fifth and the seventh floors in the research building for the School of Geography and Planning at Sun Yat-sen University. The floor plans are presented in Figure 3a,b, respectively. We employed five persons to participate in collecting data. The participants had different heights, different weights, and different walking postures. Table 1 lists the detailed information of each participant. They were all asked to hold the Xiaomi 5 smart phone on their chest to collect data along labeled traces which are marked by black lines in Figure 3a,b respectively. For the tests in the first site on the fifth floor, three participants were involved in collecting data where they walked back and forth twice, and finally returned back to the start point after three sharp turns. The length of the location traces that they walked is as long as 150.4 m each. Whereas for the tests in the second site on the seventh floor, all of the five participants walked from the start point to the end point once. The length of each traces is about 68 m. A Xiaoyi 4kplus sports camera was used to record their walking, and relative accurate positions were derived from the videos to construct the reference traces. Conventional KF is used as the baseline for comparisons, and the noise covariance matrix of both the measurements and the states are set differently in the static and dynamic tests. The matrices are empirically determined depending on the standard deviation of each measurement outputted by the smartphone. For static tests, Q = 10 −10 * I 4×4 , R = 10 −6 * I 4×4 , and for dynamic tests, Q = 10 −8 * I 4×4 , R = 10 −6 * I 4×4 . Moreover, the sampling frequency of data is 50 Hz. Conventional KF is used as the baseline for comparisons, and the noise covariance matrix of both the measurements and the states are set differently in the static and dynamic tests. The matrices are empirically determined depending on the standard deviation of each measurement outputted by the smartphone. For static tests, Q = 10 −10 * I4×4, R = 10 −6 * I4×4, and for dynamic tests, Q = 10 −8 * I4×4, R = 10 −6 * I4×4. Moreover, the sampling frequency of data is 50 Hz.
Performances on Heading Estimation in the Static Tests
Robust estimation can control the outputs of KF by using a parameter for determining which part of the measurements may cause negative influences. In this paper, the weights of the "negative measurements" are reduced to alleviate their impacts. However, if initial values for KF have not given properly, robust estimation can result slower convergence. Figure 4 presents heading errors of KF and its variants (Robust KF, RKF) with different values of the robust parameter. We find that RKFs produce significant heading errors at the beginning of filtering, but converge to relatively smooth results after a time period. Morever, the smaller the parameter is, the smoother the results are during the subsequent time period. Thus, the value in subsequent experiments is set to 1.5 which means the measurements with absolute errors over 1.5σ (mean squared error) will be handled.
Performances on Heading Estimation in the Static Tests
Robust estimation can control the outputs of KF by using a parameter for determining which part of the measurements may cause negative influences. In this paper, the weights of the "negative measurements" are reduced to alleviate their impacts. However, if initial values for KF have not given properly, robust estimation can result slower convergence. Figure 4 presents heading errors of KF and its variants (Robust KF, RKF) with different values of the robust parameter. We find that RKFs produce significant heading errors at the beginning of filtering, but converge to relatively smooth results after a time period. Morever, the smaller the parameter is, the smoother the results are during the subsequent time period. Thus, the value in subsequent experiments is set to 1.5 which means the measurements with absolute errors over 1.5σ (mean squared error) will be handled. The state discrepancy statistic-based adaptive method determines the adaptive factor depending on the difference between the predicted state and the result estimated using the acceleration and magnetic field data. The adaptive factor can directly change the accuracy and precision of the filtering results. Figure 5 presents mean values and standard deviations of absolute heading errors with respect to different values of adaptive parameters. The chosen of the adaptive parameter should balance the two aspects. Figure 6 presents results produced by KF and RAKFs with two different adaptive parameters. We can see that the results produced by the RAKF with an adaptive parameter of 1.5 converge faster but fluctuate with a larger amplitude, whereas the outputs of the RAKF with an adaptive parameter of 15 are smooth but converge slower. We adopt the average value of all the heading estimations from accelerometer and magnetometer as the true value to obtain the statistical results of estimation errors of KF and RAKF. Table 2 presents the results, and we can see that RAKFs estimate more accurate and steady headings than KF, the mean errors decrease about 8.2% and 17.6% respectively, and the standard deviations decrease about 38.5% and 15.8% respectively. Additionally, the results indicate special characteristics of RAKFs with different adaptive parameters. The state discrepancy statistic-based adaptive method determines the adaptive factor depending on the difference between the predicted state and the result estimated using the acceleration and magnetic field data. The adaptive factor can directly change the accuracy and precision of the filtering results. Figure 5 presents mean values and standard deviations of absolute heading errors with respect to different values of adaptive parameters. The chosen of the adaptive parameter should balance the two aspects. Figure 6 presents results produced by KF and RAKFs with two different adaptive parameters. We can see that the results produced by the RAKF with an adaptive parameter of 1.5 converge faster but fluctuate with a larger amplitude, whereas the outputs of the RAKF with an adaptive parameter of 15 are smooth but converge slower. We adopt the average value of all the heading estimations from accelerometer and magnetometer as the true value to obtain the statistical results of estimation errors of KF and RAKF. Table 2 presents the results, and we can see that RAKFs estimate more accurate and steady headings than KF, the mean errors decrease about 8.2% and 17.6% respectively, and the standard deviations decrease about 38.5% and 15.8% respectively. Additionally, the results indicate special characteristics of RAKFs with different adaptive parameters.
Performances on Heading Estimation in the Dynamic Tests
To further verify the superiority of the proposed RAKF in heading estimation, dynamic tests were conducted at two test sites. Since reference headings in dynamic tests are hard to obtain, estimation errors cannot be presented straightforwardly. Fortunately, location tracking performance of PDR can reflect heading estimation performance to some extent. Therefore, in order to examine heading errors, headings estimated by KF and RAKF are applied respectively in PDR location tracking which is based on conventional EKF.
An EKF-based PDR always consists of three parts, heading estimation, speed estimation, and location tracking. Except heading estimation, the other two parts are explained simply in the following. Detailed implementation refers to [38]. Speed estimation contains two steps, stride detection and step length estimation. For stride detection, peaks of measured total acceleration are counted. For step length estimation, a one-parameter nonlinear model [7,38] is employed:
Performances on Heading Estimation in the Dynamic Tests
To further verify the superiority of the proposed RAKF in heading estimation, dynamic tests were conducted at two test sites. Since reference headings in dynamic tests are hard to obtain, estimation errors cannot be presented straightforwardly. Fortunately, location tracking performance of PDR can reflect heading estimation performance to some extent. Therefore, in order to examine heading errors, headings estimated by KF and RAKF are applied respectively in PDR location tracking which is based on conventional EKF.
An EKF-based PDR always consists of three parts, heading estimation, speed estimation, and location tracking. Except heading estimation, the other two parts are explained simply in the following. Detailed implementation refers to [38]. Speed estimation contains two steps, stride detection and step length estimation. For stride detection, peaks of measured total acceleration are counted. For step length estimation, a one-parameter nonlinear model [7,38] is employed:
Performances on Heading Estimation in the Dynamic Tests
To further verify the superiority of the proposed RAKF in heading estimation, dynamic tests were conducted at two test sites. Since reference headings in dynamic tests are hard to obtain, estimation errors cannot be presented straightforwardly. Fortunately, location tracking performance of PDR can reflect heading estimation performance to some extent. Therefore, in order to examine heading errors, headings estimated by KF and RAKF are applied respectively in PDR location tracking which is based on conventional EKF.
An EKF-based PDR always consists of three parts, heading estimation, speed estimation, and location tracking. Except heading estimation, the other two parts are explained simply in the following. Detailed implementation refers to [38]. Speed estimation contains two steps, stride detection and step length estimation. For stride detection, peaks of measured total acceleration are counted. For step length estimation, a one-parameter nonlinear model [7,38] is employed: where A max (or A min ) is the maximum (or minimum) vertical acceleration in a single step and K is a constant. An assumption is that the leg is a lever of fixed length while the foot is on the ground. Location tracking is based on the primary theory of dead reckoning [7,38], and it is implemented in the frame of EKF, as presented in [38]. Depending on different walking patterns of participants, K in (43) is set separately. The value of K for each participant is presented in Table 1. Other settings for the filtering, such as model noise variances, robust parameter and adaptive parameter are with the same values in which the robust and the adaptive parameters are set as 1.5 and 3, respectively.
•
Results of the tests in the first site The heading estimation results of KF and RAKF are presented in Figure 7. We can see that, the results of RAKF are smoother than those of KF. Noticeably, the most important improvement that is marked by black circles in Figure 7a,b of RAKF for heading estimation is the ability of controlling outliers, as well as the adaptation to sudden heading changes, compared with KF. However, similar improvements cannot be found in Figure 7c which means sudden turns did not cause significant heading errors during participant 3's walking. The results also indicate that different walking characteristics of the three participants bring about great challenges for heading estimation based on RAKF with constant robust and adaptive parameters. where max A (or m in A ) is the maximum (or minimum) vertical acceleration in a single step and K is a constant. An assumption is that the leg is a lever of fixed length while the foot is on the ground. Location tracking is based on the primary theory of dead reckoning [7,38], and it is implemented in the frame of EKF, as presented in [38].
Depending on different walking patterns of participants, K in (43) is set separately. The value of K for each participant is presented in Table 1. Other settings for the filtering, such as model noise variances, robust parameter and adaptive parameter are with the same values in which the robust and the adaptive parameters are set as 1.5 and 3, respectively.
Results of the tests in the first site The heading estimation results of KF and RAKF are presented in Figure 7. We can see that, the results of RAKF are smoother than those of KF. Noticeably, the most important improvement that is marked by black circles in Figure 7a,b of RAKF for heading estimation is the ability of controlling outliers, as well as the adaptation to sudden heading changes, compared with KF. However, similar improvements cannot be found in Figure 7c which means sudden turns did not cause significant heading errors during participant 3's walking. The results also indicate that different walking characteristics of the three participants bring about great challenges for heading estimation based on RAKF with constant robust and adaptive parameters. More intuitive improvements can be observed in location tracking, performances of which are presented in Figure 8. For all three participants, the RAKF results approximate the reference trace better, compared with the results of KF. Location errors in tracking are further shown in Figure 9. All three figures indicate that KF and RAKF both provide low location errors at the beginning of tracking, but KF performs as worse as the walking distance becomes longer. The performances demonstrate that PDR suffers from location error accumulation, but RAKF is able to handle the problem to some extent. Finally, Table 3 gives the statistical results of location errors. Compared with KF, the outputs of RAKF are with higher accuracy and precision. The mean errors of RAKF' outputs decrease 8.8%, More intuitive improvements can be observed in location tracking, performances of which are presented in Figure 8. For all three participants, the RAKF results approximate the reference trace better, compared with the results of KF. Location errors in tracking are further shown in Figure 9. All three figures indicate that KF and RAKF both provide low location errors at the beginning of tracking, but KF performs as worse as the walking distance becomes longer. The performances demonstrate that PDR suffers from location error accumulation, but RAKF is able to handle the problem to some extent. Finally, Table 3 gives the statistical results of location errors. Compared with KF, the outputs of RAKF are with higher accuracy and precision. The mean errors of RAKF' outputs decrease 8.8%, 39.7%, 15.2% respectively, and the standard deviations of location errors decrease 10%, 53.2%,17.5%, respectively, compared with that of KF. Similar heading estimation performances were obtained at the second site. The heading estimation results of KF and RAKF are presented in Figure 10. We still can see that the results of RAKF are smoother than those of KF. The adaptation of RAKF to sudden turns which are marked by black circles in Figure 10a-e is proven again. More intuitive performances can be observed in location tracking, results presented in Figure 11. For all five participants, the RAKF results approximate the reference trace better, compared with the results of KF. The changing of location errors in tracking are further drawn in Figure 12. The performances demonstrate that PDR suffers from location error accumulation, but RAKF is able to handle the problem to some extent. Finally, Figure 13 gives the mean and STD. errors of location tracking for five participants. Compared with KF, the outputs of RAKF are with higher accuracy and precision. The mean errors of RAKF' outputs decrease 14%, 18%, 22%, 26%, and 29%, respectively, and the standard deviations of location errors decrease 9%, 15%, 23%, 7%, and 7.8%, respectively, compared with that of KF. • Results of the tests in the second site Similar heading estimation performances were obtained at the second site. The heading estimation results of KF and RAKF are presented in Figure 10. We still can see that the results of RAKF are smoother than those of KF. The adaptation of RAKF to sudden turns which are marked by black circles in Figure 10a-e is proven again. More intuitive performances can be observed in location tracking, results presented in Figure 11. For all five participants, the RAKF results approximate the reference trace better, compared with the results of KF. The changing of location errors in tracking are further drawn in Figure 12. The performances demonstrate that PDR suffers from location error accumulation, but RAKF is able to handle the problem to some extent. Finally, Figure 13 gives the mean and STD. errors of location tracking for five participants. Compared with KF, the outputs of RAKF are with higher accuracy and precision. The mean errors of RAKF' outputs decrease 14%, 18%, 22%, 26%, and 29%, respectively, and the standard deviations of location errors decrease 9%, 15%, 23%, 7%, and 7.8%, respectively, compared with that of KF. We can find from the location tracking results of both test sites that the accuracy levels are different with the same setting of noise covariance matrices. Precisely, the noise covariance matrices used in Kalman filtering should be set specially for each pedestrian. However, this is impractical for wide implementation. The RAKF can alleviate the negative influence of imprecise setting of noise covariance matrices to some extent, but the best location accuracy performance is hard to obtain with constant robust and adaptive parameters. Taking location tracking of participant 3 in the first test site as an example, although the location accuracy provided by the proposed RAKF with the parameters c = 1.5, and c0 = 3 is higher than that of KF, RAKF with other setting of parameters can achieve even better performances. Figure 14 presents comparions on location tracking trajectory and location error distribution using different algorithms. We can find that RAKF with the parameters c = 1.5, and c0 = 8 performs much better than it with the parameters c = 1.5, and c0 = 3. Therefore, in our opinion, an We can find from the location tracking results of both test sites that the accuracy levels are different with the same setting of noise covariance matrices. Precisely, the noise covariance matrices used in Kalman filtering should be set specially for each pedestrian. However, this is impractical for wide implementation. The RAKF can alleviate the negative influence of imprecise setting of noise covariance matrices to some extent, but the best location accuracy performance is hard to obtain with constant robust and adaptive parameters. Taking location tracking of participant 3 in the first test site as an example, although the location accuracy provided by the proposed RAKF with the parameters c = 1.5, and c0 = 3 is higher than that of KF, RAKF with other setting of parameters can achieve even better performances. Figure 14 presents comparions on location tracking trajectory and location error distribution using different algorithms. We can find that RAKF with the parameters c = 1.5, and c0 = 8 performs much better than it with the parameters c = 1.5, and c0 = 3. Therefore, in our opinion, an We can find from the location tracking results of both test sites that the accuracy levels are different with the same setting of noise covariance matrices. Precisely, the noise covariance matrices used in Kalman filtering should be set specially for each pedestrian. However, this is impractical for wide implementation. The RAKF can alleviate the negative influence of imprecise setting of noise covariance matrices to some extent, but the best location accuracy performance is hard to obtain with constant robust and adaptive parameters. Taking location tracking of participant 3 in the first test site as an example, although the location accuracy provided by the proposed RAKF with the parameters c = 1.5, and c0 = 3 is higher than that of KF, RAKF with other setting of parameters can achieve even better performances. Figure 14 presents comparions on location tracking trajectory and location error distribution using different algorithms. We can find that RAKF with the parameters c = 1.5, and c0 = 8 performs much better than it with the parameters c = 1.5, and c0 = 3. Therefore, in our opinion, an automatic solution for determining the most suitable adaptive and further robust parameters are needed. Generally, the above results demonstrate that KF can provide optimal heading estimations with proper models and the corresponding noise properties. However, for PDR, the statistical properties of pedestrians' movements are changing dynamically and constant noise levels can result in divergent performances both in heading estimation and location tracking. Moreover, measurement outliers also corrupt the performance of PDR. The proposed RAKF can adapt to dynamic conditions, such as sudden turns during pedestrian's walking, and it is robust in that constant parameters are effective for different persons. Moreover, the results also indicate that heading errors are some of the main error sources for location estimation using the PDR approach. It is necessary to obtain accurate headings, whether the PDR method is assisted by external techniques or not.
Finally, we analyzed the computational time of the proposed approach. The KF and RAKF algorithms are implemented using C#, and the corresponding software runs on an 2.7 Hz Intel Core i5 processor. The average runtimes of one iteration of the filtering algorithms are listed in Table 4. The results demonstrate that our proposed RAKF is slightly slower than the KF. Nonetheless, the proposed RAKF improves the accuracy of heading estimation effectively. Generally, the above results demonstrate that KF can provide optimal heading estimations with proper models and the corresponding noise properties. However, for PDR, the statistical properties of pedestrians' movements are changing dynamically and constant noise levels can result in divergent performances both in heading estimation and location tracking. Moreover, measurement outliers also corrupt the performance of PDR. The proposed RAKF can adapt to dynamic conditions, such as sudden turns during pedestrian's walking, and it is robust in that constant parameters are effective for different persons. Moreover, the results also indicate that heading errors are some of the main error sources for location estimation using the PDR approach. It is necessary to obtain accurate headings, whether the PDR method is assisted by external techniques or not.
Finally, we analyzed the computational time of the proposed approach. The KF and RAKF algorithms are implemented using C#, and the corresponding software runs on an 2.7 Hz Intel Core i5 processor. The average runtimes of one iteration of the filtering algorithms are listed in Table 4. The results demonstrate that our proposed RAKF is slightly slower than the KF. Nonetheless, the proposed RAKF improves the accuracy of heading estimation effectively.
Conclusions and Future Work
In this paper, the outputs of smart phone-embedded MEMS sensors, such as accelerometers, magnetometers, and gyroscopes are fused using RAKF for pedestrian heading estimation. To alleviate the negative influence of measurement outliers, an M-estimator-based model is applied for identifying and controlling them. Moreover, a state discrepancy statistic-based adaptive factor is used to reduce the effect of state model disturbances. Experiments under static and dynamic conditions are conducted to verify the superiority of the application of RAKF over KF. In the static tests, RAKF provides faster convergence speed and better accuracy, compared with KF. In the dynamic tests, the headings produced by RAKF and KF are input into a PDR method, respectively. Location tracking performances reveal that the headings estimated by RAKF lead to more accurate location estimations, especially in the situation of sudden turns during pedestrians' walking. The results also tell us that it is necessary to estimate headings accurately, although there are other data or techniques, such as indoor graphs, WiFi positioning for enhancing the performance of PDR.
For PDR, each pedestrian may have special walking characteristics, which can result in a dedicated noise covariance matrix. Thus, the determination of adaptive parameters seems a cumbersome task which needs an automatic process. In the future, we will focus on studying an adaptive solution with the ability to automatically determine parameters. Moreover, tests about different carrying modes of the smart phone will be carried out, and the associating modifications to the filtering will be made. | 10,301.2 | 2018-06-01T00:00:00.000 | [
"Engineering"
] |
Rapid Extraction of Research Areas from Scientific and Technological Literature
Along with the rapid development of Internet Plus, big data, and other technologies, the construction of smart cities is promoting the transformation and upgrading of mapping geographic information models from traditional information services to intelligent services with spatial sensing. At present, however, most of the knowledge needed to provide intelligent services is implicit in the form of unstructured text in various books and journal papers in related fields, which is difficult to capture, use, analyze, and share. In particular, geographical feature knowledge is one of the types of knowledge that needs to be extracted urgently. To solve this problem, in this paper, we propose a method for the rapid extraction of research areas from scientific and technological literature abstracts. Firstly, with the help of a general naming entity identification tool, we propose a method of rapidly annotating place-name entities in administrative divisions. Then, combining the bidirectional long short-term memory conditional random field (BiLSTM-CRF) model with a place-name database covering five levels of administrative divisions in China, the identification, disambiguation, and relationship extraction of place names in different administrative divisions are realized. On this basis, the extraction of research areas is regarded as a two-classification problem, feature vectors such as frequency and location are constructed for the names of the extracted administrative divisions, and the classification model is constructed with the random forest algorithm to rapidly extract research areas. The experimental results show that the recognition accuracy of place names in administrative areas in this study is 92.61% and the recognition accuracy of research areas is 90.31%. The results are superior to those of similar algorithms; thus, the proposed method can accurately and rapidly extract research areas.
Introduction
After years of hard work, the field of surveying and mapping geographic information has built a multiscale basic geographic information database system with timely updates, which has played an important role in the construction and application of smart cities. (1,2) In recent years, with the gradual development of intelligent city construction, we are required to meet the personalized application needs of users and provide intelligent services with spatial sensing such as the intelligent recommendation of spatial data and the discovery of hotspots to support smart city planning, management, and decision-making research. (3) However, at present, massive data, an explosion of information, and hard-to-find knowledge are phenomena in basic geographic information services, making it difficult to meet the needs of users of geospatial knowledge services and to realize innovation in surveying and mapping science and technology. (4) The main reason for these phenomena is that most of the above-mentioned knowledge exists implicitly in an unstructured form in various books and journal papers in different fields, which makes it difficult to capture, share, and reuse. (5) Therefore, as the foundation of computer understanding of literature, knowledge extraction technology has important research value and broad application prospects. (6) Journal papers are important carriers of the knowledge of different disciplines in various fields, which condense the excellent research ideas, theories, and achievements of scholars. They are the most cutting-edge, authoritative, and easily accessible knowledge resources in various research fields, including extensive professional core knowledge such as research problems, algorithm models, and other types of knowledge. (7) Facing the demand for geographic information services in the construction of smart cities, where more than 80% of all types of information involved in the development of smart cities are related to geospatial locations, the simulation space support of a smart city is the geospatial framework of the digital city and the geographical framework is the core of a city's efficient operation. (8) Therefore, geospatial knowledge is an important part of constructing a geographic information system for smart city construction. If geospatial knowledge can be extracted from the massive amount of scientific and technological literature, it can provide users with knowledge services such as hotspot discovery, location-based spatial data recommendation, and other services through simple statistical analysis, association rule mining, and so forth. (9) According to different needs, the geospatial knowledge in the literature can be divided into sampling and research areas where scientific research activities are located. Most scholars are dedicated to extracting the names contained in the literature or extracting scientific research events. (10,11) However, the naming entity identification technology cannot determine which place names are related to research areas, and the extraction of scientific research events cannot guarantee that all place names related to research areas can be extracted.
This paper mainly focuses on the extraction of research areas from scientific and technological literature abstracts. First, in view of the inaccuracy of the universal naming entity identification tool, a method of rapid name marking is proposed. By combining the bidirectional long short-term memory conditional random field (BiLSTM-CRF) model with a five-level administrative division place-name database, the extraction, disambiguation, and relationship extraction of the administrative division place names in a document abstract are realized. On the basis of place-name entity recognition, research area identification is abstracted as a two-classification problem, and the random forest classification module is introduced. The classification model is trained by rapidly constructing feature vectors such as the frequency and location of the place names. As a result, the extraction of research areas has high accuracy and practicability.
Related Work
The extraction of research areas from literature abstracts mainly refers to the identification of geographic entities that appear in them and determines whether they are the research areas or where the scientific research activities are located. The key extraction technologies mainly include place-name identification and research area extraction.
For the research on text-oriented place-name recognition, the method based on pattern matching has been gradually replaced by supervised machine learning methods, such as the hidden Markov model (HMM) and conditional random field (CRF) models, because of its low recall rate and excessive cost of constructing patterns. (11)(12)(13) In recent years, with the rapid development of artificial neural network technology, many scholars have used CRFnested neural network models, such as IDCNN-CRF, BiLSTM-CRF, and RNN-CRF, to carry out research on the entity recognition of place names. Among them, BiLSTM-CRF is the most popular: BiLSTM can effectively use past and future input features and CRF can help use sentence-level label information. This method can achieve 90% accuracy in certain specific fields. (14)(15)(16) However, owing to the lack of research on place-name extraction from scientific and technological literature, there is a lack of directly usable annotation data. In addition, because of the layer-by-layer abstraction of human cognition and the diversification of expressions, the extracted place names have ambiguities, and the disambiguation rules are generally written by linguists. (17,18) Owing to the limited coverage of the rules, the effectiveness of disambiguation by this method is not ideal. (17) With the increasing growth and improvement of the encyclopedia knowledge base, it has become a valuable knowledge source for disambiguation, providing rich expressions, rapid updates, and extensive coverage of background knowledge, making it a new trend in place-name disambiguation. (19,20) There have been very few studies on research area extraction in the literature. Similar research has mainly focused on the extraction of news events. Early research on the extraction of news events directly used the geographical entities identified in the text as the spatial location of occurrence or directly used the spatial location information attached to the text to assign the location as the place where the news event occurs. Part of the studies considered entity relationships in the extraction process, but they were mainly used in place-name disambiguation, the extraction result was still expressed by a single geographic entity, and the effectiveness of place recognition was unsatisfactory. (20,21) In recent years, many scholars have carried out work on research area extraction from the aspects of dependency syntax analysis, feature construction classification models, and so forth, and have obtained high recognition accuracy. However, the related research corpus is mainly news, Weibo content, and other public opinion data, which does not have good universality. Thus, it is difficult to directly use it with scientific literature data. (22)(23)(24) At present, the main difficulties in identifying geographical entities from scientific research literature are how to rapidly construct annotated data sets and the method of place-name disambiguation. Moreover, there are very few related results on research area extraction. A major challenge is how to construct a classification model based on the semantic characteristics of the literature study area. In addition, it is necessary to combine multiple methods in the research process and incorporate more domain knowledge resources to reduce labor costs and improve research efficiency.
Place-name identification
In addition to the word segmentation characteristics of common Chinese, the hierarchical characteristics of place names and the randomness, diversity, and ambiguity of place names also increase the difficulty in recognizing place-name entities. The multilocation information in the literature adds to these difficulties. The BiLSTM model, which does not rely on dictionaries and features, has strong context memory capabilities. It can solve the problems of unregistered words and ambiguity, while the CRF algorithm can control the address annotation output through a transition probability matrix. Therefore, in this paper, we use the BiLSTM-CRF model to identify the names of administrative divisions in literature abstracts.
Principles of BiLSTM-CRF model
The BiLSTM-CRF model is divided into three layers, as shown in Fig. 1: the presentation BiLSTM, and CRF layers. First, a new training data set is generated by labeling a large number of place names in the document abstract data, and then training is carried out through the Word2vec vector model to form a high-dimensional word vector matrix. The word vector sequence corresponding to each sentence in the training data set is input into the BiLSTM module for feature extraction by looking up the table. Finally, the feature vectors output by the BiLSTM module are sequence-labeled through the CRF module to increase the relevance of text information and improve the accuracy of label prediction.
The identification of place-name entities based on BiLSTM-CRF is a typical sequence labeling problem. The model requires large-scale labeling data support to ensure the accuracy of recognition. However, at present, there is a lack of large-scale marking data for the identification of place names in scientific and technological literature, and the time and labor required for manual marking are very high. Therefore, we propose a rapid labeling method based on an existing word segmentation tool (HanLP) for place-name entities, as shown in Fig. 2.
The labeling method includes five main steps. Because the number of raw data is large, if the labeling process is carried out directly, it will require a lot of time and labor. Therefore, the first step is to evenly divide the raw data into multiple sub-data sets. For example, 5000 pieces of data are divided into five data sets, each including 1000 pieces of data. Each data set is segmented in order. After the previous data set is segmented, the user-defined dictionary of the word segmentation tool can be optimized to solve the same problem in the next data set, and it will be easier to process the next data set, reducing the time and labor required to label data. The second step is to use the HanLP word segmentation tool to segment each data set, which is a tool based on words. After importing the abstract, the tool marks words such as place names, organization names, and person names with set labels. This tool can identify more place names by optimizing a custom vocabulary. The third step is to extract the words labeled as place names in the second step to obtain a place-name data set. The abstract is manually read and the place names separated by the word segmentation tool are corrected. If there is an undivided place name, it is manually added to the custom dictionary of the word segmentation tool, and this place name can be recognized when the next data set is segmented. The fourth step is to perform the second and third steps in sequence on each divided data set to obtain the manually corrected place-name data set. Because some problems cannot be solved by optimizing a custom dictionary, we use these corrected data sets as training data and import them into the BiLSTM-CRF model based on characters to train the place-name recognition model, so as to solve the other problems in HanLP word segmentation. The fifth step is to accumulate several parts of the data into a data set, write an algorithm to match the corrected place names with the place names in the abstract, and replace the place names in the abstract with the form "/o place name /ns". Each word in the abstract is a single line. When encountering words that start with "/o" and end with "/ns", "B-LOC" is marked after the first word, the next few words are marked with "I-LOC", and all other words are marked with "O".
Place-name disambiguation and relation extraction based on place-name database of five-level administrative divisions
Using the BiLSTM-CRF model, place names in the literature can be extracted accurately, but because of the nature of the Chinese language, in place-name naming, the ambiguity caused by the same place names will reduce the practicality of the extraction results. In addition, the affiliation relationship between place names is a factor that needs to be considered in the construction of a research area about place-name characteristics. Therefore, we propose a method for the disambiguation and relationship extraction of place names that is based on a knowledge graph of administrative divisions. The knowledge graph of administrative divisions is the result of the preliminary work of the project team. The knowledge graph contains the main attributes and affiliations of all place names at the five administrative levels of China: province, city, county, township (town), and village. Relevant knowledge service applications have been developed on the basis of this knowledge graph (http://kmap.ckcest.cn/town/ tosearch). The method of place-name disambiguation and relation extraction is shown in Fig. 3.
The method mainly includes the following six steps.
Step 1: Using the place-name database, accurate, complete, and uniquely matched place names in an abstract are disambiguated. Considering that many place names in an abstract use abbreviated forms, the place names of the place-name database include the full name and the name without the suffix. The matching process first matches the complete place name; if it cannot match the complete place name, it matches the abbreviation, where the abbreviation matching must be unique.
Step 2: The set of place names is divided according to the distance between place names in the abstract. If the distance between place names is less than or equal to 1, then these place names may have an affiliation relationship (distance = 0) or a level relationship (distance = 1). These place names are divided into subsets with affiliation or level relations, and then they are matched in the place-name database by semantic similarity calculation, and the matching names are marked.
Step 3: After the first two steps of disambiguation, there may be more than two ambiguous items in the place-name database, which can be disambiguated according to the distance between them and the names marked in the first two steps. The shorter the distance, the higher the correct rate of disambiguation.
Step 4: If disambiguation cannot be achieved by marking place names, the distance between these ambiguous names can also be calculated in the place-name database, and the place name with the shortest distance can be selected as the correct place name.
Step 5: If a single place name cannot be disambiguated through the above four steps, a final disambiguation is performed by considering the administrative division scale of the place name to be matched, and the place name with the highest administrative division level is selected for disambiguation. This is because the geographical location of higher administrative divisions is more likely to be relevant because of the large population and the developed economy.
Step 6: For the place name obtained after the disambiguation, the name corresponding to the place-name database can be chosen as its standard name, and its relationship in the place-name database is extracted to provide assistance in the next step of calculating the characteristics of the research area.
Random forest
Research area extraction is performed to extract the place of scientific research activity from the place names extracted from the literature abstract. An abstract contains at least one research area. In this paper, the extraction of the research area is regarded as a two-category problem, that is, a research area is divided into two cases: yes or no. At present, there are many classification models, such as naive Bayes, support vector machine, random forest, and classification and regression tree. Among them, the random forest algorithm is easy to implement and has high accuracy. Therefore, we use the random forest model to classify the research area. The classification principle is shown in Fig. 4.
The random forest is a classifier that contains multiple decision trees. It uses n decision trees for classification and a simple voting method to obtain the final classification results, thereby improving the accuracy of classification. In other words, for classification data with an unbalanced distribution, it can also balance the errors generated. In the random forest dichotomy algorithm, the input parameter is the word feature vector. For the research area extraction task, this vector refers to the feature set of each place name in the abstract, including frequency and location characteristics, and other characteristics. Under the premise that the purpose of classification is not clear, the place names in the document abstract can be used to construct feature vectors from multiple dimensions, such as similarity, word frequency, location, distance, and other features. Generally, the more feature dimensions, the higher the classification accuracy, although the time cost also increases. The main task in this paper is to Fig. 4. Schematic diagram of research area extraction based on random forest model. rapidly extract the research area, which requires both accuracy and efficiency. Therefore, three important characteristics, place name frequency, whether the place name is in the title, and the place name position, are mainly selected for rapid classification.
Classification feature construction
(1) Frequency characteristics of place names If a place name appears multiple times in the abstract and more frequently than other place names, then this place name is probably the research area of this article. If two place names have an inclusive relationship, the frequency of place names with high administrative divisions is added to that of place names with low divisions. We take "Beijing" and "Xicheng District" as examples. Xicheng District is part of Beijing, so the frequency of "Beijing" is added to that of "Xicheng District". Assuming that the abstract contains three place names a, b, and c, and a is part of b, the frequency calculation formulas of the three place names are as follows: where f(a), f(b), and f(c) are the frequencies of place names a, b, and c in the abstract, and p a , p b , and p c represent the numbers of place names a, b, and c, respectively. To verify the rationality of feature settings, 150 data were extracted for an experiment, in which the frequency of all place names was first calculated, and then they were classified according to whether they were research areas, as shown in Fig. 5, where the ordinate is the frequency of place names. It can be seen that in the literature, the frequency of place names in the research area is generally greater than that in the non-research area. Therefore, the frequency of place names can be used as a characteristic value of the research area.
(2) Whether the place name is in the title
If an abstract is a condensed summary of the document, then the title can be considered to be a condensed summary of the abstract. The place name mentioned in the title is likely to be the research area. Because the titles of some scientific research documents directly express research at a certain place, whether the place name appears in the title can also be used as a basis for judging whether the place name is a research area. The existence and title of place name a can be expressed by the following formula: where H(a) represents whether the place name is in the title: a value of 0 means that the title does not contain place name a, and a value of 1 means that the title contains place name a. To verify the rationality of the feature setting, 200 pieces of data were extracted for statistical analysis, and the results are shown in Fig. 6. Place names were randomly sampled and calculated. The probability that they existed in the title and were the research area was 55%, and the probability that they existed in the title but were not the research area was 6%, as shown in the figure. It can be seen that the existence of a place name in the title can be used to distinguish whether the place name is a research area, so it can be set as a characteristic value in research area classification.
(3) Location characteristics of place names In an abstract, the location of the research area also has certain regularity, mostly appearing at the beginning of the abstract and occasionally at the end, so the location characteristics of the place name in the abstract can also be used as a basis for judging whether the place name is the research area. Because the same place name may be distributed throughout the abstract, we only calculate the position where the place name first appears.
The calculation formula for the place-name position is where w(a) represents the location feature value of place name a, Fa is the word number where place name a first appears, and Fn is the total number of words in the abstract.
To verify the rationality of the feature setting, 170 pieces of data were extracted for statistical analysis, and the results are shown in Fig. 7. It can be seen that the feature values of place names in the research area are generally small, that is, they are generally at the front of the abstract, and individual feature values are close to 1, that is, near the end of the abstract. Therefore, the feature of a place name can also be used as a feature value of the research area.
Experimental material
The data used in this research was from the geographic information professional knowledge service platform. At present, the platform has collected more than 10 million articles on surveying and mapping geographic information and related fields (covering the period 1991-2018). We randomly selected 10000 literature abstracts as corpus data. The literature metadata consisted of several fields, such as title, abstract, time, and author. The HanLP tool was used to segment the abstract and select the place names based on the part of the text, and then a second accurate labeling was performed through manual correction. Among the data, 5000 pieces of data were used in an entity recognition experiment on place names. The corpus ratio of the CRF model training set to the test set was about 10:1. The remaining 5000 pieces of data were used in a research area identification experiment and the research area was manually marked. The data volume ratio of the random forest model training set to the test set was about 5:1.
Experimental setup
The configuration of the computer hardware and software and the main parameters of the BiLSTM-CRF and random forest models are shown in Tables 1-3, respectively.
Model evaluation indicators
By comparing the indicators, the effectiveness of the model is evaluated. We set the recall rate (Recall), precision (Precision), and F1 value to evaluate the named entity recognition model. The main evaluation indicators are these three indicators and accuracy. For the twoclass model, it is inaccurate to judge the accuracy of the research area only, so two indicators, the average (macro avg) and the weighted average (weighted avg), are added. The average value index is used when the sample ratio of the research area to the non-research area is about 1:1, and the weighted average value index is used when the ratio is out of balance.
The formula for calculating the recall rate R is where TA is the number of toponyms correctly identified as the study area and FB is the total number of toponyms of the study area. The formula for calculating the accuracy rate P is where FA is the number of samples. The formula for calculating the F1 value is
Place-name recognition results
The batch size of the model represents the amount of data read in the model training network, and the epoch represents the number of iterations. The two parameters mainly affect the time cost and performance of the model training. First, we select 1000 pieces of training data to tune the two parameters. The 1000 pieces of data are independent of the 10000 pieces of data used for modeling mentioned in the next paragraph. When the batch size is 64 and the epoch is 20, the model can obtain the local optimal solution with the lowest time cost. The number of nodes (H) and the learning rate (LR) are the two training parameters that mainly affect the training accuracy of the model. To obtain the best experimental results, we set the batch size to 64 and the epoch to 20, and select 1000 pieces of training data for parametertuning experiments. The training accuracy for four different parameter configurations is shown in Table 4.
It can be seen that when H is 300 and LR is 0.001, the accuracy of place-name recognition of the model is the highest. To verify the superiority of the proposed method, in the next experiment, the optimal model parameters are used, 5000 training data sets are selected, and the CRF, BiLSTM, and LSTM-CRF models are used for comparison with the BiLSTM-CRF model. The performance characteristics of the four models are shown in Table 5.
According to Table 5, the accuracy indicators of the CRF and BiLSTM models are relatively close. The accuracy indicators of the LSTM-CRF model are better than those of the first two single models, but because the LSTM network in the sequence-labeling model can only extract context features, the extraction model does not achieve the best results. Compared with the other three model methods, the BiLSTM-CRF model has higher precision and recall rates, which shows that the method based on the BiLSTM-CRF model is superior to the other methods.
Extraction results for the research area
The random forest model has many parameters, with three parameters, n_estimators, max_depth, and max_features, having an important impact on the accuracy of the model. n_estimators represents the maximum number of iterations of the learner. Generally, if the value is very small, underfitting may occur. If it is very large, the cost will increase and the performance will not significantly increase. By selecting 1000 training data for many experiments, we find that setting this value to 15 gives the best performance. Max_depth and max_features respectively represent the maximum number of features considered when constructing the optimal model of the decision tree and the maximum depth of the decision tree. On this basis, only three features are selected in the next experiment. Therefore, the maximum value of both parameters is selected without limitation. Max_depth and max_features are respectively set to None and Auto. To verify the superiority of the proposed method, a training set of 5000 data is selected under the above-mentioned optimal model parameter configuration, with the naive Bayes model, K-proximity method model, decision tree model, and SVC used for comparison with the random forest model. The results for the five models are given in Table 6.
The random forest algorithm has the best classification performance. Among the five algorithms, the naive Bayes algorithm is the simplest. It is generally used in text classification, but it does not perform well for the research area/non-research area dichotomy problem in this article. The algorithm of the K-nearest method of the SVC model has better classification performance than the naive Bayes algorithm and also requires fewer samples than the SVC model to achieve the same accuracy. For a given sample size, the K-proximity method gives superior results to the SVC model. The decision tree model achieves good results for large data sources in a relatively short time. The size of the training set in this experiment is 5000, so the accuracy of the algorithm is higher than those of the SVC and K-proximity methods. However, because a large amount of training data is prone to noise, the decision tree is prone to using noisy data as the separation standard, which often leads to overfitting. The random forest algorithm uses the voting mechanism of multiple decision trees to reduce the overfitting problem of the decision tree, and the classification performance result is better than that of the decision tree model.
Conclusion
Aiming to solve the problem of the overflow of information and the lack of knowledge faced by intelligent geographic services with spatial sensing, we propose a method of extracting knowledge on the location of research areas from scientific and technological literature. Placename recognition is an important basic task in extracting research areas. Therefore, we carried out the first ever recognition of place names using the BiLSTM-CRF model. The method of NER combined with manual correction can ensure the accuracy of place-name recognition and greatly reduce labor costs. With the help of a five-level place-name knowledge map to disambiguate the recognized place names and extract relations, we can further improve the practicability of place-name recognition. On this basis, we construct the characteristics of the frequency and location of place names in the research area using the random forest classification algorithm, which rapidly and accurately extracts the study area of the literature abstract, and the data with greater accuracy is better than similar algorithms. Although the place names of the research areas extracted in this paper are those of administrative districts, there are also natural geographical entities such as water systems and mountain ranges in the scientific and technological literature. Therefore, in the next step of this research, a large amount of labeling data needs to be added with the help of a larger geographical knowledge atlas to realize the recognition, disambiguation, and relation extraction of place names. In addition, it is necessary to construct more comprehensive and easy-to-implement classification features to further improve the accuracy of research area identification. | 7,102.2 | 2020-12-29T00:00:00.000 | [
"Computer Science"
] |
Eugenol triggers apoptosis in breast cancer cells through E2F1/survivin down-regulation
Background Breast cancer is a major health problem that threatens the lives of millions of women worldwide each year. Most of the chemotherapeutic agents that are currently used to treat this complex disease are highly toxic with long-term side effects. Therefore, novel generation of anti-cancer drugs with higher efficiency and specificity are urgently needed. Methods Breast cancer cell lines were treated with eugenol and cytotoxicity was measured using the WST-1 reagent, while propidium iodide/annexinV associated with flow cytometry was utilized in order to determine the induced cell death pathway. The effect of eugenol on apoptotic and pro-carcinogenic proteins, both in vitro and in tumor xenografts was assessed by immunoblotting. While RT-PCR was used to determine eugenol effect on the E2F1 and survivin mRNA levels. In addition, we tested the effect of eugenol on cell proliferation using the real-time cell electronic sensing system. Results Eugenol at low dose (2 μM) has specific toxicity against different breast cancer cells. This killing effect was mediated mainly through inducing the internal apoptotic pathway and strong down-regulation of E2F1 and its downstream antiapoptosis target survivin, independently of the status of p53 and ERα. Eugenol inhibited also several other breast cancer related oncogenes, such as NF-κB and cyclin D1. Moreover, eugenol up-regulated the versatile cyclin-dependent kinase inhibitor p21WAF1 protein, and inhibited the proliferation of breast cancer cells in a p53-independent manner. Importantly, these anti-proliferative and pro-apoptotic effects were also observed in vivo in xenografted human breast tumors. Conclusion Eugenol exhibits anti-breast cancer properties both in vitro and in vivo, indicating that it could be used to consolidate the adjuvant treatment of breast cancer through targeting the E2F1/survivin pathway, especially for the less responsive triple-negative subtype of the disease.
Background
Breast cancer remains a worldwide public health concern and a major cause of morbidity and mortality among females [1]. Treatment of breast cancer includes, tumor resection, radiation, endocrine therapy, cytotoxic chemotherapy and antibody-based therapy [2]. However, resistance to these forms of therapies and tumor recurrence are very frequent. Furthermore, there is relative lack of effective therapies for advanced-stage and some forms of the disease such as triple negative breast cancer (TNBC). Recently, PARP inhibitors showed promising results against tumors with mutated BRCA1 and TNBC [3,4]. Therefore, scientists keep seeking for new agents with higher efficiency and less side effects. Of 121 prescription drugs in use for cancer treatment, 90 are derived from plant species and 74% of these drugs were discovered by investigating a folklore claim [5,6]. Indeed, several natural products and dietary constitutes exhibit anti-cancer properties without considerable adverse effects [7,8]. Therefore, the abundance of flavonoids and related polyphenols in the plant kingdom makes it possible that several hitherto uncharacterized agents with chemopreventive or chemotherapeutic effects are still to be identified. Several of these products such as curcumin, green tea, soy and red clover are currently in clinical trials for the treatment of various forms of cancer [9].
Eugenol (4-allyl (−2-mthoxyphenol)), a phenolic natural compound available in honey and in the essential oils of different spices such as Syzgium aromaticum (clove), Pimenta racemosa (bay leaves), and Cinnamomum verum (cinnamon leaf ), has been exploited for various medicinal applications. It serves as a weak anaesthetic and has been used by dentists as a pain reliever and cavity filling cement ("clove oil"). In Asian countries, eugenol has been used as antiseptic, analgesic and antibacterial agent [10]. In addition, eugenol has antiviral [11], antioxidant [12] and anti-inflamatory functions. Furthermore, while it has been proved not to be carcinogenic neither mutagenic [13], eugenol has several anticancer properties. Indeed, eugenol has antiproliferative effects in diverse cancer cell lines as well as in B16 melanoma xenograft model [14][15][16]. Eugenol induced apoptosis in various cancer cells, including mast cells [17], melanoma cells [15] and HL-60 leukemia cells [18]. Moreover, eugenol induced apoptosis and inhibited invasion and angiogenesis in a rat model of gastric carcinogenesis induced by MNNG [19]. Interestingly, Eugenol is listed by the Food and Drug Administration (FDA) as "Generally Regarded as Safe" when consumed orally, in unburned form.
In the present paper we present clear evidence that eugenol has potent anti-breast cancer properties both in vitro and in vivo with strong inhibitory effect on E2F1 and survivin.
Ethics statement
Animal experiments were approved by the KFSH & RC institutional Animal Care and Use Committee (ACUC) and were conducted according to relevant national and international guidelines. Animals suffered only minimal pain due to needle injection and certain degree of distress related to the growth/burden of the tumor. Euthanasia was performed using CO2 chamber.
Cytotoxicity assay
Cells were seeded into 96-well plates at 0.5-1.10 4 /well and incubated overnight. The medium was replaced with fresh one containing the desired concentrations of eugenol. After 20 hrs, 10 μl of the WST-1 reagent (Roche Diagnostics, Mannheim, Germany) was added to each well and the plates were incubated for 4 hrs at 37°C. The amount of formazan was quantified using ELISA reader at 450 nm of absorbance.
Cell proliferation analysis
Complete medium (100 μl) containing 2-4 x 10 3 cells was loaded in each well of the 96-well microtiter Eplates with integrated microelectronic sensor arrays at the bottom of each well. The plate was incubated for at least 30 min in a humidified, 37°C, 5% CO2 incubator, and then was inserted into the Real-Time Cell Electronic Sensing System (RT-CES system, xCELLigence system from Roche Applied Science, originally invented by the US company ACEA Biosciences Inc., San Diego, CA). This allows for label-free and dynamic monitoring of cell proliferation. Cells were monitored for 90 hrs. The electronic readout, cell-sensor impedance is displayed as arbitrary units called cell index, which is defined as Rn-Rb/Rb, with Rn = cell-electrode impedance of the well with the cells and Rb = the background impedance of the well with the media alone.
Visualization of the second antibody was performed using the superSignal West Pico Chemiluminescent substrate according to the manufacturer's recommendations (THERMO Scientific, Rockford, IL).
Quantification of protein and RNA expression levels
The expression levels of RNAs and proteins were measured using the densitometer (BIO-RAD GS-800 Calibrated Densitometer, USA). Films were scanned and protein signal intensity of each band was determined. Next, dividing the obtained value of each band by the values of the corresponding internal control allowed the correction of the loading differences. The fold of induction was determined by dividing the corrected values that corresponded to the treated samples by that of the non-treated one (time 0).
Annexin V/PI and flow cytometry
Cells were treated either with DMSO or eugenol, and then were reincubated in complete media. Detached and adherent cells were harvested 72 hrs later, centrifuged and re-suspended in 1 ml PBS. Cells were then stained by PI and Alexa Fluor 488 annexinV, using Vibrant Apoptosis Assay kit #2 (Molecular probe, Grand Island, NY, USA). Stained cells were analyzed by flow cytometry. The percentage of cells was determined by the FACScalibur apparatus and the Cell Quest Pro software from Becton Dickinson, USA. For each cell line 3 independent experiments were performed.
shRNA transfection
The transfection using E2F1-shRNA and control-shRNA was performed using Lipofectamine (Life technologies, Grand Island, NY, USA) as previously described [20].
Tumor xenografts
Breast cancer xenografts were created in 10 nude mice by subcutaneous injection of the MDA-MB-231 cells (5.10 6 ) into the right leg of each mouse. After the growth of the tumors (about 2 cm 3 ) the animals were randomized into 2 groups to receive intraperitoneal (i.p.) injections of eugenol (100 mg/kg) or the same volume of DMSO each 2 days for 4 weeks. Tumor size was measured with a calliper using the following formula (Length X Width X Height).
Results
Eugenol has cytotoxic effect on estrogen positive and negative breast cancer cells We first investigated the cytotoxic effect of eugenol on different breast cancer cells (MDA-MB-231, MCF7 and T47-D) and the non-tumorigenic MCF 10A cell line using the WST-1 assay. Cells were seeded in triplicates into microtiter plates and treated with increasing concentrations of eugenol for 24 hrs, and then the cytotoxic effect was measured. While MCF 10A cells exhibited high resistance to eugenol, with an LC 50 (the concentration that leads to 50% survival) of 2.4 μM, breast cancer cells showed clear sensitivity ( Figure 1A). The LC 50 were 1.7 μM, 1.5 μM and 0.9 μM for MDA-MB-231, MCF7 and T47-D, respectively ( Figure 1A, Table 1). This indicates that eugenol has differential cytotoxicity against different breast cancer cell lines, but its less toxic against non-neoplastic breast epithelial cells.
Eugenol triggers apoptosis in breast cancer cells through the mitochondrial pathway independently of the estrogen receptor status
Next, we investigated whether eugenol triggers apoptosis in breast cancer cells. To this end, cells were treated with different concentrations of eugenol for 3 days, and then were stained with annexin V/Propidium Iodide (PI), and were sorted by flow cytometry. Figure 1B shows that eugenol triggered essentially apoptosis in both breast cancer cells MCF7 and MDA-MB-231. However, the non-carcinogenic MCF 10A cells exhibited great resistance. Figure 1C shows the proportions of eugenol-induced apoptosis, which was considered as the sum of both early and late apoptosis after deduction of the proportion of spontaneous apoptosis. Interestingly, eugenol effect increased in a dose-dependent manner in the 4 cell lines ( Figure 1C). While the effect was only marginal in response to 1 μM, the proportion of apoptotic cells reached 80% in MCF7 and MDA-MB-231 and 65% in T47-D, while it was only 20% in MCF 10A in response to 2 μM eugenol. At 4 μM, eugenol was toxic for MCF 10A as well, and apoptosis reached 70% in these cells, while it was beyond 80% in the three breast cancer cell lines ( Figure 1C). This indicates that the eugenoldependent cytotoxicity is mediated mainly through the apoptotic cell death pathway, with selective effect on breast cancer cells up to 2 μM. Therefore, this concentration was used for the next experiments.
To confirm the induction of apoptosis by eugenol in breast cancer cells and determine the apoptotic route that eugenol activates, MDA-MB-231 cells were treated with eugenol (2 μM) and were harvested after different time periods (0, 24, 48 and 72 hrs). Whole cell extracts were prepared and were used to evaluate the levels of different pro-and anti-apoptotic proteins using the immunoblotting technique and specific antibodies. GAPDH was used as internal control. First, we assessed the effect of eugenol on the caspase-3 and PARP-1 proteins (two principal markers of apoptosis). Figure 2 shows that eugenol triggered the cleavage of caspase-3 and PARP-1, which led to significant increase in their active forms, confirming the induction of apoptosis by eugenol in breast cancer cells. Next, we assessed the effect of eugenol on the levels of Bax and Bcl-2 and have found that while the level of Bax increased in a timedependent manner, the level of Bcl-2 did not change ( Figure 2). This resulted in a time-dependent increase in the Bax/Bcl-2 ratio reaching a level 4 fold higher after 72 hrs of treatment, suggesting that eugenol triggers apoptosis through the mitochondrial pathway. To confirm this, we assessed the levels of cytochrome C, caspase 9 and its active form in these cells, and showed that while the level of caspase-9 decreased in a timedependent manner reaching a level more than 3 fold lower after 72 hrs of treatment, the level of cleaved caspase-9 and cytochrome C increased 3 fold, and 17 fold, respectively ( Figure 2). Together, these results demonstrate that eugenol triggers apoptosis in breast cancer cells through the internal mitochondrial pathway via Bax increase.
Eugenol is an efficient inhibitor of several cancer promoting genes
To investigate the effect of eugenol on cancer-related genes, MDA-MB-231 and MCF7 cells were either shamtreated (DMSO) or challenged with eugenol (2 μM) for 24 hrs, and then cell lysates were prepared and protein levels were monitored by immunoblotting. Eugenoltreatment had strong effect on the expression of NF-κB, decreasing its level 2 fold and 3 fold in MDA-MB-231 and MCF7, respectively ( Figure 3A). Similar effect was observed on β-catenin, indicating that eugenol could inhibit both major cancer promoting pathways Akt/NF-κB and Wnt/β-catenin. To confirm this, we studied the effect of eugenol on the common downstream effector cyclin D1 [21][22][23]. Indeed, eugenol treatment decreased cyclin D1 level 3 fold in MDA-MB-231 cells and 20 fold in MCF7 cells ( Figure 3A). Interestingly, the strongest eugenol inhibitory effect was observed on E2F1 and survivin, a cancer anti-apoptosis marker [24] in both cell lines ( Figure 3A). Indeed, after 24 hrs of treatment, the E2F1 and survivin proteins became almost undetectable ( Figure 3A). To ascertain the level of action of eugenol on these genes, we investigated the effect on their mRNA levels. To this end, MDA-MB-231 cells were treated with eugenol (2 μM) for 24 hrs and total RNA was purified and amplified using RT-PCR and specific primers. Interestingly, eugenol treatment reduced the expression level of both transcripts ( Figure 3B). This indicates that eugenol inhibits the expression of these 2 genes at the transcriptional or post-transcriptional level. Therefore, eugenol targets several breast cancer-related signaling pathways, leading to strong inhibition of two important breast cancer oncogenes E2F1 and survivin in both luminal as well as basal like breast cancer cell lines.
Eugenol triggers apoptosis through E2F1/survivin downregulation
To elucidate the role of eugenol-related down-regulation of E2F1 and its antiapoptosis target survivin [25] in apoptosis induction in breast cancer cells, we studied the effect of E2F1 specific down-regulation on the cytotoxic effect of eugenol. Therefore, MDA-MB-231 cells were transiently transfected with specific E2F1-shRNA or control-shRNA. Figure 4A shows the effect of E2F1-shRNA on the level of the E2F1 mRNA and protein.
Interestingly, like eugenol, E2F1 down-regulation by specific shRNA reduced also the expression level of the survivin mRNA and protein ( Figure 4A). This shows that E2F1 controls the expression of survivin in these cells. We next treated MDA-MB-231 cells expressing either control-shRNA or E2F1-shRNA with DMSO or eugenol (1 μM) for 48 hrs. Figure 4B shows that 1 μM eugenol had only marginal effect on MDA-MB-231 cells. Interestingly, E2F1 down-regulation doubled the killing effect of eugenol as compared to the effect on the corresponding control cells ( Figure 4B). This suggests that the killing effect of eugenol is mediated through E2F1/survivin down-regulation.
Eugenol inhibits cell proliferation and up-regulates p21 WAF1 in breast cancer cells
Exponentially growing breast cancer cells (MDA-MB-231, MCF7 and T47-D) were seeded in 96-well plates and were either sham-treated with DMSO or challenged with eugenol (2 μM), and then reincubated for 120 hrs. During this time, the real-time cell electronic sensing system was used to monitor cell proliferation. While DMSO-treated cells continued to proliferate, eugenol treatment suppressed cell proliferation in the 3 breast cancer cell lines ( Figure 5A). Next, we evaluated the effect of eugenol on the expression of the versatile cyclin-dependent kinase inhibitor p21 WAF1 in MDA-MB-231 and MCF7. After treatment with eugenol (2 μM) cells were harvested at different periods of time (0-24 hrs) and immunoblotting was utilized for protein level assessment using specific antibodies. Figure 5B shows that eugenol increased the level of p21 WAF1 reaching a level 5 fold higher as compared to the basal level in both cell lines. Therefore, eugenol is a strong inducer of p21 WAF1 expression in a p53-independent manner.
Eugenol inhibits tumor growth of breast tumor xenografts in mice
To study the anti-cancer effect of eugenol in vivo, breast cancer xenografts were created by injecting 5.10 6 MDA-MB-231 cells subcutaneously into nude mice. When tumors reached a reasonable volume (about 2 cm 3 ), eugenol was given i.p. at a dose of 100 mg/kg each 2 days for 4 weeks. Control animals were treated with DMSO only. Interestingly, in the mock-treated animals, the volume of the tumors increased in a time-dependent manner and became 3 fold bigger than the initial ones ( Figure 6A). On the other hand, treatment with eugenol inhibited tumor growth ( Figure 6A). This shows that eugenol inhibits the proliferation of breast cancer cells in vivo as well.
Subsequently, we investigated the effect of eugenol on the expression of various cancer-related genes in tumor xenografts. Figure 6B shows that eugenol down-regulated E2F1 and survivin in tumor xenografts as well. Concomitantly, the levels of NF-κB and cyclin D1 also decreased and Cox-2 became undetectable ( Figure 6B). Interestingly, like in vitro, eugenol up-regulated p21 WAF1 ( Figure 6B). Furthermore, we have investigated the effect of eugenol on the expression of apoptosis-related genes and have shown that eugenol increased the levels of Bax, cleaved PARP-1 and the active form of caspase-9, but decreased the level of the anti-apoptosis protein Bcl-2, suggesting eugenoldependent induction of apoptosis in vivo and confirming the results obtained in vitro ( Figure 6B).
Discussion
In the present study we have shown that eugenol, a natural phenolic compound, exhibits strong anti-breast cancer features. Indeed, we present here clear evidence that eugenol could be considered as a potential therapeutic agent for both ER-negative as well as ER-positive breast tumors for the following reasons: First, eugenol is cytotoxic and triggered apoptosis in great proportion of breast cancer cells, with marginal effect on normal cells in response to 2 μM of eugenol. However, at higher concentration (4 μM), eugenol killed normal cells as well, showing that this molecule may have some toxicity when used as high concentrations.
Eugenol-related apoptosis was mediated through the mitochondrial pathway via Bax increase, and is p53-and ERα-independent since it occurred in p53-and ERαdefective cells, MDA-MB-231 [26]. This effect was mediated through strong down-regulation of E2F1 and its antiapoptosis target survivin [25]. Indeed, specific downregulation of E2F1 strongly reduced the level of survivin and increased the effect of eugenol on breast cancer cells ( Figure 4). Notably, low E2F1 levels were related to favorable breast cancer outcome [27]. On the other hand, E2F1 expression was related with poor survival of lymph node-positive breast cancer patients treated with fluorouracil, doxorubicin and cyclophosphamide [28]. This indicates that high E2F1 levels reduce the response of breast tumors to therapy. Similarly, while survivin expression has been found to confer resistance to chemotherapy and radiation, targeting survivin in experimental models improved survival [29]. Thereby, the fact that eugenol can inhibit both E2F1 and survivin in vitro and in tumor xenografts, indicates that eugenol could be used to consolidate the adjuvant treatment of breast cancer patients, especially the clinically aggressive ER-negative types, whose prognosis is still poor and clinically characterized as more aggressive and less responsive to standard treatments [30,31].
Second, eugenol is a potent inhibitor of cell proliferation, may be through inhibition of E2F1 and great increase in the level of the cyclin-dependent kinase inhibitor p21 WAF1 in vitro and in tumor xenografts. E2F1 is a transcription factor that regulates the expression of several genes involved in G1 to S phase transition [32]. In a previous study it has been shown that eugenol inhibits cell proliferation in melanoma cells through inhibition of E2F1 [15]. p21 induction in p53-defective MDA-MB-231 cells, suggests the ability of eugenol to induce p21 WAF1 through p53-independent mechanism. Overexpression of p21 WAF1 can block both the G1/S and G2/M transitions of the cell cycle [33]. Furthermore, p21 WAF1 is a modulator of apoptosis in a number of systems [34][35][36]. Therefore, the strong eugenol-dependent up-regulation of p21 WAF1 in a p53-independent manner could be of great value for the inhibition of cancer cell proliferation and the induction of cell death in various p53-defective breast tumors, including the triple negative form of the disease where p53 deficiency is observed in up to 44% [37].
Third, eugenol down-regulated several onco-proteins known to be highly expressed in breast cancer cells and tissues, such as NF-κB, β-catenin, cyclin D1, Bcl-2 and survivin. Akt/NF-κB signaling pathway plays a major role in breast carcinogenesis. NF-κB up-regulation is implicated not only in tumor growth and progression, but also in the resistance to chemo-and radiotherapies. Several studies have documented the elevated activity of this protein in breast cancer cells [38,39], which makes it an excellent target for cancer therapy [40,41]. In a recent study, it has been shown that eugenol can inhibit cell proliferation via NF-κB suppression in a rat model of gastric carcinogenesis [42]. The other important breast cancer signaling pathway is the Wnt/β-catenin, which is another transcription factor that has been found highly expressed in various types of cancer, including breast carcinomas [43,44], and is particularly activated in triple negative breast cancer. Therefore, the Wnt/β-catenin signaling pathway constitutes an important potential therapeutic target in the treatment of breast cancer, especially the triple negative form of the disease [45].
The activation of these 2 signaling pathways leads to the up-regulation of cyclin D1, which is a common downstream effector protein. Cyclin D1 is an oncogene that is over-expressed in about 50% of all breast cancer cases [46], and its down-regulation is an important target in breast cancer therapy [47]. Therefore, eugenol-related down-regulation of NF-κB and β-catenin and their common downstream target cyclin D1 could have a great inhibitory effect on breast cancer growth. Importantly, the inhibitory effect of eugenol on these onco-proteins was also observed in vivo in tumor xenografts ( Figure 6).
Conclusions
Eugenol could constitute a potent anti-breast cancer agent with less side effects than the classical chemotherapeutic agents, through targeting the E2F1/survivin oncogenic pathway. Therefore, eugenol warrants further investigations for its potential use as chemotherapeutic agent against ER-negative and also p53-defective tumors, which are still of poor prognosis. | 4,991 | 2013-12-13T00:00:00.000 | [
"Medicine",
"Biology"
] |
The Great Escape: Tunneling out of Microstate Geometries
We compute the quasi-normal frequencies of scalars in asymptotically-flat microstate geometries that have the same charge as a D1-D5-P black hole, but whose long BTZ-like throat ends in a smooth cap. In general the wave equation is not separable, but we find a class of geometries in which the non-separable term is negligible and we can compute the quasi-normal frequencies using WKB methods. We argue that our results are a universal property of all microstate geometries with deeply-capped BTZ throats. These throats generate large redshifts, which lead to exceptionally-low-energy states with extremely long decay times, set by the central charge of the dual CFT to the power of twice the dimension of the operator dual to the mode. While these decay times are extremely long, we also argue that the energy decay is bounded, at large $t$, by $(log~t)^{-2}$ and is comparable with the behavior of ultracompact stars, as one should expect for microstate geometries.
Introduction
One of the primary motivations for the construction of microstate geometries is that they approximate very closely the behavior of black holes without leading to information loss. This happens because these geometries have a smooth cap at very high redshift but do not have a horizon. The craft in constructing and analyzing such geometries lies in how well they approximate the black-hole behavior and this craft is becoming a well-developed science for BPS microstate geometries. In particular, we now have extensive families of BPS microstate geometries that look exactly like a BPS black hole, except that the infinite AdS 2 throat of the black hole is capped at some very large depth [1][2][3][4][5][6][7][8][9][10]. This cap affects an infalling observer less than a Planck time before crossing the would-be event horizon [11,12].
From a holographic perspective, the depth of the throat is one of the most important physical parameters of the solution, because it controls the energy gap of the excitations on top of the supersymmetric CFT ground state dual to the microstate geometry. For the states with the longest throat, this gap matches exactly the one expected from the typical CFT states that count the black hole entropy [13,9,1,11].
Furthermore, if one computes, holographically, the two-point function in the heavy state dual to the microstate geometry [14], this two-point function exhibits the same thermal decay as in the BTZ background, except that the information is not lost but is recovered after a return time of order the inverse of the energy gap. Hence microstate geometries look exactly like black holes on time-scales less than this return time, but after that they do indeed return the information about what was thrown into them and about the smooth cap at the bottom of the throat.
Intuitively, it is natural to expect that the cap region will be the repository of all the microstate structure and thus one should expect infalling matter to be trapped there for a very long time. Starting with [15], there have now been several investigations [16][17][18][19] of trapping of matter in either BPS or non-BPS microstate geometries. Furthermore, it was shown in [18] that there exist modes that decay extremely slowly, and this was confirmed by a matched-asymptoticexpansion calculation of the decay time [19]. From a mathematical perspective, this extremely slow decay of a wave equation in a background was the slowest ever found, and this has created some interest in the mathematical community [20][21][22].
The concern raised by the analysis of [18] was that such long-term trapping would lead to non-linear instabilities. This is because, in General Relativity alone, if matter accumulates in a region for a long period of time, it will tend to form black holes, or black extended objects. Luckily, String Theory affords many other possibilities, and the guiding principle of fuzzballs and microstate geometries is that whenever GR predicts the formation of a black hole, the system should instead become a fuzzball, or, in its more coherent incarnations, it should transition into a new microstate geometry. This is because all the microstate geometries belong to a very large moduli space of solutions, whose dimension is of order the central charge of the CFT dual to the black hole (6n 1 n 5 for the D1-D5-P black hole [23,24]). Hence, an excitation of these geometries has ∼ n 1 n 5 available directions into which it can spread, and will generically explore this very large phase space [25,12] rather than form a black hole.
There is another problem with the decay-time analysis of [18], which can be best seen when analyzing the slowly-decaying modes using a matched asymptotic expansion [19]. From a mathematical perspective, the allowed wave-numbers are unbounded. However, one cannot trust the supergravity approximation if the wavelength of these oscillations is smaller than the Planck scale. This puts an upper bound on the wave numbers, and the practical effect of this upper bound is to eliminate the slowest decaying modes. This in turn indicates that from a physics perspective, the non-linear instability found in [18] is an artifact of considering sub-Planckian modes 1 .
The purpose of this paper is to calculate the long-term trapping in, and tunneling from, a family of asymptotically-flat microstate geometries that have the same charges and the same long AdS 2 throat as a three-charge black hole with a large event horizon 2 .
There is a standard approach to this class of problems in which one uses matched asymptotic expansions [16][17][18][19]. Essentially, one constructs the modes in an asymptotically AdS space-time, and then matches the AdS asymptotics to the Bessel function asymptotics that are the staple of flat-space scattering problems. This usually requires approximations in which the frequency is taken to be small or one considers the near-decoupling limit of the background, Q P Q 1 , Q 5 . To date, this method has been applied successfully to computing the quasi-normal frequencies of atypical microstate geometries that do not have long black-hole-like throats [26][27][28]30].
The challenge in analyzing the known asymptotically-flat microstate geometries with a deeply-capped BTZ throat is that they usually depend non-trivially on several variables and the wave equation is not separable. However, we have obtained a family of such microstate geometries in which the scalar wave equation is "almost separable": If one tries to make a separation of variables, one finds that it almost works except for one term. We then show that this term is parametrically suppressed in the long-throat approximation, and even more highly suppressed at low energies. This means that the tunneling process is accurately governed by the separable pieces of the scalar wave equation, and all the interesting physics is encoded in the radial wave equation.
Rather than using matched asymptotic expansions, we use a technique similar to that of [15,14]: we reduce the radial equation to an equivalent Schrödinger equation in which the tunneling from the cap to the asymptotic region becomes a simple computation of a barrier penetration. We then use WKB methods to compute the quasi-normal frequencies of the modes of this system. This approach leads to a simple, more intuitive picture of the tunneling process, making the universality of our results for generic deep microstate geometries all the more apparent.
At low energy, we show that the quasi-normal modes are mostly supported in the highlyredshifted AdS 3 cap. Thus, as in [16][17][18][19], the real parts of the frequencies correspond to the bound-state frequencies of the cap and their imaginary parts depend on the redshift between the cap and flat space. We find that the decay times in our solutions are parametrically slower than the decay times found in [18,19], and this comes from the fact that our solutions have a very long throat.
We also highlight another regime of energy that is absent in the geometries studied in [18,19]. 1 Here we mean sub-Planckian wavelengths, which correspond to super-Planckian masses. 2 The solutions considered in [18,19] have an angular momentum that exceeds the cosmic censorship bound [26][27][28][29], and hence lack the long BTZ-like throat characteristic of typical black-hole microstate geometries. At intermediate energy, the modes start to explore the BTZ throat. The rigidity of the AdS 2 region makes them leak very slowly into flat space. The end result is that the spectrum of quasi-normal modes is modulated by the BTZ response function in this regime of energy.
We also show that the modes that can be described in supergravity, even if they decay very slowly, give a decay that is consistent with the trapping created by extremely compact neutron stars. This result is in perfect accord with the physics that one would hope to see emerge from the microstate geometry programme. By smoothly capping-off geometries just above the horizon scale of a black hole, one creates an extremely compact object whose states can still be seen and measured by distant observers. It is therefore to be expected that the trapping of matter by microstate geometries should parallel the trapping of matter by extremely compact, "normal" objects.
If one leaves physical considerations aside, and considers arbitrary sub-Planckian wavelengths, one finds that our geometry also has modes that decay slowly enough to give rise to non-linear instabilities. As was shown in [18,19], such modes are localized in the neighborhood of the evanescent ergosurface. This was anticipated from the long-term trapping of exceptionally low-energy geodesic near such surfaces [18]. We also find such modes in the superstratum geometries, but they are necessarily sub-Planckian and hence of no physical relevance.
Our analysis also reveals the existence of modes that are trapped forever. Some of these modes have a very simple physical explanation: There are trapped modes with zero frequency, which correspond to BPS deformations of the supersymmetric zero-energy superstratum into another zero-energy superstratum that is close in phase space. In addition, there is an infinite family of trapped modes with negative momentum. These modes have positive energy but carry a (momentum) charge that is opposite to that of the background, so they will always be attracted to the bottom of the solution and will never be able to escape 3 . The focus of this paper is on the quasi-normal modes but we will make some remarks about the non-trivial eternally-trapped modes in Section 7.
In Section 2, we present the general features of the six-dimensional microstate geometries whose quasinormal modes we compute, highlighting their key features and explaining how our WKB analysis works. In Section 3, we construct the asymptotically-flat (2, 1, n) supercharged superstrata, we discuss the separability of the minimally-coupled scalar-wave equation and the different limits of the potential that appears in the radial equation. In Section 4 we derive the spectrum of quasi-normal modes in two energy regimes using the WKB method and discuss the corresponding decay rates. In Section 5, we review the analysis of [19] for a family of atypical microstate geometries using similar convention as ours and compare the results. In Section 6, we discuss the decay timescales for the energy of the quasi-normal modes to leak to flat space and the potential instabilities. We make some final remarks in Section 7.
(a) A superstratum spacetime is asymptotically flat at infinity, then has an AdS3×S 3 region, then an AdS2×S 1 ×S 3 throat and then the cap (which a redshifted global AdS3×S 3 ). Together the upper AdS3 and the AdS2×S 1 regions form the BTZ throat of the geometry.
(b) A GMS spacetime has flat space at infinity glued to a global AdS3×S 3 in the infrared. Unlike superstrata, the redshift between the cap and flat space is not controlled by the length of a BTZ throat but by the parameter, k, of a Z k orbifold. To illustrate the presence of this orbifold we represent the cap as a cone.
Tunneling and quasi-normal modes in microstate geometries
In this paper we compute the quasinormal modes for two classes of microstate geometries: Superstrata [33,3,4,8,9], and the Giusto-Mathur-Saxena (GMS), or the closely-related Giusto-Lunin-Mathur-Turton (GLMT) solutions [26][27][28][29]. In this section, we give an overview of the geometry and the relevant class of Schrödinger problems and describe how to use WKB methods to analyze the decay of the states that are trapped deep within the microstate geometry.
At infinity the geometries are asymptotic to R 1,4 ×S 1 . Asymptotically-flat superstrata with a deeply-capped BTZ throat [4,8] have four regions: (i) The cap, (ii) The AdS 2 throat, (iii) The AdS 3 region, and (iv) The flat region near infinity. A schematic picture of this structure is shown in Figure 1(a). The AdS 3 region and the AdS 2 throat together form a region of the geometry that is closely approximated by the BTZ metric. In this paper, the superstrata will always have a long AdS 2 throat, but the size of AdS 3 region will depend upon the charges because the geometry may well transition rapidly from the AdS 2 throat to the asymptotically-flat region. The geometry of the cap at the very bottom of the solution is closely approximated by a global AdS 3 ×S 3 metric.
The GMS solutions have two regions: (i) The cap and (ii) the flat region near infinity. A schematic picture of this structure is shown in Figure 1(b). The cap geometry is an S 3 fibration over a redshifted AdS 3 geometry. Since these solutions do not have a BTZ throat, the redshift is much smaller than for superstrata and yields to a larger energy gap.
The parameters and charges of the solutions
Superstrata are 1 8 -BPS solutions of type IIB supergravity on T 4 or K3 that have the same charges and mass as supersymmetric D1-D5-P black holes. From the perspective of the six-dimensional transverse space, they carry three charges, Q 1 , Q 5 and Q P , and two angular momenta, J L and J R .
We will consider a specific family of superstrata, denoted as the (2, 1, n) supercharged superstrata. They have five independent parameters, which we will denote by Q 5 , b, a, R y and an integer, n. The parameter, R y , is the radius of the common D1-D5 circle at infinity which we oftentimes refer to as the "y-circle," while Q 5 is the charge of the D5 branes. The remaining parameters, b, a and n, control the other supergravity charges via: (2.1) The first of these relations is required by smoothness.
The quantized charges, n 1 , n 5 , n P , j L and j R are related to the supergravity charges by: where V 4 is the volume of the internal manifold (T 4 or K3) of the Type IIB compactification to six dimensions and N is: where 10 is the ten-dimensional Planck length and (2π) 7 g 2 s α 4 = 16πG 10 ≡ (2π) 7 8 10 . The quantity, Vol(T 4 ) ≡ (2π) −4 V 4 , is sometimes introduced [34] as a "normalized volume" that is equal to 1 when the radii of the circles in the T 4 are equal to one in Planck units.
One should note that, unlike the superstrata with a long BTZ throat constructed in [6], the right-moving angular momentum of our solutions is quite large and remains finite as one makes the throat longer and longer by decreasing the parameter a.
In contrast, the left-moving angular momentum, j L , becomes arbitrarily small in this limit. Hence, the microstate geometry we consider corresponds to a BMPV black hole with a finite five-dimensional angular momentum (similar to the microstate geometries constructed in [1,5]) 4 .
While one may wish to consider superstrata with lower values of j R , this value of the charge was the accidental side-effect of a choosing a "nearly separable" superstratum.
It is also useful to note that which means that for b a, n controls the momentum in units of the central charge of the system. Generically we will take b a because this produces superstrata with deeply-capped BTZ throats that are likely to trap particles for the longest period of time.
The GMS solutions are also three-charge 1 8 -BPS solutions of type IIB supergravity on T 4 or K3. They also carry three charges, Q 1 , Q 5 and Q P , and two angular momenta, J L and J R . Unlike superstrata, the angular momenta are much larger than those of black holes, exceeding the black hole cosmic censorship bound. Therefore, these solutions cannot have a deeply-capped BTZ throat. They are determined by five parameters, Q 5 , R y , a, an integer-moded spectral-flow parameter n and an orbifold parameter k. They are related to the supergravity charges via The supergravity charges are in turn related to the quantized charges via the same compactification relations (2.2) with (2.3).
The Schrödinger problem
As explained in the Introduction, the fact that GMS solutions are composed only of two regions allows one to use easily matched asymptotic expansions to study their quasi-normal modes in certain limits. We will review this technique in detail in Section 5. To cope with the more complex structure of superstrata, we will use the WKB approximation to derive the spectrum of quasi-normal modes. Since this technique has not been so widely used in analyzing supergravity solutions, we will review the key elements.
The family of superstrata we use are very similar to those analyzed in [14], except that the geometries considered here have an asymptotically flat region. The price of adding this region is that there is no longer a simple recasting of the metric as an S 3 fibration over a three-dimensional space. Moreover, the scalar wave equation is no longer separable. However, the geometry still behaves as depicted in Figure 1(a) and, as we will show in Section 3, while no longer separable, the failure of separability is extremely small for solutions with a deeply-capped BTZ throat, and hence we can still use a separated wave equation as an excellent approximation.
Just as in [14], we find that the radial equation for the scalar modes can be recast in an equivalent Schrödinger form: for some potential, V (x). The shape of the potential depends on several parameters, but, for the class of quasi-normal modes we wish to consider, the potential takes the form shown in Figure 2.
There are four zones, delimited by the three classical turning points, x i , defined by V (x i ) = 0. Zone I is simply the centrifugal barrier at the center of the cap. This barrier depends on the angular momenta of the mode and can be lowered to zero by considering S-waves. Zone II is induced by the smooth cap geometry, and the lowest-energy states of the system are localized in this potential well. Zone III corresponds to the barrier that the waves trapped in Zone II need to traverse in order to escape. It reflects the effects of the throat regions on the wave. Zone IV corresponds to the asymptotically flat region, and the potential decays without a lower bound because of the usual energy dilution in flat R 1,4 .
x 0 zone I zone II zone III zone IV Figure 2: Typical form of the potential V (x). It has four zones, corresponding to the centrifugal barrier, the cap, the BTZ throat and the asymptotically-flat region. Connecting these zones, there are three classical turning points, The fact that the potential drops arbitrarily low as x becomes large, means that all the "bound states" in the cap are actually quasi-normal modes that will eventually escape to infinity by tunneling through the barrier in Zone III. Our goal is to compute the quasi-normal excitations and estimate this barrier-penetration rate.
The WKB analysis
The Schrödinger problem described above can be easily solved using a standard WKB analysis: In each zone one uses a wave-function that is a superposition of functions of the form: When V (x) is negative these functions oscillate and when V (x) is positive they grow or decay exponentially. We consider modes that have a centrifugal barrier in Zone I and that therefore decay as x → −∞. Furthermore, the decay of our quasi-normal modes is captured by requiring outgoing modes as x → +∞. Thus we will need to correlate the boundary conditions at +∞ with the sign of the frequency.
The matching at the classical turning points is then done using Airy functions, as in standard WKB problems. The only issue that can arise with this Airy-function matching is when two turning points (for example x 1 and x 2 ) are too close to each other; one then has to do a quadratic approximation through x 1 and x 2 , using parabolic cylinder functions (see, for example, [35]). Fortunately, for our problem, all the classical turning points are widely separated and we can apply the standard procedure.
We therefore take (2.9) Around each turning point, x ∼ x i , the wave function behaves as: (2.10) Matching the asymptotics of the Airy functions to the WKB functions on both sides of each turning point, x i , one can relate D N ± to D I ± . This gives the connection formulae: where Θ ≡ The mode is required to decay in the centrifugal barrier and so one must take D I + = 0. For a quasi-normal mode, one must have an outgoing wave for large x. If we assume that the wave depends on time as e iωt , then equation (2.9) implies that the wave function at large x behaves as: where f (r) is a monotonically increasing function of r. This mode will be outgoing if D IV + = 0 for Re(ω) > 0 and D IV − = 0 for Re(ω) < 0. These two boundary conditions lead to the following constraint on the matrix elements in the connection formula: (2.14) If we take the tunnelling barrier to be infinite, e −2 T → 0, we find the standard WKB condition that leads to a tower of (real) bound-state spectrum labelled by a mode number N : The quantity, Θ, defined in (2.12), depends upon ω, and one uses (2.15) to determine the normal modes, ω (0) N , of the bound states. Since our superstrata have large but finite barriers, 0 < e −2 T 1, we can use perturbation theory to find the leading-order corrections to the spectrum. First, one expands Θ around ω One also has: ∂Θ ∂ω ω=ω (0) The contribution from differentiating the endpoints of the integral with respect to ω vanishes by the fundamental theorem of integral calculus because V vanishes at the end points.
Substituting (2.16) into (2.14) leads to the leading-order correction: There are several things to note. First, this leading-order correction is purely imaginary. There will also be shifts in the fundamental frequencies, ω 0 , but these arise at the next order in perturbations. Also note that for just about any physical system one has sign (Re(ω)) ∂Θ ∂ω > 0 . (2.19) This is because the fundamental frequencies of the system are given by solving (2.15) for ω as a function of N . The positivity condition (2.19) simply reflects the fact that the absolute values of the frequencies increase with the mode number. As a result of (2.19), we see that the sign in (2.18) is precisely the correct one so that e iωt becomes a decaying mode, independent of the sign of ω.
Taking this one step further, one can obtain a simple intuitive understanding of (2.15). Recall that for a wave motion of frequency ω and wave number, k, the group velocity is given by ∂ω ∂k . For a particle in a box of length, L, the wave number, k, is given by k = 2N π L . Thus, from (2.15) it follows that the group velocity is given by L 2 ( ∂Θ ∂ω ) −1 and so the time for a round trip across the box (distance 2L) is 4( ∂Θ ∂ω ). Therefore the factor in (2.18) represents the impact frequency of the bound-state wave against the potential barrier. The factor e −2T in (2.18) represents the transition probability per impact, and hence the complete expression represents the inverse time-scale for the decay.
Finally, recall that WKB methods work well if the potential is not "too flat" near its turning points, and provided that the turning points are widely separated. In particular, this means that the "plateau" between x 1 and x 2 should be suitably high and wide. This guarantees that e −2T will also be small and hence our perturbative computation of δω will also be reliable. As we will see, these conditions are satisfied by the quasi-normal modes of superstrata with a deeply-capped BTZ throat, as well as by the quasi-normal modes of GMS geometries in the near-decoupling limit that do not have a long capped BTZ throat [26][27][28][29].
3 The radial potential for asymptotically-flat (2, 1, n) superstrata Our ultimate goal is to compute the decay rate of deeply-bound states in asymptotically-flat superstrata. One of the simplifying features of asymptotically-AdS superstrata is that the functions entering in their construction depend only on two variables [33,3], and there are even simple families in which the massless scalar wave equation is separable [36,37]. This was used to great effect in the study of bound states and Green functions in [38,13,14]. However, in more general superstrata, such as those constructed in [10], the functions that enter in the solution depend upon three or more variables and the wave equation fails to be separable. The situation becomes even more complicated for asymptotically-flat superstrata [4], where even the flat-space analogues of the simplest asymptotically-AdS superstrata typically depend explicitly at least three, or more, variables and separability also fails.
The key observation, that makes our entire analysis possible, is that there exist certain sufficiently simple asymptotically-flat superstrata in which the decay of perturbations can be computed. First, if one uses the simplest "supercharged superstrata" [8,9], the geometry once again only depends on two variables, even for asymptotically-flat superstrata. Moreover, there are families of such superstrata that have a "nearly separable" massless scalar wave equation. This means that the wave-equation almost completely separates except for one term that spoils the separation. Furthermore, we can show that this term can be made parametrically insignificant when the superstrata have a long capped BTZ throat.
We will therefore study the decay of bound states in these simple "supercharged superstrata." Specifically we will focus on what are known as the (2, 1, n) supercharged superstrata, whose asymptotically-AdS forms were constructed in [8,9]. It is relatively straightforward to generalize these results to obtain asymptotically-flat (2, 1, n) superstrata and we will give the solution in Section 3.1.2.
The goal of this section is to reduce the problem of solving the massless wave equation in asymptotically-flat (2, 1, n) superstrata to solving a radial equation. This equation comes with a complicated potential function and we will examine, in considerable detail, its structure and elucidate the physics that emerges in various limits. While the computational details depend upon the explicit form of this superstratum, we expect the physics that we extract to be a universal property of all superstrata with a deeply-capped BTZ throat.
The asymptotically-AdS (2, 1, n) superstrata are dual to coherent states of the D1-D5 CFT peaked around the Ramond-sector state: In this expression, | ++ N 1 is the maximally spinning RR-ground state, and |2, 1, n, q = 1 is the spectral flow of the "supercharged" NS state: The operators are all acting on the right-moving sector of the CFT. Note that we have followed the conventions of [9] in which we have re-labelled the states of [8] by sending n → n + 1.
The numbers, N 1 and N 2 , of these states must satisfy the constraint where n 1 and n 5 are the numbers of D1 and D5 branes.
More details of the holographic dictionary can be found in [8]. Here we will simply note the RR states dual to the superstrata have quantum numbers: In the supergravity dual, the number of copies of each fundamental state, N 1 and N 2 , are reflected in two Fourier coefficients, which will be denoted by a and b. The supergravity charges, Q 1 and Q 5 are proportional to n 1 and n 5 and the numbers, N 1 and N 2 , are proportional to a 2 and 1 4 b 2 . The supergravity analogue of (3.3) becomes where R y is the radius of the common D1-D5 direction. In supergravity this constraint emerges from requiring that the microstate geometry be smooth. The relationship between supergravity and quantized charges is given by (2.1) and (2.2) and the precise details can be found in [33,3,4,9]. One relation between supergravity charges and quantized charges that we will often use is This is the parameter that controls the depth of the BTZ throat: the redshift between the cap and infinity.
The superstratum geometry
The construction technique for superstrata are well-documented in many places (see, for example, [33,3,4,9,39]), and we are simply going to summarize the results of such an analysis.
Superstrata are most simply described within the six-dimensional (0, 1) supergravity obtained by compactifying IIB supergravity on T 4 (or K3) and then truncating the matter spectrum to tensor multiplets. For supersymmetric solutions, the six-dimensional metric takes the form [40,41]: are null coordinates and y parametrizes the common S 1 of the D1 and the D5 branes.
In the superstrata considered here, the metric, ds 2 4 , is simply that of flat R 4 and it is most convenient to write it in terms of spherical bipolar coordinates: where Σ ≡ r 2 + a 2 cos 2 θ . (3.10) The vector, β, is chosen to be the potential for a self-dual magnetic field on R 4 with a source along r = 0, θ = π 2 : β = R y a 2 √ 2 Σ (sin 2 θ dφ − cos 2 θ dψ) . (3.11) The remaining pieces of (3.7), namely the vector, ω, that lies in R 4 and the functions P and F are obtained by solving the BPS system following the techniques described in [33,3,4,9]. The data about the CFT states involve exciting particular Fourier modes in the three-form fluxes in the six-dimensional geometry. However, the fluxes are not relevant to our problem and so we will simply provide the metric quantities that emerge from solving the BPS system and refer the interested reader to [8] for the tensor fields. 5 The metric is given by: where (3.13) If one expands the metric (3.7) around infinity using (3.12), one can extract the angular momenta and momentum given in (2.1).
Of particular importance in this paper will be the superstratum geometries that have very long capped BTZ throats, and hence cap off at very high redshift. The hallmark of these geometries is that j L is extremely small compared to the other charges. From (2.1) and (2.2) it is evident that such solutions arise when: (3.14) In this regime, the three-dimensional manifold parameterized by (u, v, r) corresponds to a highlyredshifted global AdS 3 cap region in the IR, 0 < r √ n a. Then, as Γ transitions from 0 to 1, the geometry resembles a BTZ throat. In particular, the geometry looks like AdS 2 ×S 1 for √ n a r √ Q P and an "upper" AdS 3 region for It is also possible to have Q P Q 1,5 (that is b R y ). For these charges, the BTZ throat is reduced to a simple AdS 2 ×S 1 throat that transitions to flat space without any intermediate AdS 3 region. As always with brane configurations, the transition to the asymptotically-flat region occurs when the constants in the warp factors begin to dominate the terms that fall off with the radius. This happens when r √ Q I and the metric becomes five-dimensional flat space times the S 1 common to the D1 and the D5 branes.
We therefore have three distinct sub-regions that are depicted in Fig.1
(a):
• A global AdS 3 ×S 3 cap region in the IR: The cap geometry is obtained by taking the limit r √ na (corresponding to Γ n ∼ 0) in (3.12). We decompose the six-dimensional cap metric as an S 3 fibration: is the metric on S 2 : The (r, τ, y) manifold defines a (hugely red-shifted and boosted) global AdS 3 . The d Ω 2 term in (3.15) give the metric on the U (1) × U (1) defined by (φ, ψ). The φ-circles and ψ-circles universally pinch-off at θ = 0 and θ = π 2 , respectively and so the (dθ, dφ, dψ) components describe a round S 3 with non-trivial fibering over the three-dimensional space-time.
• An intermediate S 3 fibration over a BTZ throat: The metric reduces to: . This is simply a trivial S 3 fibration over a red-shifted extremal BTZ geometry. The left and right temperatures are • The product of flat five-dimensional space-time and the common D1-D5 circle: For √ Q I r, all quantities in the metric converge to a constant or to zero, giving Henceforth we will assume that n is large. This greatly simplifies the structure of the metric without losing the essential physics. This assumption means that the global AdS 3 cap will be large in units of the AdS radius, and hence will contain a large number of bound states. The bound states that localize in the cap only have small interactions with the rest of the geometry and, as we will see, their decay can be treated accurately in perturbation theory.
Scalar wave excitations
We will look at the behavior of massless scalar modes satisfying where g M N is the six-dimensional metric defined in (3.12). Since the geometry is independent of u, v, φ, and ψ, we can decompose the scalar into Fourier modes along these directions: The wave equation becomes an expression of the form: where we have defined the Laplacian operator,L, viā The angular and radial potentials, V r (r) and V θ (θ), and the non-separable term, W(r, θ), are given by: , where we have introduced a transition function, F (r), between the cap and the outer region, and an asymptotic potential V asymp (r), which represents the difference between the asymptoticallyflat superstrata and the asymptotically-AdS 3 superstrata: a 4 n(n + 2) 2 + 2r 2 a 2 n(n + 1) − 2(r 2 + a 2 ) F (r) a 2 (n + 2) + 2r 2 , Moreover, the wave profile (3.21) must be 2πR y -periodic along y which requires (3.27) We now have three integer-moded quantum numbers related to the periodicities along (y, φ, ψ) We conclude by noting that for quasi-normal modes, energy must be able to leak out at infinity and so the potential must be negative at large r. This implies ΩP = Ω (Ω + q y ) > 0 . (3.29) In contrast, the potential for the modes with ΩP ≤ 0 is positive at infinity, and hence these modes are "eternally trapped." We will discuss the modes further in Section 7, and restrict our attention here to the quasi-normal modes.
Separability
We begin by noting that the failure of separability of the wave equation is encapsulated entirely in the term W(r, θ) in (3.22) and defined in (3.24).
First, we note that W is proportional to Ω 2 and, as we will show, it is extremely small for the lowest-energy quasi-normal modes. However, independent of its coefficient, W is also parametrically small when the BTZ throat is long.
If one examines W one can see that it contains terms that could also be moved into the separable terms. In fact, in defining W we have been careful to adjust these terms so that W is parametrically smaller than V Ω 2 , the coefficient of Ω 2 in V r (r). That is, Thus W is also negligible for all values of Ω.
For geometries with a deeply-capped BTZ throat with bRy , which is indeed small. The negligibility of W is then independent of the parameters of the mode {Ω, q y , q φ , q ψ } and only relies on having a solution with a long throat.
We can therefore neglect W, and take The wave equation (3.22) then reduces to: The second equation is almost, but not quite, the wave equation on a round S 3 : Without the last term (proportional to P Ω), the smooth solutions of this equation are Jacobi polynomials and λ = ( + 2) , ∈ N . (3.33) The (P Ω)-term in (3.32) comes directly from the coupling of the geometry to flat space and it arises in other investigations similar to ours (see for example, [16][17][18][19]). We would like to argue that this term will only cause a small correction to the spectrum (3.33): To make this correction parametrically small, we will take a 2 R 2 y and we will prove later that bound states have P Ω that scales at large as a 4 b 4 2 . Thus, once again, having a geometry with a long black-hole-like throat is enough to consider that the eigenvalue, λ, is given by (3.33) at leading order.
Without the term proportion to P Ω, the angular wave equation is exactly solvable and gives (3.35) which is regular at cos 2 θ = 1 if and only if one imposes the bound (3.36)
Schrödinger form, the large-n limit and regions of the geometry
As in [14], all the interesting physics is encoded in the potential function, V r (r). The apparent complexity of its form (3.24) can be removed by dissecting it into various limits. Indeed, as explained in Section 3.1.2, the superstratum geometry can be thought of being composed of a global-AdS 3 cap at small r, a BTZ throat in the middle, and an asymptotically-flat region at large r. We will show that the scalar wave equation reflects this geometric structure. We will also significantly simplify the discussion by taking the large-n limit.
We first convert the radial equation into an equivalent Schrödinger problem. There is an infinite number of ways to do this, but we will use the approach of [14], which was particularly effective and simple. Thus, we use: The radial wave equation gives where V (x) is given by: In terms of the geometry, the large-n limit produces a large, highly-redshifted global AdS 3 cap region, with 0 < r √ n a, which, at larger radius, transitions to the BTZ throat. From the point of view of the scalar wave equation, this transition is driven by the behavior of the transition function, F (r), defined in (3.26), as it goes from 1 to 0. The region with F (r) ∼ 1 is the cap, while the bump in F (r) that occurs at r ∼ √ na corresponding to the beginning of the BTZ throat, as depicted Fig.1(a).
The transition to the asymptotically-flat region occurs when the r 2 term begins to dominate over (Q 1 + Q 5 ) in the overall factor of V asymp (r) in (3.25). Thus, the asymptotically-flat region begins for r Q 1,5 . We therefore have three distinct sub-regions of the geometry in which we can simplify the potential. These are the yellow, brown and green regions depicted in Fig.3. -The global AdS 3 cap region: when 0 < r √ n a, or x 1 2 log n: The potential is well-approximated by the potential of a scalar field in a AdS 3 background (green dotted curve in Fig.3): As we will see, the factor 1 + b 2 a 2 make the cause the values of Ω where bound states occur to be highly redshifted compared to a simple global AdS geometry.
The form of this potential is simple because we have taken the large-n limit: At large n, the last term of the first line of V(r), (3.24), and all the terms in the second and third lines are negligible. As explained in [9], the cap structure is more complicated for small n.
-The BTZ region: when √ n a r Q 1,5 , or 1 2 log n x 1 2 log The potential is well-approximated by the potential of a scalar field in an extremal BTZ black-hole (orange dotted curve in Fig.3): (3.41) The form of the potential is the same as one would have in a standard BTZ geometry (see, for example, [14]) except that the parameters have been shifted by constants proportional to Q I R 2 y . These terms arise through the gluing to flat space.
-The flat region: when Q 1,5 r, or 1 2 log The potential is well-approximated by the potential of a scalar field in flat space, which is shown as a red dotted curve in Fig.3): This reflects the relative roles of the three-dimensional mass, ( + 1), and the asymptotic decay of the energy and momentum at large x. If Ω P is positive, the last term is negative and "destabilizes" the bound states at the cap to produce quasi-normal modes. However, if the last term is negative, the modes will be trapped forever in the geometry.
As we will show below, the bound states will have frequencies quantized in units of This is a consequence of the huge red-shift created by the long capped BTZ throat of the microstate geometry.
We label the modes by a mode number N ∈ N and their frequencies will behave as a 2 , we can simplify the constant term, in each potential and work with 6 : (3.45) For N 2 Q 1,5 a 2 , the constant term starts to be negative and quasi-normal modes no longer exist.
Energy regimes, mode numbers and mass
One can arrive at an even simpler picture of the bound-state physics if one thinks about energetics in terms of the mode number, N , and the three-dimensional mass, , of the six-dimensional massless mode. Indeed, a closer study of Fig.3(a), suggests that the approximate potentials, V cap (x) and V Flat (x) actually match the full potential at low energy far outside the ranges in r described above. This means that we can think of the physics in this regime as being controlled by the highly red-shifted AdS cap transitioning directly to flat space.
As noted above, the cap potential is a good approximation for r √ n a, or x 1 2 log n. For small N and large , the modes are strongly trapped by the gravitational potential and hence become localized in the cap and do not feel the other features of the full geometry.
The potential barrier for tunneling is set by the barrier height, ( + 1), and so the relevant question is when the potential starts to level off at this value. One possibility is that this transition is in the cap region and determined by the cap potential as in Fig.3(a); one sees that this occurs when the last two terms of V cap (x) in (3.45) become smaller than the first term. This happens when Using (3.44), this is equivalent to Given that the cap region is approximately at x 1 2 log n, we see that the transition is indeed in the cap region as soon as the mode numbers, N , is in the range Thus for the lowest modes, satisfying (3.48), the long BTZ throat plays a relatively minor role in interpolating between V cap (x) and V Flat (x). One should note that the BTZ throat plays an essential role in the physics of superstata as it is enables the existence of an extremely highlyredshifted cap. Moreover, in the Green function computations of [14], the BTZ throat led to thermal decay at intermediate times. However, from the point of view of the lowest lying bound states and of the quasi-normal modes, all that really matters is that the cap is there and that it transitions smoothly to flat space. This leads to the following picture.
-The low-energy regime: When the mode number is bounded by (3.48), the wave is essentially contained in the IR AdS 3 cap. Its potential will be well-approximated by the highly-redshifted AdS 3 potential glued to flat space and the BTZ part of the geometry has a negligible effect, as depicted in Fig.3(a): (3. 49) In this sense, the physics here is similar to the analyses of quasi-normal modes in other AdS 3 geometries that are glued to flat space in the UV [18,19]. The important difference in our work is that we have more parameters to control the depth of the throat and our AdS 3 region is highly redshifted, by a factor n 1 n 5 j L . One should therefore expect similar results to those of [18,19] except that our frequencies are quantized in units of 2j L n 1 n 5 with arbitrarily low j L . This will lead to a much slower decay rates for the modes trapped in superstrata with deeply-capped BTZ throats.
-The intermediate-energy regime: When the mode number lies in the range N √ n , as depicted in Fig.3(b), the energy level is large enough for the wave to explore the BTZ throat of the geometry. In this regime, one necessarily has to make use of the details of the BTZ potential, V BTZ (x), to describe the transition from the cap potential to the flat potential. One might therefore expect to find some effects of the BTZ throat on the spectrum of quasi-normal modes.
We call this regime the "intermediate-energy regime" to differentiate between the highenergy modes, with N 2 Q 1,5 a 2 , that correspond to a potential where the barrier starts to be negative and where quasi-normal modes no longer exist.
To summarize, we have two energy regimes. We depict these regimes in Fig.4. They are separated by a boundary region around the line N ∼ √ n . At low energy, N √ n , the properties of the modes are determined by the red-shifted AdS 3 potential glued to flat space. In the intermediate-energy regime, the spectrum of quasi-normal modes will be modified by the BTZ part of the geometry. There are two parts of these regimes that will be important to us later. The low-energy regime contains the large-limit at fixed N ; the intermediate-energy regime contains large-region with N fixed at a value larger than √ n.
Quasi-normal modes of asymptotically-flat superstrata
For superstrata, much of the essential physics is encoded in the radial components of the wave equation and so we have examined various limits of the radial potential function. In particular, in the last section we exhibited a deeply red-shifted, global AdS 3 cap that is connected to the asymptotically-flat region via a deep BTZ throat. We also showed that, for low-energy modes, the effect of the BTZ region is negligible and such bound states are largely determined by the red-shifted cap. We now use this structure to compute the quasi-normal modes. As we remarked earlier, because the physical structure of superstrata with deeply-capped BTZ throats is universal, we expect our conclusions to be largely independent of the details.
Quasi-normal modes in the low-energy regime
We apply the WKB techniques described in Section 2.3 to the superstratum. The quasi-normal modes are labelled by a mode number, N ∈ N, defined by: and the first-order correction, δΩ N , is purely imaginary and is given by:
Low-energy Regime
Remember that Ω is the frequency along the t−y direction and P is the momentum along the t+y direction, whereas, in the general formula (2.18) of Section 2.3, ω was the conjugate momentum of t. This means that sign(Re(ω)) is now replaced by sign(Re(Ω + P )) = sign(2Re(Ω) + q y ).
We now have to evaluate the integrals: where x 0 , x 1 and x 2 are the three turning points as depicted in Fig.3(a). For the three approximate potentials (3.45) these integrals are elementary .
The classical turning points that define the bound states are given by: and the integral Θ in (4.4), yields: The WKB approximation requires modes with a large number of oscillations between x 0 and x 1 , and this means: 9) and this leads to Therefore, using (4.2), one finds that the ground-state frequencies are given by: 11) for N ∈ N. As with any physical system, we have two branches of frequencies, one positive and one negative. Note that, as we anticipated in (3.44), one has Ω N = ± 2j L n 1 n 5 (N + . . . ) . (4.12) In particular, the frequencies are quantized in units of 2j L n 1 n 5 1. The fact that the frequencies are extremely small was essential in going from (3.40) to the simpler form of V cap in (3.45).
Last but not least, the precision of the WKB approximation requires that N 10 to have a large number of oscillations between the turning points. At the other extreme, to compute Θ using the potential, V cap , means that the classical turning point, x 1 , must remain in the cap region, which means x 1 < 1 2 log n, which is guaranteed if B < 1 2 n( + 1) 2 . Using (4.11) in (4.6), the validity of the computation above leads to a bound on N : which is exactly the bound we have already established for the low-energy regime.
The quasi-normal decay rates, δΩ N
We now apply (2.18) to obtain the perturbative imaginary corrections to the normal modes caused by the tunneling through the asymptotically flat region.
The first part is straightforward. It follows from (4.10) that
∂Θ ∂Ω
(4.14) The evaluation of the integral, (4.4), that defines T is more of a challenge because it crosses between regions in which we have made different approximations to the potential. Indeed, we first note that the endpoint, x 1 , of the integral is determined by V cap (x) and is given by (4.7), while the other endpoint, x 2 , is determined by V Flat (x) and is given by: One can make a reasonably good estimate of the value of T by approximating the entire integral by the area of a rectangular plateau of height ( + 1). Since V Flat (x) is dropping exponentially fast, the right end of the plateau is well approximated by x 2 . Locating the left end of the plateau,x 1 , is a little more difficult. It turns out that x 1 is not a good estimate for this point because the ramp up to the plateau can be fairly gradual. It is better to estimate the point at which V cap (x) is approaching ( + 1). We claim that the following is a better estimate ofx 1 : 0 <x 1 < 1 2 log n . , wherex 1 = h 2 log n for 0 < h < 1. We now make a much more precise evaluation of T by performing a calculation that may be viewed as the WKB analogue of a matched asymptotic expansion. The strategy is extremely simple: we know that V cap (x) and V Flat (x) provide accurate approximations to the exact potential and that the domains of validity of these approximations overlap for a substantial interval at the top of the plateau where V (x) ≈ + 1. We therefore know that, to a very good approximation, one has where x 1 x x 2 is chosen to lie in the overlap region at the top of the plateau as depicted in Figure 5 As we remarked earlier, both integrals in (4.18) are elementary and can be obtained in closed form. The detailed analysis may be found in Appendix A.3. The general formulae are far from simple, however it is easy to make approximations that improve upon (4.17). Indeed, motivated by the results coming from matched asymptotic expansions like those of [32,18,19], we have shown that the following result closely approximates the WKB expressions for T : where p C q is the standard binomial coefficient. Note that Stirling's approximation gives ∼ ! to leading order, and so the first terms in (4.19) coincide with the simple estimate, (4.17).
We have tested (4.19) against the WKB formula for T in the Appendix A.4. We found that they exactly match up to third order in the large-N and large-expansions. Moreover, we used numerics to show that the mismatch is less than 1% as soon as (N, ) > 10.
Combining (4.19) with (4.14) and (4.3), we arrive at the main result for the low-energy regime: The sign-sensitive terms in (4.14) and (4.3) combine to the sign of Ω The last inequality is simply (3.29), which is required for having quasi-normal modes.
Thus, the right-hand side of the expression is a positive purely imaginary number. The time dependence of the modes is given by which guarantees that the wave profile is decaying in time for both branches of frequencies (4.11).
One important feature is that the decay time-scale is set by n 1 n 5 j L ∼ b 2 a 2 , which is extremely long because of the very large red-shift between flat space and the cap.
We also note that the essential, leading-order physics of the quasi-normal decay is captured by the simple "rectangle" approximation that led to (4.17). The more accurate computation leads to corrections that are sub-leading at large .
Finally, for our analysis in Section 4.3.1, we note that the low-energy regime contains the large limit of the spectrum of quasi-normal modes for N √ n.
Quasi-normal modes in the intermediate-energy regime
We now consider mode numbers with N √ n . These intermediate-energy states start exploring the BTZ throat of the geometry. In particular, the middle classical turning point, x 1 , is no longer in the cap region as depicted in Fig.3(b). Once again to obtain the spectrum of quasi-normal modes via WKB, one needs to estimate the integrals Θ and T (4.4) using the approximate potentials. The computation proceeds much as in Section 4.1
The computation of Θ
In the low-energy regime, the first two turning points are in the cap region. This facilitates the computation of Θ because it only involves V cap (x). In the intermediate-energy regime, we simply follow the approach of Section 4.1.2 and estimate Θ using V cap (x) from x 0 to x ∼ 1 2 log n and V BTZ (x) from x to x 1 . However, because of the depth of the potential well and the rapidity of the climb of the BTZ potential (see Fig.3(b)), almost all the support of the WKB integrals lies within the cap region.
One can easily estimate the error in simply using V cap (x). The crossover between the cap and the BTZ throat starts at x ∼ 1 2 log n, at which point the potential has some large, negative value, V c . The potentials V cap (x) and V BTZ (x) lead to two different values, x 1,cap and x 1,BTZ , for the classical turning point (see Fig.3(b)). The difference of the WKB integrals for the two potentials is approximately the area of the triangle with base x 1,BTZ − x 1,cap and height |V c |. This leads to an error estimate of order where the last inequality follows from N √ n .
Thus we find that Θ receives a small correction from the result for the low-energy regime (4.10): which, at zeroth order, leads to the two same branches of normal frequencies for Ω N , as in (4.11).
The computation of T and the decay time
We compute T just as in Section 4.1.2, but now we use V BTZ (x) to define the left side of the plateau. In particular, the classical turning point is defined by the vanishing of V BTZ (x) in (3.45). This yields wherep is the effective BTZ momentum: Since V BTZ (x) rises to the plateau extremely fast, and V Flat (x) descends similarly fast, one expects that the WKB integral can be well approximated by a rectangular plateau of height ( + 1) and width x 2 − x 1 . Using (4.15) and (4.23), this leads to 2 a 2 Ω (Ω + q y ) ( + 1) 4 R 2 y n 1 n 5 Ω j L p + p 2 + (n + 1 2 ) 2 ( + 1) 2 ( +1) .
(4.25)
A more precise computation in which one uses (4.18) with V cap replaced by V BTZ yields (4.27) The quantity κ that is sub-leading in the large-expansion. Thus (4.25) does indeed yield a good estimate of T .
Applying the WKB formula at zeroth order, using (4.3) and (4.11) we obtain the following results in the intermediate-energy regime, N √ n : (4.28) Once again, once can use the same arguments as in the low-energy regime to show that both branches of frequencies are decaying in time. Moreover, we also note that the essential, leadingorder physics of the quasi-normal decay is captured by the simple "rectangle" approximation that led to (4.25). The more accurate computation leads to corrections that are sub-leading at large . Also, for our analysis in Section 4.3.1, we note that the intermediate-energy regime contains the large limit of the spectrum of quasi-normal modes for N ≥ √ n.
The eikonal limits
One of the original motivations for the stability analysis of [18] was the fact that microstate geometries with evanescent ergosurfaces will have time-like geodesics with extremely low energies and that are trapped for extremely long periods of time. The link between this and the study of modes of the scalar wave equation arises through the standard geometric-optics limit in which the phase function of the WKB solution provides a Hamilton-Jacobi function for geodesics. In particular, the normals to the wave-fronts become the tangents of the geodesics. Just as in WKB, the geometric optics limit, or eikonal limit, becomes more accurate at higher wavenumbers. Moreover, by taking these limits in the right way, one can localize the wave in various geometric regions and use this to capture the physics of particular geodesics.
For our geometries it is interesting to consider low-energy modes in the large-limit. This is because the three-dimensional mass of the scalar modes is ( + 1), and massive modes are more strongly trapped in the three-dimensional geometry. Equivalently, ( + 1) 2 is the height of the potential barrier between the bound states and the asymptotically flat region. As we have remarked, low-energy modes localize in the cap, near r = 0. Moreover, at large-one can localize the scalar harmonics on the S 3 , and especially near the evanescent ergosurface, r = 0, θ = π 2 . Such limits were a major focus of [18]. There are several obvious limits to consider. First, one can take large while holding the other mode numbers, q ψ , q φ , q y and N fixed, and small relative to . These are "generic" sphere modes in that they do not localize in any particular region. Of more interest is to take |q ψ | = , q φ = 0, or q ψ = 0, |q φ | = . (Remember that one must respect (3.36).) It is evident from (3.35) that these choices localize the wave at θ = 0 or θ = π 2 respectively. From (3.7), one sees that the evanescent ergosurface is located where P diverges. From (3.12) and (3.10) one sees that this corresponds to r = 0 and θ = π 2 . Thus we anticipate that stronger "trapping" of modes in the cap (localized near r = 0) will arise if one takes |q φ | = .
The physical difference between these limits, and the significance of |q φ | = become apparent in our results for the normal modes, (4.11) and (4.28): One sees that a generic choice of mode numbers leads the Ω (0) N growing linearly with . However, this growth with can be cancelled to produce if and only if we take where the "±" depends on which branch of Ω (0) N is considered. Thus generic modes have frequencies that grow linearly with , and it is only the modes that localize near the evanescent ergosphere that have frequencies that do not grow with . This is the wave-equation analogue of the statement that it is the geodesics that localize near the evanescent ergosphere that can have arbitrary low energy.
From the results of Section 4.1, we found that the decay rates depend, to leading order, on the quantum numbers and Ω (0) N as: We therefore see the competition between mode energies and barrier height and length. Observe that if Ω (0) N grows linearly with , then the numerator and denominator grow with at the same leading-order rate. If, however, Ω (0) N does not grow with , δΩ N becomes extremely small at large . These are the states that lie close to the ergosphere and are trapped for extremely long times.
We will therefore study the difference in decay times, at large , for generic modes and for modes localized near the evanescent ergosphere. It will be convenient to introduce the shorthand ω (0) N ≡ 2N + 2 + |q y | ∓ q y ∈ N . (4.33)
The eikonal limits for low-energy modes
The spectrum of low-energy quasi-normal modes, N √ n , is given by (4.11) and (4.20). For modes localized around the evanescent ergosurface we have Using Stirling's formula, we obtain the generic expressions
(4.36)
Thus, for the modes at the evanescent ergosurface, the decay rate at large is minimal when q y = 0 and the leading-order terms are where e = exp(1) and we have highlighted the factor of −2 .
For generic modes, we take q φ and q ψ to be arbitrary but differing from (4.31). We will assume, for simplicity, that N, q y , q φ and q ψ are all fixed and small compared to . However, one will obtain a similar result, but with different coefficients, if one allows some of the mode numbers to scale with . Using the same procedure we obtain δΩ N = i exp −2 log n 1 n 5 j L + 2 1 + log a R y + 1 2 log 1 + q y n 1 n 5 j L + (2N + |q y + q ψ | + 1) log − 3 log n 1 n 5 j L + O(1) .
(4.38)
The decay rate at large is minimal when q y = q ψ = 0 and we have where e = exp(1).
Both expressions, (4.37) and (4.39), for δΩ N are products of th powers of small parameters. Most notable is the factor j L n 1 n 5
+3
, (4.40) which represents the effect of the large red-shift between flat space and the cap.
The primary, and most significant, difference between (4.37) and (4.39) is the factor of −2 . It is this factor that led to the suggestion that the evanescent ergosurfaces of microstate geometries give rise to exceptionally long-term trapping of matter. We we will discuss this further, and explain why this conclusion is unwarranted, in Section 6. We will also discuss why the factor (4.40) carries the important physics of the quasi-normal decay of microstate excitations.
We note that the factor of −2 in the decay rate is cancelled when the mode number, N , scales with . (A similar conclusion holds for the mode number, q y , so long as it has the proper sign.) This means that the −2 scaling is only a property of the lowest modes, whose frequencies and y-momenta remain small compared to . Since the intermediate-energy modes necessarily have frequencies that scale with , one should also not expect the −2 factor in their decay rates, as we will now establish.
The eikonal limits for intermediate-energy modes
The intermediate-energy modes are defined as the excitations with N √ n . Their frequencies and decay rates are given in (4.28). It is evident from these expressions that even if one chooses q φ and q ψ as in (4.31) so as to cancel the explicit -dependence and arrive at (4.30), there is still the implicit -dependence in N . To take this into account, we define by α the fixed ratio For simplicity, we will also assume that {q y , q φ , q ψ } are fixed (one obtains a similar result with different coefficients if they scale with ). By expanding δΩ N in (4.28), we obtain δΩ N = i exp −2 log n 1 n 5 j L + 2 + log α 3 a 2 n + 1 2 R 2 y + log 1 + q y α n 1 n 5 j L − 3 log n 1 n 5 j L + O(1) .
(4.42)
The decay rate is then minimal when q y = 0 and we have
.43)
A priori, this decay is faster than that of (4.39) because of the factors of α 3 n n 5 2 . This is because we are considering intermediate-energy states that have N scaling with . Hence, despite being highly-localized on the sphere, the high occupation numbers mean that these excitations are beginning to explore the BTZ throat and have more energy to tunnel through the barrier. Such modes are no longer strongly localized near the evanescent ergosphere, located at r = 0, θ = π 2 , and our analysis shows that these higher modes do not have the exceptionally low decay rates that result from the extra factor of −2 in (4.37).
It is interesting to push (4.39) and (4.43) slightly outside their domains of validity and look at the crossover between these formulae at large N , as well as large . The ratio of these expressions is 2N +1 .
(4.44)
As N becomes large, one sees that the numerator grows faster than the denominator. This is because (4.39) is based on the AdS cap, which does not limit, or contain, the modes nearly as strongly as the BTZ throat. Indeed, (4.43) does not explicitly depend on N . This is because the extremely steep BTZ throat strongly attenuates any mode that enters the throat and confines modes very strongly within the cap. This attenuating effect of the BTZ throat was also very noticeable in the thermal decay of the Green functions studied in [14].
Quasi-normal modes of other microstate geometries
One of the simpler families of three-charge microstate geometries, obtained by Giusto, Mathur and Saxena (GMS), are those generated through a spectral flow of the Lunin-Mathur D1-D5 geometries [26][27][28]. These are closely related to the GLMT geometries, which are obtained by fractional spectral flow [29].
Because of their simple relationship with the two-charge D1-D5 system, the GMS and GLMT geometries and their scalar wave equations are relatively simple. In fact, the wave equation is exactly separable. It is for these reasons that GMS geometries were recently used [18,19] to study instabilities and compute quasi-normal modes.
Both derivations have been done using an asymptotic expansion analysis but in different limits: in the large-limit (eikonal limit) for [18] and in the near decoupling limit for [19]. Our purpose here is to re-examine the results of [18,19] and compare and contrast them with our WKB analysis of quasi-normal modes of superstrata.
Unlike superstrata, the GMS geometries do not have the same charges and angular momentum as a black hole with a macroscopically large horizon area, and hence are dual to a more restricted family of CFT states. Because of this, GMS geometries do not develop a long black-hole-like throat. However, GMS geometries involve a Z k orbifold and one can generate large red-shifts by taking the orbifold parameter, k, to be large. This leads to more stringent limits on the redshifts of GMS solutions when compared to superstrata because the supergravity approximation will break down for high levels of orbifolding. Superstrata do not suffer from any such limitations.
The GMS geometries
Here we summarize the essential details of the GMS geometries, their charges and quantum numbers. We refer the interested reader to the original papers [26][27][28] for more details about their construction and the holographically dual CFT states.
As with superstrata, GMS solutions are most simply described within the six-dimensional (0, 1) supergravity obtained by compactifying and truncating IIB supergravity on T 4 (or K3).
The six-dimensional metric takes the form [26][27][28] where u and v are the null coordinates composed from the time coordinate and the common S 1 of the D1 and D5 branes in (3.8). The functionsΣ and P are defined bȳ This metric is asymptotically flat and caps off in its center as an orbifold of global AdS 3 ×S 3 . Once again we are not interested in the three-form fluxes of the solutions since scalar excitations are insensitive to them. Explicit expressions can be found in the references cited above.
The solution depends on the parameters Q 1 , Q 5 , a, γ 1 , γ 2 and η, which determine the charges of the system. As one would expect, Q 1 and Q 5 are the D1-and D5-brane supergravity charges. These are related to the parameter a via the regularity condition: One should remember that this solution was constructed starting from a 16-supercharge asymptotically-AdS solution that only had D1 and D5 charges, and its momentum charge was added by performing a spectral flow 8 rather than by adding an explicit momentum wave, as is done in superstrata. The parameters γ 1 ,γ 2 and η are related to the momentum charge via: By expanding the metric at infinity, one can also obtain the two angular momenta of the solution The parameters γ 1 and γ 2 are related to the spectral flow parameter, n, and the orbifold parameter, k ∈ Z, via [29]: Corresponding to the supergravity charges, (Q 1 , Q 5 , Q P , J L , J R ), there are the quantized charges (n 1 , n 5 , n P , j L , j R ) (2.2). These charges are related to the parameters via: j L = n 1 n 5 2 γ , j R = (n + 1 2 ) n 1 n 5 γ , n P = n (n + 1) n 1 n 5 γ 2 . (5.7) Finally, it will be convenient to define a scaled version of the a-parameter: a ≡ √ ηγ a . (5.8) While the underlying CFT states and the geometry are different, it is convenient to relate the quantized charges to those of the superstratum in order to obtain an approximate correspondence. In particular, (5.7) matches (3.4) is we identify: N 2 ↔ n n 1 n 5 γ , n + 1 ↔ (n + 1) γ . One can similarly match the supergravity charges to arrive at The GMS solution and the superstratum are not, of course, the same solution, and have different ranges of validity, but the charges in (5.10) and (5.11) correspond perfectly in the regions where the phase spaces overlap. Therefore the superstratum can be compared, at the mathematical level, to the GMS solution that satisfies the constraint (5.10). In particular, this correspondence provides a very useful comparison between the energy regimes of both geometries.
It is important to remember that geometric details are very different and that highlyredshifted GMS geometries have a pathology that superstrata do not share. The redshift parameter between flat space and the core of the GMS geometry is To obtain a highly-redshifted geometry one must take k to be extremely large. Indeed, for j L ∼ O(1) one must take k ∼ O(n 1 n 5 ). However, one should remember that the AdS 3 and the S 3 have radii of order (n 1 n 5 ) 1/4 in Planck units. This means that if the orbifold is to avoid breaking the geometry into sub-Planckian pieces one must require k (n 1 n 5 ) 1/4 .
Scalar wave perturbations
The massles Klein-Gordon equation (3.20) is directly separable in the GMS geometry. We consider the mode expansion 9 : 14) The radial and angular wave equations are then: whereā is defined in (5.8) and the potentials are defined by: The asymptotic potential, V asymp (r), is identical to the one obtained in the superstratum solution at large-n (3.42). The only other part of the potential is V cap (r), which is purely of the form of a global AdS 3 potential. There is no intermediate BTZ throat and no corresponding intermediate regime in the potential like that of (3.41).
The redshift factor can be extracted from the coefficient of Ω 2 in V cap (r), and here we find a redshift of ∼ k (since η 1) whereas (3.45) leads to a factor of 2(1 + b 2 2a 2 ). This is in accord with the correspondence (5.11). The redshift is given universally by The angular potential (5.16) is almost identical to the superstratum angular potential (3.32). The last term, proportional to P Ω, generates a correction from the usual spherical harmonics on S 3 . Thus we have In the near-decoupling limit, a 2 R 2 y , we will show that P Ω scales with 1 k 2 2 for bound states.
Thus, for the low-energy excitations (when the mode number N is smaller than kRy a ), one can take λ = ( + 2) and S(θ) is given in (3.35) with Just as for the (2, 1, n)-superstratum potential, we will use the integrally-moded quantum number, q y , and replace P = Ω + q y .
Quasi-normal modes via asymptotic matching
Following [19], we introduce the short-hand notation: (5.20) The radial wave equation becomes We will show that the frequencies of the normal modes have the form where N ∈ N is the mode number. Since we will take k large in order to have a solution with a long throat, this means that the terms involving Ω in the definition of ν can be taken to be small and so, just as for the superstratum, we have ν ∼ + 1 .
If N starts to become large, of the order N ∼ kRy √ Q 1,5 , then ν 2 will become negative and there will be no quasi-normal modes. We therefore restrict our attention to modes with N kRy √ Q 1,5 The standard approach to quasi-normal modes is to apply matched asymptotic expansions. Indeed this was done in [16][17][18][19] and we briefly recap this computation. The details can be found in Appendix A.2. In doing this analysis, we will restore a sign restriction in [16,17,19] that make them suggest that one branch of quasi-normal modes is potentially unstable by growing with time. We will show that without this sign restriction, both branches correspond to quasinormal modes that decay with time for GMS backgrounds. The inner equation is simply that of a global AdS 3 cap and the outer equation is solvable in terms of Bessel functions.
In the near-decoupling limit (a 2 R 2 y ), the overlapping region, where the radial potential is dominated by ( + 1) 2 − 1, is large. This means that the matching of K in and K out provides an accurate approximation. The wave profile of quasi-normal modes is constrained by imposing smoothness at the origin and an outgoing boundary condition at infinity. As for the (2, 1, n) superstrata, having an outgoing wave solution to (5.24) necessarily requires For more details of the method we refer the interested reader to Appendix A.2.
In a nutshell: one imposes the proper boundary conditions in each region and then matches the power-law behavior of the hypergeometrics of global AdS at large r, to the small-r power-law behavior of the Bessel functions. This leads to the following constraint 10 : This equation is not exactly solvable, but, because the right-hand side is small for the lowestenergy states, one can work perturbatively. At zeroth order, the left-hand side must vanish. Thus the Gamma functions on the denominator must hit their poles. This results in the spectrum of normalizable modes of the AdS 3 cap. For the two different Gamma functions, we have two branches of frequencies, one mostly positive and one mostly negative labelled by a mode number N ∈ N: 2N + +2 + |k q y + n q φ − (n + 1)q ψ | ∓ (k q y + (n + 1)q φ − n q ψ ) . (5.27) To find the first-order correction, Ω = Ω (0) N + δΩ N , we expand the Gamma function around its pole and obtain a purely imaginary contribution l+1+N +|ζ| C l+1 , (5.28) 10 We have changed a sign restriction in [19]. This paper derives the formula prematurely fixing the sign of Re(ω), where ω is the momentum of the modes along t (corresponding to ω = 2Ω + qy in our convention). However, we have two branches of frequencies, one positive and one negative (5.27). Thus, their formula applied to the branch with opposite sign leads them to the conclusion that this branch might correspond to unphysical mode that grow in time. If we do not fix the sign convention prematurely, we can see from equation (5.26) that we obtain a factor of e i sign(2Re(Ω)+qy )π instead of e i π and both branches lead to decaying modes in time.
where n C m is the usual binomial coefficient. The time dependence of the modes is given by (4.21) which guarantees that the wave profile is decaying in time for both branches of frequencies (5.27).
To summarize, the spectrum of quasi-normal modes of GMS solutions is given by two towers of frequencies labelled by N ∈ N, one positive and one negative, With the condtion that Ω We could, equally well, have used the WKB approach that we used for superstrata. Indeed, the techniques are almost certainly equivalent in that we match two accurate but approximate solutions in an inner and outer region and this matching is achieved in the large overlap region where the potential is constant. The advantage of the WKB method is that it is easily applicable to geometries with more than two regions, such as superstrata.
In Appendix A.3, we apply our WKB techniques to the GMS backgrounds. This allows us to check in Appendix A.4 the precision of the WKB spectrum formulae (2.15) and (2.18) compared to the matched asymptotic expansion calculation. In a concrete example we show that the mismatch is below 5% as soon as we take N > 10 and > 10.
The eikonal limits
Once again, we are interested in the slowest possible decay rates and the discussion is directly parallel to our discussion for the superstrata.
Slow decay means that we look at the large-limit and arrange the mode numbers so that Ω (0) N remains as small as possible and, if possible, cancel the explicit growth with . This cancellation is slightly more tedious than for quasi-normal modes of the (2, 1, n) superstratum. This is caused by the nontrivial mixing of q φ and q ψ with the parameter n of the background, as is evident in (5.30). It is also related to the non-trivial form of the evanescent ergosphere. We will skip most of the details of the computation, which may be found in [19,18].
The important result is that for any value of n, one can pick a pair of (q φ , q ψ ) satisfying where the ± depends on which branch of Ω (0) N is considered, and for which the ratio q φ q ψ is bounded by n n+1 . As for superstrata, these modes correspond to modes for which the wave profile is strongly localized at the evanescent ergosurface. One then finds In addition to the generic formulas (4.35), one will need +1+X+j C +1 ≈ exp j log 1 + j −1 + log(1 + j) − 1 2 log + O(1) , (5.33) and the imaginary part of the frequency (5.30) behaves as . Thus, for the modes at the evanescent ergosurface, the decay rate at large is minimal when q y = 0 and the leading-order terms are where e = exp(1) and we have highlighted the factor of −2 .
For generic modes, we consider arbitrary q φ and q ψ but differing from (5.31). We will assume, for simplicity, that N, q y , q φ and q ψ are all fixed and small compared to . However, one will obtain a similar result, but with different coefficients, if one allows some of the mode numbers to scale with . Using the same procedure we obtain δΩ N = i exp −4 log k + 2 + log a 2 η 3 4 R 2 y + log 1 + k 2 q y η + (2N + |ζ| + 1) log − 5 log k + O(1) .
(5.36)
The decay rate at large is minimal when q y = ζ = 0 and we have We can now compare this with the decay rate of low-energy quasi-normal modes of the superstrata, derived in (4.37) for modes at the evanescent ergosurface and in (4.39) for generic modes. Taking into account that k = n 1 n 5 2 j L , it may appear that the smallest decay rate for GMS backgrounds is smaller that the smallest decay rate for superstrata, However, as explained earlier, the GMS background has a reliable supergravity description if and only if k (n 1 n 5 ) 1 4 , which is equivalent to j L (n 1 n 5 ) 3
Decay timescales
We now examine the leakage of energy from superstrata with a deeply-capped BTZ throat. Our discussion will closely follow that of [18]. In particular, given the imaginary parts of the quasinormal modes, one is looking for a uniform bounding function, g(t), on a generic energy function, E(t), that measures the energy of a scalar field in the microstate geometries. The function, E(t), is defined on space-like hypersurfaces, Σ t , obtained by time slicing the microstate geometry. The goal is to find a bounding function, g(t), that is independent of the details of the modes.
One should first recall the separated form of our wave-functions, (4.21): .
and note that it is always negative, and represents the inverse decay time of the quasi-normal mode.
Any basic energy function, E 1 (t) should be quadratic in the scalar and its first derivatives, and so should behave, for large quantum numbers, as where 1 is the dominant quantum number on the S 3 .
As pointed out in [18], because microstate geometries have evanescent ergospheres, and may involve trapping, the energy function E 1 (t) might only be bounded by a second-order energy function, E 2 (t), which will be a quadratic in Φ, ∂Φ and ∂ 2 Φ. Thus one seeks a "universal" function, g(t), for which one has for t > 0. The function, E 2 (t) will obey (6.3), but with 2 replaced by 4 . Thus, we expect E 2 (0) < C 4 for some constant, C, that depends only on the energies of the waves and not on the details of the modes. Thus, we are seeking a universal function, g(t) that satisfies e 2ω I t < C g(t) 2 , (6.5) in the large-limit, for some constant, C.
For ultra-compact stars, at large t, the standard uniform bounding functions have the form [45][46][47][48]18] 11 : where D is some constant that only depends on the background. One can then test this bounding function to see if it works for all modes at late times. Indeed, consider the time scale t ∼ e τ for large and for some choice of τ . The condition (6.5) then becomes: which must be satisfied at large for all values of τ > 0. In particular, note that it is compatible with ω I having the form: for any fixed β 0 , β 1 > 0, independent of . Indeed, (6.7) becomes which is obviously satisfied at large by choosing D appropriately.
The important point is that (4.39) has the form of (6.8) and so do most of the factors in (4.37). The problem [18] is the factor of −2 , which means that for low-energy modes that localize near the evanescent ergosphere, ω I contains a piece of the form This means that the right-hand side of (6.7) will always go to zero at large , while the left-hand side of (6.7) can be made arbitrarily large by taking τ large enough. This led to the conclusion in [18] that bound states of microstate geometries decay more slowly than for ultra-compact stars.
However, if has a natural cut-off, Λ, then e −2 log will be bounded below by e −2Λ log Λ . Thus, |ω I | β 0 e −2Λ log Λ e −β 1 , (6.11) and we can show that for appropriately-chosen D we have the bound Thus, in the presence of a UV cut-off for , the standard bounding functions for ultra-compact stars (6.6) is also valid for microstate geometries.
The important point is that superstrata, and microstate geometries, have precisely such a cut-off imposed by the validity of the supergravity approximation. For superstrata, the radius of the S 3 is given by (Q 1 Q 5 ) 1 4 . The th mode has a angular profile on S 3 that has necessarily zeroes between the North and South poles and so slices the sphere into sectors of size −1 (Q 1 Q 5 ) Thus, this means we need to limit by (n 1 n 5 ) 1 4 , (6.14) and has a UV bound given by Λ ∼ (n 1 n 5 ) 1 4 . Thus the supergravity cut-off on the modes in , means that all terms that decay slower than e −β 1 are not an issue. Even more importantly, the primary effect on the leakage of energy are the terms involving j L n 1 n 5 ≈ 1 2 E gap and the energy decay is bounded by the standard expression, (6.6), as ultra-compact stars with a large value for D that will depend on E gap −1 . This is precisely what one should expect for microstate geometries. They look like black holes until very near the horizon scale. They are thus as compact as an object can be, short of being a black hole. It is also extremely natural that the time-scale for the decay is set by the inverse energy gap for the lowest-energy excitations of the system.
We therefore find that, for modes below the supergravity cutoff,
Final comments
We have analyzed the decay rates of quasi-normal modes in superstrata. While we only computed these decay rates using a WKB approximation in a particular family of superstrata in which the massless scalar wave equation is "almost separable," we believe that our results are universal. We have shown that there are two regimes of energy. The low-energy modes are only sensitive to the highly-redshifted AdS 3 cap of the superstrata and the spectrum is the one we obtain in asymptotically-flat redshifted AdS 3 backgrounds, (4.11) and (4.20). In particular, the time scale for the decay is set by the energy gap of the lowest-energy states in the microstate geometry: At intermediate energy, the modes start exploring the BTZ throat of the geometry. We have shown that the real part of the frequencies have almost exactly the same frequencies as the lowenergy modes, but the imaginary part is strongly attenuated in the BTZ throat, (4.28). This attenuatation has the effect of confining the modes for much longer in the cap, when compared to an AdS 3 cap glued directly to flat space. Intuitively, this effect can be thought of as coming from the strong rigidity against perturbation of the AdS 2 throat that interpolates between the cap in the IR and the flat space in the UV.
We have also shown that the extremely-long-duration trapping described in [18] is not an issue, neither for superstrata with deeply-capped BTZ throats nor for the shallow GMS geometries. The concern was that such trapping would lead to instabilities. However, for superstrata with long throats, the modes that would be subject to such long-term trapping have extremely sub-Planckian wavelengths. If one stays within the validity of the supergravity approximation, the trapping has the natural decay timescale determined by the energy gap.
In addition, there exist families of modes that are trapped forever, and never decay. The non-trivial examples of such modes have a "momentum charge" opposite to the momentum of the solution, and the attraction between these opposite charges ensure that the force felt by these modes will always be attractive, and these modes will never be able to escape at infinity. Since these modes never decay, one might worry that if one creates them at the bottom of the solution they would give rise to non-linear instabilities and lead to black hole formation.
However, things are not so simple. First, the microstate geometries we consider have a moduli space whose dimension is n 1 n 5 [23,24], and hence any energy one puts in them will excite the massless degrees of freedom corresponding to moving in this moduli space, and simply move the microstate geometry to another nearby one. This observation was also made in [25,12].
Second, since the momentum charge of the eternally trapped modes is negative (compared to the momentum charge of the background), we expect their physics to be similar to the physics of antibranes. In fact it is not hard to see that if one dualizes the anti-branes in bubbling solutions analyzed in [51,52] to the D1-D5-P duality frame, one of the possible anti-brane charges corresponds exactly to the negative momentum of the eternally-trapped modes.
Hence, we expect these modes to have other decay channels, similar to the brane-flux annihilation of anti-branes [53]. This process was studied in a dual frame where microstate geometries with multiple bubbles have charges corresponding to three M2 branes wrapping two-tori inside T 6 [51,52], and it was found that for microstate geometries with a very long throat this process can be very fast [54]. It would be very interesting to work out the details of this non-perturbative process in the D1-D5-P duality frame, using superstrata instead of multi-bubble solutions, and calculate the decay times for the modes which perturbatively appear to be trapped forever.
It would also be interesting to try to construct the non-supersymmetric solutions sourced by these modes, especially in light of the recent observation that certain six-dimensional superstratum solutions can be described using a consistent truncation to three-dimensional supergravity [55].
Returning to our study of the quasi-normal modes, we have shown that the WKB method can be used to extract the leading-order physics of trapping. In particular, the decay rate is given by (2.18) and is determined by standard barrier penetration calculations. Moreover, rough estimates of the area under the barrier provide the leading-order time-scales. This leads us to believe that are results are universal for all deeply-capped BTZ geometries and not limited to the family of superstrata that we analyzed here.
The final result is that the decay time-scale for states in a superstratum with a deeply-capped BTZ throat is set by where is the "three-dimensional mass," or the dominant wave-number on the S 3 that would represent the horizon of the corresponding black hole.
It would be very interesting to compute this decay time using CFT methods. When R 2 y Q 1 Q 5 the solutions have an AdS 3 × S 3 throat in the intermediate region, between the AdS 2 throat and the flat space at infinity. As such, they are dual to certain states of the D1-D5 CFT [33,56,57]. This CFT has central charge c CF T ≡ n 1 n 5 and is unitary; thus it cannot capture all by itself the decay of our modes. To do this one should couple the CFT to flat space using the same technology as that used for computing the decay time of the JMaRT solution [16,[58][59][60]17]. This could be done by considering an operator of dimension h and Rcharge j L in the D1-D5 CFT coupled to flat space. In the language of this CFT, one expects the decay time (7.2) to be and one may envision doing a calculation of the type presented in [16,[58][59][60]17] in order to evaluate this decay time.
Moreover we have shown that the energy bounds on trapping in superstrata seem to be more consistent with the energy bounds of ultra-compact stars, rather than behaving as some exotic new class of objects. This is precisely what one would hope for a microstate geometry: it is supposed to behave just like a black hole until close to the horizon region, where it caps off and looks just like an ultra-compact star. In this framework, the information problem is resolved by having the state of the entire system encoded and accessible in precisely such an ultra-compact star created and supported by the microstate geometry.
On the other hand, the analysis of very simple microstate geometries has been performed via asymptotic matching. These include supersymmetric GMS solutions [18,19] and nonsupersymmetric solutions [30,16,58,59]. These three-charge solutions have an AdS 3 ×S 3 cap that is directly glued to a flat five-dimensional space with an extra S 1 . This means that exact solutions can easily be constructed in separate, but overlapping, regions and the quasi-normal modes can be obtained by asymptotic matching in the ovelap.
Superstrata, and other microstate geometries with long, black-hole-like throats are far more complicated, and so require a more universally applicable approximation method and this is where WKB methods become more appropriate. One of the goals of this Appendix is to assess the accuracy of WKB methods by making a detailed examination of solutions with a global AdS 3 ×S 3 region glued in the UV to flat space. In particular, we derive the spectrum of quasinormal modes in supersymmetric GMS solutions, using both matched asymptotic expansions as in [16], and using the WKB technique detailed in Section 2.3. We will see that these methods produce very similar results.
wave equation. When this angular wave equation can be reduced to a spherical harmonic equation, ν is labelled by a positive integer, ∈ N, Moreover, the coefficients ζ and ξ will depend on the momenta along φ and ψ (q φ and q ψ ). This dependence will be determined by the AdS 3 cap, essentially its redshift, and the details of the S 3 fibration. For the (2, 1, n) superstratum in the low energy regime, we have (3.40) whereas for GMS solutions, we have (5.16) Because the computation does not require the details of the expression of ζ and ξ, we will keep them as arbitrary parameters.
By inspecting the various terms of the potential (A.3), we easily recognize the potential of flat space, − 4 Ω P R 2 y r 2 + ν 2 − 1, as well as the potential of global AdS 3 , ν 2 − 1 + a 2 ζ 2 r 2 − a 2 ξ 2 r 2 +a 2 . Thus, by requiring that the plateau given by ν 2 − 1 is large, we expect that a WKB approximation or an asymptotic matching method are accurate. The size of the plateau requires us to impose a hierarchy of scales between the turning points of the flat-space potential and the turning points of the AdS 3 potential. This is guaranteed if In addition, the WKB approximation needs a large number of oscillations in the classical regions of the potential. This will require the classical turning points of the AdS 3 potential to be significantly separated; hence this method will not necessarily provide a good approximation for the decay of the first few quasi-normal modes.
The quasi-normal modes are characterized by their oscillatory behavior at large distance. At infinity, their wave profiles are determined by the term − 4 Ω P R 2 y r 2 . Having an oscillatory wave then requires Ω P > 0 .
We prefer to work with the conjugate momentum along the periodic direction y, q y , which is integer-moded: A. 13) and the scalar potential is that of flat space. In the overlapping region, 1 R y a r a R y a , (A.14) both potentials are valid and their solutions can be matched. The philosophy of the matched asymptotic expansion is: -To solve the wave equation in the inner region by imposing the quasi-normal-mode boundary condition at r → 0. For a smooth background as a global AdS 3 cap, this requires a smooth wave profile at r → 0.
-To solve the wave equation in the outer region by imposing the quasi-normal-mode boundary condition at r → ∞. This requires a purely outgoing wave at infinity.
-To match the asymptotic expansion of the wave profiles in the overlapping region. This matching will give an expression that will constrain the frequencies of the modes.
-To solve, perturbatively or exactly, the matching condition. This will give a tower of frequencies labelled by a mode number N ∈ N.
A.2.1 Solution in the inner region
In the inner region, the scalar equation (A.3) is approximated by the AdS 3 scalar equation The solution regular at r = 0 (satisfying K in (0) = 0) is In the overlapping region, r a ∼ Ry a 1, the radial wave profile behaves as Note that it we are implicitly considering that ν ∈ Z which is contradiction with ν = + 1 ∈ N for GMS backgrounds or superstrata. However, as in the usual holographic analysis, one has to consider ν ∈ Z first, remove the divergences to obtain the quasi-normal modes, and then do an analytic continuation to integer ν.
A.2.2 Solution in the outer region
In the outer region, the radial equation (A.3) is approximated by the scalar equation in flat space 1 r d dr The generic solutions are given by a linear combination of Bessel functions In the overlapping region r a ∼ 1 Ry a , that is r Ry 1, we have whereas in the asymptotic region, r R y , Thus, we see that if Re(Ω + P ) > 0, the wave is outgoing when out e iνπ = 0 .
Consequently, the outgoing condition is out e i sign(Re(Ω+P )) νπ . (A.23) References [19] prematurely fix a convention for the sign of Re(Ω + P ) at this point. However, the spectrum of quasi-normal modes gives two branches of frequencies, one with mostly-positive Re(Ω + P ) and one with mostly-negative Re(Ω + P ).
The existence of branches with opposite signs led the authors of these references to the conclusion that the corresponding modes might grow with time, and, a posteriori, they show that they are not in the spectrum since they do not have the same sign as their convention. At a technical level, this is caused by fixing the sign of Re(Ω + P ) too early in the calculation. As we will see, if we carry the "sign(Re(Ω + P ))" factors all along, both branches of frequencies will lead to quasi-stable modes that decay with time.
A.2.3 Matching solutions in the overlapping region
We match the asymptotic inner-wave profile (A.17) to the asymptotic outer-wave profile (A. 20) in the overlapping region taking into account the outgoing condition (A.23): 24) The quasi-normal-mode frequencies are obtained by solving this equation considering Ω as the variable. Because Ω may enter non trivially in ξ and ζ, this expression is not solvable analytically.
However, we have assumed that a R y . Thus, as soon as we are considering low-energy modes (such that ΩP is not "too large" which we will make precise later), the left hand-side of the equation is very small. This is a manifestation of the huge potential barrier that the wave has to go through in order to be able to leak to infinity.
Under this assumption, one can solve the equation perturbatively. The zeroth-order solution is obtained by considering the left-handside to be zero, so the arguments in the Gamma functions on the denominator of the right hand-side must be at the poles. This will give two towers of normal frequencies, labelled by a mode number N , that correspond to the real part of the frequencies of the quasi-normal modes: We obtain the first-order correction by perturbing the Gamma functions around their poles. This will give the imaginary leading-order correction to the normal frequencies: A.2.4 The normal frequencies, Ω (0) N As explained above, the zeroth-order expression is obtained when one of the two Gamma functions in the denominator of (A.24) has a pole. This happens when To find the final expression for Ω (0) N , one needs to know the dependence of ξ and ζ on the mode momenta (Ω, q y , q φ , q ψ ). For the backgrounds considered in this paper, the superstrata (A.5) or the GMS solutions (A.6), the centripetal coefficient ζ does not depend on the frequencies Ω since P − Ω = q y and Ω enters in ξ as where χ is independent of Ω and E gap is given by the background and will correspond to the gap of energy between two successive normal modes. For the (2, 1, n) superstratum at large n, E gap = 2j L n 1 n 5 whereas for the GMS background E gap = η k = j L n 1 n 5 η where η is defined in (5.4). Thus, we have two branches of normal frequencies depending of the "±" choice: These normal frequencies are those of the bound states in the AdS 3 cap only. Indeed, the contribution from the gluing to flat space cannot be captured at zeroth order, because the left hand-side of (A.24) is approximated to be zero. Moreover, as expected, we have two branches of normal frequencies, one positive and one negative.
A.2.5 The quasi-normal decay rates, δΩ N The computation of the first-order correction is slightly more involved. We proceed following the steps of [16]. We change the argument of the divergent Gamma function from −N to −N − δN where δN is small. We also replace ξ by (A. 27), Ω by Ω We first simplify the non-divergent Gamma functions. We recall that ζ is an integer (A.4) and, at this level, ν is considered to be a real number. Thus, using that where (X) i = i−1 j=0 (X + j) is the Pochhammer symbol, we have Then, we expand the divergent Gamma function We finally have When ν is not an integer, δN has a real and an imaginary part and then the first-order correction also slightly changes the normal frequencies. However, when we analytically continue ν to an integer, ν = + 1 ∈ N, only the imaginary part gets a finite value. We use the relation , (A. 34) and obtain where we have introduced the usual binomial coefficients The last step consists in replacing δN by δΩ N . We differentiate (A.27): By inserting the expression into (A.35), we see that we have a product of sign functions. In [19], because the sign convention for 2Re(Ω)+q y was fixed too early, the result obtained was dependent on sign(Ω (0) N ). As a result, one branch of normal frequencies had a positive decay rate i δΩ N whereas the other appeared to have a negative decay rate. This led to the incorrect conclusion that some of the wave functions were growing with time signifying instabilities.
By rectifying the sign restriction, we see that the sign of the decay rate does not depend on the sign of Ω
A.3 The spectrum of quasi-normal modes via WKB
We now solve the problem analyzed in Appendix A.2 using the WKB approximation method detailed in Section 2.3. We will see that it follows the same philosophy as the asymptotic matching method and it leads to a very similar result.
As explained in Section 2.3, we first need to transpose the wave equation (A.3) into a Schrödinger problem. There are many ways to do so and we will use the one of [14] which gives a better accuracy for the mass term: where V (x) is given by: V (x) ≡ e 2x e 2x + 1 − 4 Ω P a 2 R 2 y e 2x + ν 2 + e −2x ζ 2 − ξ 2 − 1 e 2x + 1 . (A.42) The form of the potential with the assumption (A.7) and (A.8) is depicted in Fig.6. In the inner region, x ∼ −∞ to the middle of the barrier x ∼ log Ry a , the potential is well-approximated by the AdS 3 potential, V AdS (x) ≡ e 2x e 2x + 1 ν 2 + e −2x ζ 2 − ξ 2 − 1 e 2x + 1 .
(A. 43) whereas from the middle of the barrier to the boundary, x ∼ +∞, the potential is given by the flat potential, In Section 2.3, we showed that the WKB approximation gives a spectrum of quasi-normal modes as a tower of frequencies labelled by a mode number, N ∈ N, Ω N = Ω To find this correction one needs to evaluate the following integrals where x 0 , x 1 and x 2 are the three turning points as depicted in Fig. 6. Unfortunately, the square root of the potential |V (x)| 1 2 is not integrable in a closed form and one will need to use 12 Note a slight difference in the expression of the sign(. . .) compared to the general formula (2.18). In Section 2.3, ω is the conjugate momentum of t whereas here we are working with Ω, the momentum along u and with P , the momentum along v. The translation of the condition of having an outgoing wave (which involves t) is: sign(Re(ω)) = sign(Re(Ω + P )) = sign(2Re(Ω) + qy).
the approximate potentials to estimate Θ and T . They will strongly depend on the values of x 0 , x 1 and x 2 . The two first can be obtained using V AdS (x) whereas x 2 is given by V flat (x): N , we need to compute Θ. The integral is supported in a region where the potential is given by the AdS 3 potential, and one can simply use |V AdS (x)| 1 2 which is integrable. We obtain We know that the WKB approximation is precise when there are significantly many oscillations between x 0 and x 1 , which happens when ξ 2 1. This excludes the first few modes. For the higher modes we then have Returning to the assumption ξ 2 1, we can check that it indeed requires that N 10 and therefore the WKB method loses accuracy for the first few quasi-normal frequencies.
A.3.2 The first order, δΩ N We aim to apply (A.47) to obtain the expression of δΩ N . Using (A.52) with (A.28), we have
∂Θ ∂Ω
≈ π E gap sign (Re(ξ)) = π E gap sign Ω Now, we need to estimate the integral T in equation (A.48) which is a bit of a challenge since it is supported in the overlapping region between V AdS (x) and V Flat (x). We will use an intuitive procedure that is equivalent to translating the asymptotic matching method in WKB language. This will consist in computing the integral of |V (z)| √ e 2x 1 + 1 − √ e 2x 0 + 1 √ e 2x 1 + 1 + √ e 2x 0 + 1 ν 2 (e 2x 1 +1)(e 2x 0 +1) . Given the completely different functions that appear on the left and on the right, this approximate equality is far from obvious. We will show in the next section that this approximate equality is satisfied with a difference less than 1% as soon as (N, ) 10 and that the large-or the large-N expansions are miraculously identical until the third order!
A.4 Comparison
As stated above, the difference in the spectrum obtained via WKB or via asymptotic matching is determined by the difference between two quantities that we denote L and R: The functions depend on three variables: the mode number N , the mass ν and the centripetal coefficient ζ. Because of the very different forms of L and R, it appears complicated to do a direct comparison. This is why we will show that the large-expansions for arbitrary (N, ζ) are strictly identical until the third order and similarly at large N for arbitrary ( , ζ). For values in between, we will simply give a numerical plot.
(A.69) The three first leading orders then match exactly. One can also push to the fourth order, e O(1) , and we can see that even if they do not match exactly, they are very close to each other when N 10. The two first leading orders then match exactly. As for the coefficient in front, one can actually show as soon as ν 10, the expansion of the Gamma function gives, We have thus proven that, despite their very different analytical expressions, the results obtained via WKB and via asymptotic matching agree incredibly well at large and large N . To corroborate this, we can plot the error function, as a function of N , ν and ζ. It is not hard to observed that the value of ζ does not modify significantly this error function. Thus in Fig.7 we show the dependence of this error function on N and ν, and we can see that as soon as N 10 and ν 10 the difference between the WKB and the asymptotic-matching result is less than 1%.
Our WKB techniques are therefore as accurate as asymptotic matching, and can be used for backgrounds that have more than two overlapping regions, such as superstrata. | 25,957.4 | 2020-05-22T00:00:00.000 | [
"Physics"
] |
Anonymization of German financial documents using neural network-based language models with contextual word representations
The automatization and digitalization of business processes have led to an increase in the need for efficient information extraction from business documents. However, financial and legal documents are often not utilized effectively by text processing or machine learning systems, partly due to the presence of sensitive information in these documents, which restrict their usage beyond authorized parties and purposes. To overcome this limitation, we develop an anonymization method for German financial and legal documents using state-of-the-art natural language processing methods based on recurrent neural nets and transformer architectures. We present a web-based application to anonymize financial documents and a large-scale evaluation of different deep learning techniques.
Introduction
The automatic processing of text documents has become of vital importance in several industrial applications.The availability of digital financial and legal documents is increasing and companies rely on automated methods for handling and analysis, often based on or assisted by machine learning tools.The development of such tools usually requires researchers and developers to have access to documents as part of data exploration or the model training pipeline.However, such financial data typically cannot be processed or shared beyond authorized parties due to the prevalence of sensitive information regarding specific individuals and organizations, which significantly restricts development even within the organization.One possible solution is to perform
Legal context
While the principle of anonymization is simple, concrete applications must follow narrow legal guidelines which we want to elaborate on for the European and German market.
With the introduction of the General Data Protection Regulation, (GDPR) 1 personal data can only be further processed if they are compatible with the very strict purposes permitted by law for which this data were collected. 2These purposes usually do not include the usage of the collected data for the training of machine learning tools.In fact, the GDPR does not even mention the processing of "Big Data" or algorithms with a single-word [1].This does not change with the 2019's entry into force of a new regulation of the EU on the free flow of non-personal data.As the name already suggest, this regulation allows the storage and processing of data across the Member States without unjustified restrictions, as long as the data are not personal.However, the principle of purpose limitation is not applicable once the data are anonymized, 3 and therefore, this data can be used for developing digital solutions across Europe.
Furthermore, if the personal data are no longer necessary for the purpose for which it was collected, the GDPR grants the data subject a "right to be forgotten," i.e., the right that its data are being erased. 4In practice, a company that collects personal data, like every service provider, would need to delete their customer contracts at the time of its termination date.However, this could contradict legal retention periods, for example, for tax purposes.This may be avoided, if the company anonymizes their contracts at the termination date.Considering the amount of the corresponding documents, manual anonymization is not appropriate under these circumstances.
However, the demand for anonymization of confidential data has always been present, not only since the introduction of the GDPR.For instance, publication of judgments in the public interest is, at least in Germany, a direct constitutional task for the judicial power and therefore for every single court. 5However, these publications need to be anonymized, regardless of the GDPR, to protect the fundamental right to informational self-determination. 6Until now, such anonymization is mainly done manually, resulting in a publication of only a mere fraction of the judgments that are in the public interest.
Our contributions
All of the examples above have in common that the data with the need for anonymization is usually part of documents like contracts or other reports.Consequently, we address this concern of data privacy and protection and present a web-based anonymization application that anonymizes sensitive information such as names of persons, locations, organizations, numbers, telephone numbers, dates and URLs in a piece of 2 Art.17 GDPR. 3Recital 26 GDPR. 4 Art. 5 GDPR. 5 BVerwG, 26.2.1997 -6 C 3/96. 6 Art. 2 Abs. 1 GG in conjunction with Art. 1 Abs. 1 GG.Fig. 1 General workflow for anonymizing a document using named entity recognition.First, sensitive entities are identified using deep learning methods and rule-based post-processing.Then, the identified entities are replaced with appropriate tags to preserve the text structure or hidden behind a general anonymized tag writing by the example of financial documents.We tackle this using state of the art deep learning and natural language processing techniques as well as rule-based post-processing.A general outline of the workflow is shown in Fig. 1.
Our main contributions in this work are: -A method to anonymize 99% of all sensitive entities contained in German financial documents while maintaining high readability and preserving the structure of the given text -Presenting a web-based application and an API to use our method on various types of documents -A quantitative evaluation of multiple state-of-the-art deep learning techniques for anonymization as well as the impact of domain-specific language models for financial documents.
Note that a preliminary version of this work was presented (unpublished) at an AAAI-20 7 workshop.This version of the paper includes discussion of a new type of deep-learning architecture (see Sect. 4.1.3)with theory, details on training and new experimental results.
Related work
Earlier systems on anonymization focused primarily on medical records.The first anonymization system was developed by [2] used several pattern matching algorithms which detect names, phone numbers, etc.Later in 2006, a challenge was hosted to anonymize clinical data which were also made available as public dataset, namely i2b28 for deidentification.Several systems were developed as a result of this challenge which tackled the problem using named-entity recognition [3,4], rule-based systems [5] and hybrid system [6] which uses look-up on dictionaries, regular expressions Fig. 2 A screenshot of our anonymization tool; the left pane contains the UI controls for uploading the document and other settings such as to turn on the anonymization for numbers and to enable masking.To the right of it, there is the document pane and it shows the content of the document in which sensitive entities are highlighted if the mask option is not selected.If the mask option is selected, then the document pane shows the same content instead with sensitive entities masked and as well as model-based classifiers.To the best of our knowledge, we present the first large scale of evaluation of anonymization techniques with respect to financial documents.9
Web-based application
A screenshot of the application is shown in Fig. 2. It is a web-based application (implemented via the Flask10 framework) which allows the user to upload text documents (e.g., .docx,.pdf,.txt,.json)and visualize the anonymized content.The interface contains two panes; a left pane with controls and a right pane where the anonymized document is rendered.There are two basic configurable settings: by default, names, locations, organizations and other entities are anonymized using our deep learning methods.Additionally, one can enable anonymization of numbers, dates, etc., which are detected using regular expressions.The sensitive entities are highlighted in different colors based on their types; In Fig. 2, the names of persons, companies and locations are highlighted in red, green and blue, respectively.Further, the tool allows the user to enable masking such that sensitive entities are blacked out entirely as shown in the figure on the rightmost pane.
Parsing the original document allows for replacement of text within the document format (e.g., .docximplemented using the python-docx 11 python library, .xslxusing the openpyxl 12 library) while keeping formatting like text size, fonts and layout intact.Once a document is processed, the tool lets the user download an anonymized version of the document in the original format (e.g., .docx), in which all relevant entities are replaced by generic tokens (e.g., <PER>, <ORG>, <LOC>, …).
Additionally, the tool anonymizes .pdf-documents and application of OCR methods (pytesseract 13 library) allow for anonymization of scanned .pdffiles.
All machine-learning related work was implemented using the pytorch 14 framework.
API
Since the main application of this tool is document preprocessing for further distribution or use in the training of machine learning systems, we desire the anonymization of an entire document corpus.These anonymized documents can afterward be handled by developers without clearance for the original data.For this reason, we provide a REST API and python package for internal usage.This makes it possible for an employee with the required clearance for the original documents and no involvement in the development 11 https://pypi.org/project/python-docx/.
post-processing anonymization
Fig. 3 Workflow from raw text to final anonymized output.We convert each token into a numerical vector using a trained language model, use a neural net classifier to predict probabilities for each class for each token, choose the class with the highest probability, apply post-processing and finally replace named entities with corresponding labels in text, leaving words classified as 0 intact process to use the tool to anonymize a corpus of documents at once and return the anonymized data.This leaves a readable text without sensitive information that can be further analyzed by different machine learning approaches.
Anonymization as sequence tagging
We tackle the problem of anonymization as a sequence tagging task [7].Given a document consisting of several sentences in which each sentence is a sequence of words (tokens), our goal is to assign a suitable label to each token indicating if it contains sensitive information or not.
The possible labels include -0 (contains non-sensitive information), -ORG (contains an organization or part of an organizations name), -PER (contains a person or part of a persons name), -LOC (contains a location or part of a location name), -PROD (contains a product name), -SEG (contains information about the industry of the company), -URL (contains an URL), -TEL (contains a phone number), -DATE (contains a date), -NUM (contains a number), -EMAIL (contains an e-mail address) and -OTH (contains any other sensitive information).
In particular, we refer to ORG, PER, LOC, PROD, SEG and OTH as named entities as it is part of the well-known problem of named entity recognition [8] in natural language processing.
We employ a multi-step approach as depicted in Fig. 3.
Step 1: Predict the named entities in each document using language models and deep learning methods.Step 2: Make these predictions consistent across each document.Step 3: Predict the remaining labels using rule-based classifiers and assign to the respective tokens.Step 4: Replace the text of tokens by appropriate tags in order to preserve the sentence structure and semantics.
Word embeddings and contextual language models
Unlike traditional string-based methods (e.g., rule-based systems using regex), modern deep learning approaches for text classification require a two-step approach; first, the raw text has to be converted into a numeric representation, usually a vector of fixed dimension for each word in the text.The numeric representation of a token is then fed into a classifier that outputs probabilities for each class.
Word representations can be obtained in two forms: global word embeddings and contextual word embeddings.Global word embeddings provide numeric vectors for each word in a vast vocabulary and they are obtained using a large corpus of language data to capture semantic information of each word.Typically, these embeddings are trained to satisfy some distance metric (e.g., Euclidean distance or cosine similarity) between semantically similar vectors.For example, the word vector corresponding to finance would be closer to the vector for banking than to the vector corresponding to apple.Popular word embedding models include word2vec [9] and glove [10].The advantage of these models lies in their ease of use that they can be distributed as text documents containing words and corresponding vector weights.And retrieving an embedding for a certain word simply requires just a lookup of the corresponding entry in the list of vectors.However, the reliance on exactly one vector per word has a major disadvantage that the same word can have multiple meanings depending on context, which cannot be captured by these global embeddings.
Consider the following two sentences -"Herr Vogel ist Geschäftsführer der Test GmbH." 15 -"Der frühe Vogel fängt den Wurm." 16 The word Vogel refers to a bird in one sentence and a person in the other.A global word embedding model would retrieve the same vector for both tokens and an anonymization model based on individual word embeddings would either anonymize the animal or let the name pass through.A prediction model that takes as input a sequence of word embeddings, as they appear in the sentence, might be able to differentiate the meanings in this context.However in this work, we only consider prediction models based on singleword embeddings.
In contrast, contextualized language models offer embeddings that include context for each word.Like word embeddings, these models are also pretrained on a large corpus of language data, but are based off neural networks themselves that process each sentence to capture the context.In the example mentioned above, the language model would capture the context of the sentence (see sections below for details on how) and calculate distinct word embeddings for both instances of the word Vogel.An anonymization model could then learn from these contextual word embeddings to anonymize the appropriate name.However, this means that the retrieval of word embedding which is part of the prediction pipeline is not a simple dictionary lookup, but rather a deep learning model itself that can vastly exceed the prediction model in size and complexity.In our experiments, the 15 "Mr.Vogel (bird) is CEO of Test GmbH (equiv.LLC)." 16 "The early bird catches the worm."retrieval of the contextualized word embeddings takes up the majority of the processing power and inference time.
In our setting, the most important distinction between classic word embeddings and language models is the handling of out-of-vocabulary words.Though global word embeddings offer vectors for large vocabularies of words (e.g., Glove with up to 400,000 words), there is no guarantee that for names of persons, locations and companies there even exist an embedding.While there are several ways, depending on the task, of dealing with these missing embeddings, obtaining a reasonable embedding for each token naturally is definitely preferable.
In contrast, contextualized language models work with a vocabulary of either characters or so-called sub-word tokens [11].In either case, the vocabulary contains each character that is needed to construct words in a given sentence.Therefore, a contextualized language model is able to embed any word, no matter how common or rare.
These theoretical considerations hold up in practice, where architectures based on contextual language models severely outperform traditional approaches based on word embeddings.For instance, [12] report a F1-score of 76-79% on the CONLL-2003 task [13], compared to 91% reported in [14] on the same task.In a similar work on German NER [15], the use of contextual embeddings [14] obtained better performance when compared to using only the Fasttext word embeddings [16].For these reasons, we do not consider classical word embeddings for our task and refrain from an additional evaluation of these methods on our dataset.
Recurrent neural net-based language models
In our work, we utilize flair [14], which employs a bidirectional character-based recurrent neural net that traverses each sentence in both forward and backward direction, which is trained to predict the next character conditioned on the ones it saw before.In order to predict the beginning of the next word or the next character in a word, it needs information on the sentence context that will be stored in the hidden states of the network layers.The corresponding hidden states of the network at the beginning and end of each token together act as the numeric vector representation for that token.It contains both information of the word itself and an encoding of the surrounding words, thereby capturing the context of the token.
In this paper, we evaluate several versions of this language model that differ both in training data (i.e., what language corpus the model was trained on) and their size, referring to the dimension of the output vector.A smaller language model outputs a smaller token vector that stores less information but can process a document significantly faster.See Fig. 4 and Table 1 for a quantitative evaluation of language models of different sizes.
Transformer-based language models
In recent years, the development of the transformer model [17] has led to many breakthroughs in natural language processing.Transformer-based architectures rely almost entirely on self-attention, which processes the sequence of words as a whole and considers relationships between all pairs of tokens in the sentence.This architecture allows for tracking long dependencies in text, which may be an issue with recurrent neural net-based architectures that lose their "memory" of processed words rather quickly [18].
One very popular transformer-based architecture is BERT [19].BERT trains a transformer-based neural net model by masking random tokens in a sentence and trying to reconstruct them.While recurrent architectures, as described above, receive the entire sentence to one side to reconstruct the next token, BERT receives the entire sentence context except the tokens that needs to be reconstructed.Additionally, the same model architecture can be trained on many tasks like language modeling (i.e., token reconstruction), translation and token or sequence classification.This way researchers are able to train a single model on various datasets to improve general language understanding within the model.This type of architecture has led to new state of the art results, for instance in machine translation.However, one drawback of BERT is a reliance on a maximum sequence length of 512, which other models are able to overcome [20].
Prediction models
After obtaining the token representations using the language models, the text is fed into the classifier network as an ordered list of numeric vectors, one for each token, which is then subsequently mapped onto corresponding probabilities for each of the 7 named entities (0, ORG, PER, LOC, PROD, SEG and OTH).During training, the network is trained to predict the expert annotated labels for each token by minimizing the cross-entropy loss.Once the network is trained in this fashion, during inference, the label with the highest probability is predicted.We consider three different classifiers architectures:
MLP
First, we consider a simple fully connected network (multilayer perceptron) that takes each token representation individually, passes it through several layers and outputs probabilities for each of the 7 named entities.In this case, the prediction for each token is treated independently and relies solely on the contextual representation provided by the language model.This classifier is preferred because of faster inference time and easier interpretability of results.
RNN
Although a simple MLP is sufficient to classify a token since the representation contains the context, it is still beneficial to process the text using a recurrent neural net which further enhances the context and more importantly the required span of context can be trained for the given task.For this reason, we consider a bi-directional variant of Long Short Term Memory (LSTM) [21] which traverses the list of vectors in both directions, processing stored context information from previous tokens and the current token.The outputs along both directions (forward and backward) are concatenated and passed through a final fully connected prediction layer mapping to probabilities for each of the 7 named entities.
RNN + CRF
With MLP and RNN, the prediction of each token is treated independently.In order to incorporate dependencies between predicted labels, the fully connected layer from the output states of the RNN to the output layer can be replaced by a conditional random field (CRF) [22] that learns a mapping of sequences of representations taking into account the predicted labels of consecutive tokens.
Post-processing
As discussed in Sect.3.1, we also provide an option in our application to anonymize URLs, dates, numbers and e-mail addresses.Since they mostly have regular patterns, we have implemented regular expressions to detect these entities.
For the task of anonymization, we want to give higher preference to recall than precision, since anonymizing too many words is preferable to missing a word that should be anonymized.Due to the context dependence of the applied language and prediction models, there might be tokens in the given text which are predicted as sensitive in one place and as not sensitive in other places.To this end, we propose the application of a post-processing step that ensures consistency in the predicted labels: a token (e.g., a persons name) that is predicted as a named entity once in the document is always replaced by the corresponding label, even if the classifier predicted it as non-sensitive in another sentence.We provide all metrics on the positive class (PER, ORG and LOC).The best performance for each metric is marked bold for each column, respectively.Post-processing for these classes only consists of ensuring label consistency.We do not evaluate post-processing for Germeval since its structure (independent sentences) does not fit our post-processing methods
Language model corpus
As discussed in the previous section, in order to obtain contextual representations for tokens, we consider different language models.The baseline model that we use is a pre-trained language model provided by the flair framework which is trained on a large general corpus of German sentences consisting of 500 million words.We refer the embedding obtained using this model as flairDE.The language corpus used in the training of this embedding might cause licensing issues, e.g., the Wikipedia corpus is distributed under GNU Free Documentation License and Creative Commons Attribution-Share-Alike 3.0 License, which prohibit commercial use without adapting the same license to the project.Additionally, a language model trained on data that is similar to the financial text might provide an advantage over a language model trained on general language data and a custom language model allows for tuning the embedding size in order to optimize runtime.We therefore train language models on a corpus of language data from Bundesanzeiger 17 (BANZ), consisting of 19,000 German financial documents (200 million words).
Document corpus
We train our deep learning classifier models using a corpus of 407 published German financial documents, annotated manually by domain experts.We split the dataset into 305 training and 102 validation documents.Once a model is trained, we provide a final evaluation dataset consisting of 45 thoroughly annotated documents.This evaluation dataset contains a total of 189k tokens, 17k (9.1%) of which belong to one of the classes ORG, LOC and PER.In order to provide results comparable to other NER and anonymization projects, we additionally evaluate all trained models on the GermEval 2014 NER Shared Task corpus [23], consisting of 29k sentences annotated for NER with a total of approximately 590k tokens, 8.4% of which are named entities.
RNN-based language models
To train and use a language model on our data, we employ the framework provided by the flair python-package. 18It implements a bidirectional LSTM on a character level.We train language models on the BANZ-corpus with 1024, 2048 and 4096 dimensions.These are denoted by BANZ1024, Fig. 4 Influence of language model on precision, recall, F 1 -score and inference time on evaluation documents.Precision and recall are reported without post-processing.Inference time measured in seconds per document (10 pages).We see that for the RNN-based architectures, the choice of language model makes little difference in anonymiza-tion performance.However, a smaller language model reduces the time it takes to process one document significantly.Note that there are no major differences in processing time between classifier architectures, the language model is the main contributor to processing time BANZ2048, BANZ4096, respectively.We train for 100 epochs using the default parameters suggested by the package.
BERT language models
The used BERT model consists of a model pretrained on general German language data 19 that we fine-tuned (i.e., continued to train) on the BANZ data corpus described above.
The model provides embeddings of dimension 3072.
Classifiers
The RNN classifier as suggested by [14] is a one-layer BiL-STM with a hidden representation of 256 dimensions.We use the framework provided by the flair package to train RNNbased NER classifiers on the NER training dataset.We train for 100 epochs using the default parameters suggested by the package.Each MLP model consists of one intermediate hidden layer, mapping the input onto a lower-dimensional 19 https://deepset.ai/german-bert.
representation.This hidden representation is then mapped onto the 7-dimensional output vector.The number of neurons in the intermediate hidden layer are 500, 500, 1000, depending on the input dimension 1024, 2048 and 4096, respectively.We train the MLP classifier for 100 epochs, using a batch size of 100 tokens.As optimizer, we use Adadelta with a learning rate of 0.1 and weight decay of 1e-5.Further, to provide a baseline evaluation we consider a pretrained classifier for named entity recognition that has been trained on general language and named entity recognition data and has never seen our BANZ corpus or any annotated financial documents.For this, we apply the pre-trained NER model provided by the flair package, which is a RNN+CRF classifier trained on the CoNLL-2003 German NER dataset [13] and a general corpus language model.We denote this classifier as flairNER.
Results and analysis
In this section, we present quantitative results on the performance of the described language models and classifiers.For our task of anonymization, it is desired to have a good binary classification performance, i.e., we tolerate a PER entity being tagged as an ORG entity and at the same time, we consider a PER entity tagged as 0 as a mis-classification and vice versa.For this reason, before evaluation all predicted and annotated tags are re-mapped onto two classes only, the negative class 0 indicating they are not sensitive entities and the positive class 1 indicating such tokens to be anonymized.Further, we are mostly interested in the performance on the positive class and therefore provide its metrics (precision, recall and F 1 -score) only.Due to the lack of reliable available data for SEG, PROD and OTH, we do not consider them during this evaluation.
Table 1 presents the complete experimental results with different classifier architectures and language models.The evaluation on financial documents suggests that the RNN+CRF achieves the best performance, at over 97% recall without post-processing and around 99% after post-processing, without compromising precision of over 90%.
This results in a near complete anonymization of the entire document with very little unnecessarily anonymized words.Using domain-specific language model gives slight improvements over general language models for RNN-based classifiers.On the other-hand, the general corpus was beneficial while using a MLP classifier.
Figure 4 captures the influence of language model on the performance metrics.From the runtime and recall plots, we can observe that even with the smaller domain-specific language models, the RNN classifiers are able to out-perform the general language model, while reducing the runtimes of the anonymization process by over 50%.We further see that the RNN-based prediction models achieve comparable results for the larger RNN-based language models and the transformer-based BERT.Depending on application a slight drop in recall when employing a smaller language model (e.g., going from BANZ4096 to BANZ2048) can be tolerated, considering it greatly improves on the inference time per document.
In order to evaluate the generalizability of our classifiers, we evaluate our models on GermEval dataset.For this evaluation, we do not apply any post-processing since it contains only sentences obtained from different sources and they do not follow any document structure.The results suggest that the RNN classifiers using a general language model performs better than one trained only on financial documents, which is expected since the sentences in GermEval correspond to sentences from a variety of sources.Nevertheless, the performance is comparable to the current state-of-the-art for NER.Further, the pre-trained NER classifier, trained on a general language German NER corpus, only yields a recall of 84% and 93% on the financial documents, without and with postprocessing, respectively.
Discussion and future work
In this work, we focus on the anonymization of financial documents and mention the use case for court records and legal documents in general.Another example for a possible application is healthcare.The ongoing battle with the corona pandemic showed how beneficial it is when hospitals and researchers work together and share their findings and information.At the same time, patient data often contains sensitive information prohibiting a fast exchange without prior anonymization.Therefore, an expansion of our approach to this field can enable and speed-up the data transfer and increase the amount of available data.
In order to apply this work to a new group of documents, one can use the following approach.As there are many similarities between entities of different domains, the presented models will likely work well even with no adaption.As seen in Table 1, a model pre-trained on general text data already performs decently at almost 90% anonymization performance.The next step to further increase the performance and recognize new patterns will be to train a domain specific language model and if available, fine-tune the model on annotated data of that field.We expect the post-processing steps described in Sect.4.3 to also improve anonymization in most other domains, domain-specific post-processing steps might have to be developed.
In the experiments, we consider BERT as a contextualized language model that provides word embeddings which are passed as inputs to the separate prediction model.To further improve the language model, we plan on integrating named entities directly into the pre-training.Yamada et al. [24] show that treating words and entities as independent tokens during the masking task and within the self-attention mechanism can lead to better performances on named entity recognition tasks.Furthermore, we intend to explore transformers and self-attention as an end-to-end model for named entity recognition.
Another limiting factor for our method that inspires further research is the quality of annotations.Often, mistakes in the annotation lead to worse models by internalizing annotation mistakes during training.Additionally, Manning [25] demonstrates that the agreement between annotators can be another constraint.In the future, we intend to reduce the effect of both cases by identifying suspicious samples during training as shown by [26].
Conclusion
We presented a method to reliably anonymize the names of persons, locations and organizations using state-of-theart deep learning techniques, as well as URLs, telephone numbers, dates and other numbers using classical rule-based approaches in financial documents.For internal use, this method can be applied to a single document or entire document corpora using a web-based application and a REST API.This allows for pre-processing of documents that can then be used by developers and researchers to train and evaluate further models for machine learning on financial data (e.g., [27]).
A quantitative evaluation of language models and text classifiers shows that domain-specific training of language models improves classification performance and smaller language models significantly improve runtime while maintaining anonymization performance.As future work, we would like to incorporate methods to anonymize additional identifying information (e.g., the segments the organization operates in) as well as analyze the impact of anonymized data as inputs for the training of machine learning algorithms over the original text. https://pytorch.org/.
Table 1
Quantitative evaluation of all described language models and classifiers on the NER evaluation dataset of financial documents and the GermEval dataset | 7,093.8 | 2021-10-02T00:00:00.000 | [
"Computer Science"
] |
On the Foundations of the Brussels Operational-Realistic Approach to Cognition
The scientific community is becoming more and more interested in the research that applies the mathematical formalism of quantum theory to model human decision-making. In this paper, we provide the theoretical foundations of the quantum approach to cognition that we developed in Brussels. These foundations rest on the results of two decade studies on the axiomatic and operational-realistic approaches to the foundations of quantum physics. The deep analogies between the foundations of physics and cognition lead us to investigate the validity of quantum theory as a general and unitary framework for cognitive processes, and the empirical success of the Hilbert space models derived by such investigation provides a strong theoretical confirmation of this validity. However, two situations in the cognitive realm, 'question order effects' and 'response replicability', indicate that even the Hilbert space framework could be insufficient to reproduce the collected data. This does not mean that the mentioned operational-realistic approach would be incorrect, but simply that a larger class of measurements would be in force in human cognition, so that an extended quantum formalism may be needed to deal with all of them. As we will explain, the recently derived 'extended Bloch representation' of quantum theory (and the associated 'general tension-reduction' model) precisely provides such extended formalism, while remaining within the same unitary interpretative framework.
Introduction
A fundamental problem in cognition concerns the identification of the principles guiding human decisionmaking. Identifying the mechanisms of decision-making would indeed have manifold implications, from psychology to economics, finance, politics, philosophy, and computer science. In this regard, the predominant theoretical paradigm rests on a classical conception of logic and probability theory. According to this paradigm, people take decisions by following the rules of Boole's logic, while the probabilistic aspects of these decisions can be formalized by Kolmogorov's probability theory [1]. However, increasing experimental evidence on conceptual categorization, probability judgments and behavioral economics confirms that this classical conception is fundamentally problematical, in the sense that the cognitive models based on these mathematical structures are not capable of capturing how people concretely take decisions in situations of uncertainty.
In the last decade, an alternative scientific paradigm has caught on which applies a different modeling scheme. The research that uses the mathematical formalism of quantum theory to model situations and processes in cognitive science is becoming more and more accepted in the scientific community, having attracted the interest of renowned scientists, funding institutions, media and popular science. And, quantum models of cognition showed to be more effective than traditional modeling schemes to describe situations like the 'Guppy effect', the 'combination problem', the 'prisoner's dilemma', the 'conjunction and disjunction fallacies', 'similarity judgments', the 'disjunction effect', 'violations of the Sure-Thing principle', 'Allais', 'Ellsberg' and 'Machina paradoxes' (see, e.g., [2,3,4,5,6,7,8,9,10,11,12,13,14,15]).
There is a general acceptance that the use of the term 'quantum' is not directly related to physics, neither this research in 'quantum cognition' aims to unveil the microscopic processes occurring in the human brain. The term 'quantum' rather refers to the mathematical structures that are applied to cognitive domains. The scientific community engaged in this research does not instead have a shared opinion on how and why these quantum mathematical structures should be employed in human cognition. Different hypotheses have been put forward in this respect. Our research team in Brussels has been working in this domain since early nineties, providing pioneering and substantial contributions to its growth, and we think it is important to expose the epistemological foundations of the quantum theoretical approach to cognition we developed in these years. This is the main aim of the present paper.
Our approach was inspired by a two decade research on the mathematical and conceptual foundations of quantum physics, quantum probability and the fundamental differences between classical and quantum structures [16,17,18,19]. We followed an axiomatic and operational-realistic approach to quantum physics, in which we investigated how the mathematical formalism of quantum theory in Hilbert space can be derived from more intuitive and physically justified axioms, directly connected with empirical situations and facts. This led us to elaborate a 'State Context Property' (SCoP) formalism, according to which any physical entity is expressed in terms of the operationally well defined notions of 'state', 'context' and 'property', and functional relations between these notions. If suitable axioms are imposed to such a SCoP structure, then one obtains a mathematical representation that is isomorphic to a Hilbert space over complex numbers (see, e.g., [20]).
Let us shortly explain the 'operational-realistic' connotation characterizing our approach, because doing so we can easily point out its specific strength, and the reason why it introduces an essentially new element to the domain of psychology. 'Operational' stands for the fact that all fundamental elements in the formalism are directly linked to the measurement settings and operations that are performed in the laboratory of experimentation. 'Realistic' means that we introduce in an operational way the notion of 'state of an entity', considering such a 'state' as representing an aspect of the reality of the considered entity at a specific moment or during a specific time-span. Historically, the notion of 'state of a physical entity' was the 'easy' part of the physical theories that were the predecessors of quantum theory, and it was the birth of quantum theory that forced physicists to take also seriously the role of measurement and hence the value of an operational approach. The reason is that 'the reality of a physical entity' was considered to be a simple and straightforward notion in classical physics and hence the 'different modes of reality of a same physical entity' were described by its 'different states'. That measurements would intrinsically play a role, also in the description of the reality of a physical entity, only became clear in quantum physics for the case of micro-physical entities.
In psychology, things historically evolved in a different way. Here, one is in fact confronted with what we call 'conceptual entities', such as 'concepts' or 'conceptual combinations', and more generally with 'any cognitive situation which is presented to the different participants in a psychology experiment. Due to their nature, conceptual entities and cognitive situations are 'much less real than physical entities', which makes the notion of 'state of a conceptual entity' a highly non-obvious one in psychology. And, as far as we know, the notion of state is never explicitly introduced in psychology, although it appears implicitly within the reasoning that is made about experiments, their setups and results. Possibly, the notion of 'preparation of the experiment' will be used for what we call 'the state of the considered conceptual entity' in our approach. Often, however, the notion of state is also associated with the 'belief system' of the participant in the experiment. In our approach we keep both notions of 'state' and 'measurement' on equal footing, whether our description concerns a physical entity or a conceptual entity. In this way, we can make optimal use of the characteristic methodological strengths of each one of the notions. It is in doing so that we observed that there is an impressive analogy between the operational-realistic description of a physical entity and the operational-realistic description of a conceptual entity, in particular for what concerns the measurement process and the effects of context on the state of the entity. As a matter of fact, one can give a SCoP description of a conceptual entity and its dynamics [2,3,4]. This justifies the investigation of quantum theory as a unified, coherent and general framework to model conceptual entities, as quantum theory is a natural candidate to model context effects and context-induced state transformations. Hence, the quantum theoretical models that we worked out for specific cognitive situations strictly derive from such investigation of quantum theory as a scientific paradigm for human cognition. In this respect, we think that each predictive success of quantum modeling can be considered as a confirmation of such general validity. It is however important to observe that, recently, potential deviations from Hilbert space modeling were discovered in two cognitive situations, namely, 'question order effects' [21] and 'response replicability' [22]. According to some authors, question order effects can be represented by sequential quantum measurements of incompatible properties [9,13,21]. However, such a representation seems to be problematical, as it cannot reproduce the pattern observed in response replicability [22], nor it can exactly fit experimental data [23,24]. We put forward an alternative solution for these effects within a 'hidden measurement formalism' elaborated by ourselves (see, e.g., [17,25,26,27,28,29] and references therein), which goes beyond the Hilbert space formulation of quantum theory (probabilities), though it remains compatible with our operational-realistic description of conceptual entities [24,30].
For the sake of completeness, we summarize the content of this paper in the following.
In Section 2, we present the epistemological foundations of the quantum theoretical approach to human cognition we developed in Brussels. We operationally describe a conceptual entity in terms of concrete experiments that are performed in psychological laboratories. Specifically, the conceptual entity is the reality of the situation which every participant in an experiment is confronted with, and the different states of this conceptual entity are the different modes of reality of this experimental situation. There are contexts influencing the reality of this experimental situation, and the relevant ones of these contexts are elements of the SCoP structure, the theory of our approach, and their influence on the experimental situation is described as a change of state of the conceptual entity under consideration. There are also properties of this experimental situation, the relevant ones being elements of the SCoP structure, and they can be actual or potential, their 'amount of actuality' (i.e. their 'degree of availability in being actualized') being described by a probability measure. The operational analogies between physical and conceptual entities suggest to represent the latter by means of the mathematical formalism of quantum theory in Hilbert space. Hence, we assume, in our research, the validity of quantum theory as a scientific paradigm for human cognition. On the basis of this assumption, we provide a unified presentation in Section 3 of the results obtained within a quantum theoretical modeling in knowledge representation, decision theory under uncertainty and behavioral economics. We emphasize that our research allowed us to identify new unexpected deviations from classical structures [32,33], as well as new genuine quantum structures in conceptual combinations [34,35,36], which could not have been identified at the same fundamental level as it was possible in our approach if we would have adopted the more traditional perspective only inquiring into the observed deviations from classical probabilistic structures. In Section 4, we analyze question order effects and response replicability and explain why a quantum theoretical modeling in Hilbert space of these situations is problematical. Finally, we present in Section 5 a novel solution we recently elaborated for these cognitive situations [24,30]. The solution predicts a violation of the Hilbert space formalism, more specifically, the Born rule for probabilities is put at stake. We however emphasize that this solution remains compatible with the general operational and realistic description of cognitive entities and their dynamics given in Section 2. In Section 6, we conclude our article by offering a few additional remarks, further emphasizing the coherence and advantage of our theoretical approach. We stress, to conclude this section, that the deviation above from Hilbert space modeling should not be considered as an indication that we should better come back to more traditional classical approaches. On the contrary, we believe that new mathematical structures, more general than both pure classical and pure quantum structures, will be needed in the modeling of cognitive processes.
2 An operational-realistic foundation of cognitive psychology Many quantum physicists agree that the phenomenology of microscopic particles is intriguing, but what is equally curious is the quantum mathematics that captures the mysterious quantum phenomena. Since the early days of quantum theory, indeed, scholars have been amazed by the the success of the mathematical formalism of quantum theory, as it was not clear at all how it had come about. This has inspired a longstanding research on the foundations of the Hilbert space formalism of quantum theory from physically justified axioms, resting on well defined empirical notions, more directly connected with the operations that are usually performed in a laboratory. Such an operational justification would make the formalism of quantum theory more firmly founded.
One of the well-known approaches to the foundations of quantum physics and quantum probability is the 'Geneva-Brussels approach', initiated by Jauch [37] and Piron [38], and further developed by our Brussels research team (see, e.g., [16,19]). This research produced a formal approach, called 'State Context Property' (SCoP) formalism, where any physical entity can be expressed in terms of the basic notions of 'state', 'context' and 'property', which arise as a consequence of concrete physical operations on macroscopic apparatuses, such as preparation and registration devices, performed in spatio-temporal domains, such as physical laboratories. Measurements, state transformations, outcomes of measurements, and probabilities can then be expressed in terms of these more fundamental notions. If suitable axioms are imposed on the mathematical structures underlying the SCoP formalism, then the Hilbert space structure of quantum theory emerges as a unique mathematical representation, up to isomorphisms [20].
There are still difficulties connected with the interpretation of some of these axioms and their physical justification, in particular for what concerns compound physical entities [16]. But, but this research line was a source of inspiration for the operational approaches applying the quantum formalism outside the microscopic domain of quantum physics [39,40]. In particular, as we already mentioned in the Introduction, a very similar realistic and operational representation of conceptual entities can be given for the cognitive domain, in the sense that the SCoP formalism can again be employed to formalize the more abstract conceptual entities in terms of states, contexts, properties, measurements and probabilities of outcomes [2,3,4].
Let us first consider the empirical phenomenology of cognitive psychology. Like in physics, where laboratories define precise spatio-temporal domains, we can introduce 'psychological laboratories' where cognitive experiments are performed. These experiments are performed on situations that are specifically 'prepared' for the experiments, including experimental devices, and, for example, structured questionnaires, human participants that interact with the questionnaires in written answers, or each other, e.g., an interviewer and an interviewed. Whenever empirical data are collected from the responses of several participants, a statistics of the obtained outcomes arises. Starting from these empirical facts, we identify in our approach entities, states, contexts, measurements, outcomes and probabilities of outcomes, as follows.
The complex of experimental procedures conceived by the experimenter, the experimental design and setting and the cognitive effect that one wants to analyze, define a conceptual entity A, and are usually associated with a preparation procedure of a state of A. Hence, like in physics, the preparation procedure sets the initial state p A of the conceptual entity A under study. Let us consider, for example, a questionnaire where a participant is asked to rank on a 7-point scale the membership of a list of items with respect to the concepts Fruits, Vegetables and their conjunction Fruits and Vegetables. The questionnaire defines the states p F ruits , p V egetables and p F ruits and V egetables of the conceptual entities Fruits, Vegetables and Fruits and Vegetables, respectively. It is true that cognitive situations exist where the preparation procedure of the state of a conceptual entity is hardly controllable. Notwithstanding this, the state of the conceptual entity, defined by means of such a preparation procedure, is a 'state of affairs'. It indeed expresses a 'reality of the conceptual entity', in the sense that, once prepared in a given state, such condition is independent of any measurement procedure, and can be confronted with the different participants in an experiment, leading to outcome data and their statistics, exactly like in physics.
A context e is an element that can provoke a change of state of the conceptual entity. For example, the concept Juicy can function as a context for the conceptual entity Fruits leading to Juicy Fruits, which can then be considered as a state of the conceptual entity Fruits. A special context is the one introduced by the measurement itself. Indeed, when the cognitive experiment starts, an interaction of a cognitive nature occurs between the conceptual entity A under study and a participant in the experiment, in which the state p A of the conceptual entity A generally changes, being transformed to another state p. Also this cognitive interaction is formalized by means of a context e. For example, if the participant is asked to choose among a list of items, say, Olive, Almond, Apple, etc., the most typical one with respect to Fruits, and the answer is Apple, then the initial state p F ruits of the conceptual entity Fruits changes to p Apple , i.e. the state describing the situation 'the fruit is an apple', as a consequence of the contextual interaction with the participant.
The change of the state of a conceptual entity due to a context may be either 'deterministic', hence in principle predictable under the assumption that the state before the context acts is known, or 'intrinsically probabilistic', in the sense that only the probability µ(p, e, p A ) that the state p A of A changes to the state p is given. In the example above on typicality estimations, the typicality of the item Apple for the concept Fruits is formalized by means of the transition probability µ(p Apple , e, p F ruits ), where the context e is the context of the typicality measurement.
Like in physics, an important role is played by experiments with only two outcomes, the so-called 'yes-no experiments'. Suppose that in an opinion poll a participant is asked to answer the question: "Is Gore honest and trustworthy?". Only two answers are possible: 'yes' and 'no'. Suppose that, for a given participant, the answer is 'yes'. Then, the state p Honesty of the conceptual entity Honesty and Trustworthiness (which we will denote by Honesty, for the sake of simplicity) changes to a new state p Gy , which is the state describing the situation 'Gore is honest'. Hence, we can distinguish a class of yes-no measurements on conceptual entities, as we do in physics.
The third step is the mathematical representation. We have seen that the Hilbert space formalism of quantum theory is general enough to capture an operational description of any entity in the microphysical domain. Then, the strong analogies between the realistic and operational descriptions of physical and conceptual entities, in particular for what concerns the measurement process, suggest us to apply the same Hilbert space formalism when representing cognitive situations. Hence, each conceptual entity A is associated with a Hilbert space H, and the state p A of A is represented by a unit vector |A ∈ H.
A yes-no measurement is represented by a spectral family {M, ½ − M }, where M denotes an orthogonal projection operator over the Hilbert space H, and ½ denotes the identity operator over H. The probability that the 'yes' outcome is obtained in such a yes-no measurement when the conceptual entity A is in the state represented by |A is then given by the Born rule µ(A) = A|M |A . For example, M may represent an item x that can be chosen in relation to a given concept A, so that its membership weight is given by µ(A).
The Born rule obviously applies to measurement with more than two outcomes too. For example, a typicality measurement involving a list of n different items x 1 , . . . , x n with respect to a concept A can be represented as a spectral measure {M 1 , . . . , M n }, where n k=1 M k = ½ and M k M l = δ kl M k , such that the typicality µ k (A) of the item x k with respect to the concept A is again given by the Born rule An interesting aspect concerns the final state of a conceptual entity A after a human judgment. As above, we can assume the existence of a nonempty class of cognitive measurements that are ideal first kind measurements in the standard quantum sense, i.e. that satisfy the 'Lüders postulate'. For example, if the typicality measurement of a list of items x 1 , . . . , x n with respect to a concept A gave the outcome x k , then the final state of the conceptual entity after the measurement is represented by the unit vector . This means that the weights µ k (A) given by the Born rule can actually be interpreted as transition probabilities µ(p k , e, p A ), where e is the context producing the transitions from the initial state p A of the conceptual entity A, represented by the unit vector |A , to one of the n possible outcome states p k , represented by the unit vectors |A k . So, how can a Hilbert space model be actually constructed for a cognitive situation? To answer this question let us consider again a conceptual entity A, in the state p A , a cognitive measurement on A described by means of a context e, and suppose that the measurement has n distinct outcomes, x 1 , x 2 , . . . , x n . A quantum theoretical model for this situation can be constructed as follows. We associate A with a n-dimensional complex Hilbert space H, and then consider an orthonormal base {|e 1 , |e 2 , . . . , |e n } in H (since H is isomorphic to the Hilbert space C n , the orthonormal base of H can be the canonical base of C n ). Next, we represent the cognitive measurement described by e by means of the spectral family {M 1 , M 2 , . . . , M n }, where M k = |e k e k |, k = 1, 2, . . . , n. Finally, the probability that the measurement e on the conceptual entity A in the state p A gives the outcome x k is given by What about the interpretation of the Hilbert space formalism above? Two major points should now be reminded, namely: (i) the states of conceptual entities describe the 'modes of being' of these conceptual entities; (ii) in a cognitive experiment, a participant acts as a (measurement) context for the conceptual entity, changing its state.
This means that, as we mentioned already, the state p A of the conceptual entity A is represented in the Hilbert space formalism by the unit vector |A , the possible outcomes x k of the experiment by the base vectors |e k , and the action of a participant (or the overall action of the ensemble of participants) as the state transformation |A → |e k induced by the orthogonal projection operator M k = |e k e k |, if the outcome x k is obtained, so that the probability of occurrence of x k can also be written as µ k (A) = µ(|e k , e, |A ), where e is the measurement context associated with the spectral family {M 1 , M 2 , . . . , M n }.
It follows from (i) and (ii) that a state, hence a unit vector in the Hilbert space representation of states, does not describe the subjective beliefs of a person, or collection of persons, about a conceptual entity. Such subjective beliefs are rather incorporated in the cognitive interaction between the cognitive situation and the human participants deciding on that cognitive situation. In this respect, our operational quantum approach to human cognition is also a realistic one, and thus it departs from other approaches that apply the mathematical formalism of quantum theory to model cognitive processes [7,9,12,13,21,22]. Of course, one could say that the difference between interpreting the quantum state as a 'state of belief' of a participant in the experiment, or as a 'state of a conceptual entity', i.e. a 'state of the situation which the participant is confronted with during an experiment', is only a question of philosophical interpretation, but comes to the same when it concerns the methodological development of the approach. Although this is definitely partly true, we do not fully agree with it. Interpretation and methodology are never completely separated. A certain interpretation, hence giving rise to a specific view on the matter, will give rise to other ideas of how to further develop the approach, how to elaborate the method, etc., than another interpretation, with another view, will do. We believe that an operational-realistic approach, being balanced between attention for idealist as well as realist philosophical interpretations, carries in this sense a particular strength, precisely due to this balance. A good example of this is how we were inspired to use the superposition principle of quantum theory in our modeling of concepts as conceptual entities. We represented the combination of two concepts by a state that is the linear superposition of the states describing the component concepts. This way of representing combined conceptual entities captures the nature of emergence, exactly like in physics. It would not be obvious to put forward this description when state of beliefs are the focus of what can be predicted.
We stress a third point that is important, in our opinion. For most situations, we interpret the effect of the cognitive context on a conceptual entity in a decision-making process as an 'actualization of pure potentiality'. Like in quantum physics, the (measurement) context does not reveal pre-existing properties of the entity but, rather, it makes actual properties that were only potential in the initial state of the entity (unless the initial state is already an eigenstate of the measurement in question, like in physics) [2,3,4].
It follows from the previous discussion that our research investigates the validity of quantum theory as a general, unitary and coherent theory for human cognition. Our quantum theoretical models, elaborated for specific cognitive situations and data, derive from quantum theory as a consequence of the assumptions about this general validity. As such, these models are subject to the technical and epistemological constraints of quantum theory. In other terms, our quantum modeling rests on a 'theory based approach', and should be distinguished from an 'ad hoc modeling based approach', only devised to fit data. In this respect, one should be suspicious of models in which free parameters are added on an 'ad hoc' basis to fit the data more closely in specific experimental situations. In our opinion, the fact that our 'theory derived model' reproduces different sets of experimental data constitutes in itself a convincing argument to support its advantage over traditional modeling approaches and to extend its use to more complex cognitive situations (in that respect, see also our final remarks in Section 6).
We present in Section 3 the results obtained in our quantum theoretical approach in the light of the epistemological perspective of this section.
On the modeling effectiveness of Hilbert space
The quantum approach to cognition described in Section 2 produced concrete models in Hilbert space, which faithfully matched different sets of experimental data collected to reveal 'decision-making errors' and 'probability judgment errors'. This allowed us to identify genuine quantum structures in the cognitive realm. We present a reconstruction of the attained results in the following.
The first set of results concerns knowledge representation and conceptual categorization and combination. James Hampton collected data on how people rate membership of items with respect to pairs of concepts and their combinations, conjunction [41], disjunction [42] and negation [43]. By using the data in [42], we reconstructed the typicality estimations of 24 items with respect to the concepts Fruits and Vegetables and their disjunction Fruits or Vegetables. We showed that the concepts Fruits and Vegetables interfere when they combine to form Fruits or Vegetables, and the state of the latter can be represented by the linear superposition of the states of the former. This behavior is analogous to that of quantum particles interfering in the double-slit experiment when both slits are open. The data are faithfully represented in a 25-dimensional Hilbert space over complex numbers [10,11].
In the data collected on the membership estimations of items with respect to pairs (A, B) of concepts and their conjunction 'A and B' and disjunction 'A or B', Hampton found systematic violations of the rules of classical (fuzzy set) logic and probability theory. For example, the membership weight of the item Mint with respect to the conjunction Food and Plant is higher than the membership weight of Mint with respect to both Food and Plant ('overextension'). Similarly, the membership weight of the item Ashtray with respect to the disjunction Home Furnishing or Furniture is lower than the membership weight of Ashtray with respect to both Home Furnishing and Furniture ('underextension'). We showed that overextension and underextension are natural expressions of 'conceptual emergence' [5,11]. Namely, whenever a person estimates the membership of an item x with respect to the pair (A, B) of concepts and their combination C(A, B), two processes act in the person's mind. The first process is guided by 'emergence', that is, the person estimates the membership of x with respect to the new emergent concept C(A, B). The second process is guided by 'logic', that is, the person separately estimates the membership of x with respect to A and B and applies a probabilistic logical calculus to estimate the membership of x with respect to C(A, B) [44]. More important, the new concept C(A, B) emerges from the concepts A and B, exactly as the linear superposition of two quantum states emerges from the component states. A two-sector Fock space faithfully models Hampton's data, and was later successfully applied to the modeling of more complex situations involving concept combinations (see, e.g., [44,45]). It is interesting to note that the size of deviation of classical probabilistic rules due to overextension and underextension generally depends on the item x and the specific combination C(A, B) of the concepts A and B that are investigated. However, we recently performed a more general experiment in which we asked the participants to rank the membership of items with respect to the concepts A, B, their negations 'not A', 'not B', and the conjunctions 'A and B', 'A and not B', 'not A and B', and 'not A and not B'. We surprisingly found that the size of deviation from classicality in this experiment does not depend on either the item or the pair of concepts or the specific combination, but shows to be a numerical constant. Even more surprisingly, our two-sector Fock space model correctly predicts the value of this constant, capturing in this way a deep non-classical mechanism connected in a fundamental way with the mechanism of conceptual formation itself rather than only specifically with the mechanism of conceptual combination [32,33].
Different concepts entangle when they combine, where 'entanglement' is meant in the standard quantum sense. We proved this feature of concepts in two experiments. In the first experiment, we asked the participants to choose the best example for the conceptual combination The Animal Acts in a list of four examples, e.g., The Horse Growls, The Bear Whinnies, The Horse Whinnies and The Bear Growls. By suitably combining exemplars of Animal and exemplars of Acts, we performed four joint measurements on the combination The Animal Acts. The expectation values violated the 'Clauser-Horne-Shimony-Holt' version of Bell inequalities [46,47]. The violation was such that, not only the state of The Animal Acts was entangled, but also the four joint measurements were entangled, in the sense that they could not be represented in the Hilbert space C 4 as the (tensor) product of a measurement performed on the concept Animal and a measurement performed on the concept Acts [34]. In the second experiment, performed on the conceptual combination Two Different Wind Directions, we confirmed the presence of quantum entanglement, but we were also able to prove that the empirical violation of the marginal law in this type of experiments is due to a bias of the participants in picking wind directions. If this bias is removed, which is what we did in an ensuing experiment on Two Different Space Directions, one can show that people pick amongst different space directions exactly as coincidence spin measurement apparatuses pick amongst different spin directions of a compound system in the singlet spin state. In other words, entanglement in concepts can be proved from only the statistics of the correlations of joint measurements on combined concepts, exactly as in quantum physics [35].
Since concepts exhibit genuine quantum features when they combine pairwise, it is reasonable to expect that these features should be reflected in the statistical behavior of the combination of several identical concepts. Indeed, we detected quantum-type indistinguishability in an experiment on the combination of identical concepts, such as the combination Eleven Animals. More specifically, we found significant evidence of deviation from the predictions of classical statistical theories, i.e. 'Maxwell-Boltzmann distribution'. This deviation has clear analogies with the deviation of quantum mechanical from classical mechanical statistics, due to indistinguishability of microscopic quantum particles, that is, we found convincing evidence of the presence of 'Bose-Einstein distribution'. In the experiment, indeed, people do not seem to distinguish two identical concepts in the combination of N identical concepts, which is more evident in more abstract than in more concrete concepts, as expected [36].
The second set of results concern 'decision-making errors under uncertainty'. In the 'disjunction effect' people prefer action x over action y if they know that an event A occurs, and also if they know that A does not occur, but they prefer y over x if they do not know whether A occurs or not. The disjunction effect violates a fundamental principle of rational decision theory, Savage's 'Sure-Thing principle' and, more generally, the total probability rule of classical probability [48]. This preference of sure over unsure choices violating the Sure-Thing principle was experimentally detected in the 'two-stage gamble' and in the 'Hawaii problem' [49]. In the experiment on a gamble that can be played twice, the majority of participants prefer to bet again when they know they won in the first gamble, and also when they know they lost in the first gamble, but they generally prefer not to play when they do not know whether they won or lost. In the Hawaii problem, most students decide to buy the vacation package when they know they passed the exam, and also when they know they did not pass the exam, but they generally decide not to buy the vacation package when they do not know whether they passed or not passed the exam. We recently showed that, in both experimental situations, this 'uncertainty aversion' can be explained as an effect of underextension of the conceptual entities A and 'not A' with respect to the conceptual disjunction 'A or not A', where the latter describes the situation of not knowing which event, A or 'not A', will occur. The concepts A and 'not A' interfere in the disjunction 'A or not A', which determines its underextension. A Hilbert space model in C 3 allowed us to reproduce the data in both experiments on the disjunction effect [45].
Ellsberg's thought experiments, much before the disjunction effect, revealed that the Sure-Thing principle is violated in concrete decision-making under uncertainty, as people generally prefer known over unknown probabilities, instead of maximizing their expected utilities. In the famous 'Ellsberg three-color example', an urn contains 30 red balls and 60 balls that are either yellow or black, in unknown proportion. One ball will be drawn at random from the urn. The participant is firstly asked to choose between betting on 'red' and betting on 'black'. Then, the same participant is asked to choose between betting on 'red or yellow' and betting on 'black or yellow'. In each case, the 'right' choice will be awarded with $100. As the events 'betting on red' and 'betting on black or yellow' are associated with known probabilities, while their counterparts are not, the participants will prefer betting on the former than betting on the latter, thus revealing what Ellsberg called 'ambiguity aversion', and violating the Sure-Thing principle [50]. This pattern of choice has been confirmed by several experiments in the last thirty years [51]. Recently, Machina identified in a couple of thought experiments, the '50/51 example' and the 'reflection example', a similar mechanism guiding human preferences in specific ambiguous situations, namely, 'information symmetry' [52,53], which was experimentally confirmed in [54]. In our quantum theoretical approach, ambiguity aversion and information symmetry are two possible cognitive contexts influencing human preferences in uncertainty situations and changing the states of the 'Ellsberg and Machina conceptual entities', respectively. Hence, an ambiguity aversion context will change the state of the Ellsberg conceptual entity in such a way that 'betting on red' and 'betting on black or yellow' are finally preferred. In other terms, the novel element of this approach is that the initial state of the conceptual entity, in its Hilbert space representation, can also change because of the pondering of the participants in relation to certain choices, before being collapsed into a given outcome. This opens the way to a generalization of rational decision theory with quantum, rather than classical, probabilities [55].
The results above provide a strong confirmation of the quantum theoretical approach presented in Section 2, and we expect that further evidence will be given in this direction in the years to come. In the next section we instead intend to analyze some situations where deviations from Hilbert space modeling of human cognition apparently occur. We will see in Section 5 that these deviations are however compatible with the general operational-realistic framework portrayed in Section 2.
Deviating from Hilbert space
As mentioned in Section 2, if suitable axioms are imposed on the SCoP formalism, the Hilbert space structure of quantum theory can be shown to emerge uniquely, up to isomorphisms [20]. However, we also know that certain experimental situations can violate some of these axioms. This is the case for instance when we consider entities formed by experimentally separated sub-entities, a situation that cannot be described by the standard quantum formalism [16,56]. Similarly, one may expect that the structural shortcomings of the standard quantum formalism can also manifest in the ambit of psychological measurements, in the form of data that cannot be exactly modeled (or jointly modeled) by means of the specific Hilbert space geometry and the associated Born rule. The purpose of this section is to describe two paradigmatic examples of situations of this kind: 'question order effects' and 'response replicability'. In the following section, we then show how the quantum formalism can be naturally completed to also faithfully model these data, in a way that remains consistent with our operational-realistic approach.
Let us first remark that the mere situation of having to deal with a set of data for which we don't have yet a faithful Hilbert space model should not make one necessarily search for an alternative more general quantum-like mathematical structure as a modeling environment. Indeed, it is very well possible that the adequate Hilbert space model has not yet been found. Recently, however, a specific situation was identified and analysed indicating that the standard quantum formalism in Hilbert space would not be able to be used to model it [22]. This situation combines two phenomena: 'question order effects' and 'response replicability'. We start by explaining 'question order effects' and how the cognitive situation in which they appear can be represented in Hilbert space.
For this we come back to the yes-no experiment of Section 2, where participants are asked: "Is Gore honest and trustworthy?". This experiment gives rise to a two-outcome measurement performed on the conceptual entity Honesty in the initial state p H , represented by the unit vector |H ∈ H, where H is a two-dimensional Hilbert space if we assume the measurement to be non-degenerate, or more generally a n-dimensional Hilbert space if we also admit the possibility of sub-measurements. Denoting {M G ,M G = ½ − M G } the spectral family associated with this measurement, the probability of the 'yes' outcome (i.e. to answer 'yes' to the question about Gore's honesty and trustworthiness) is then given by the Born rule µ Gy (H) = H|M G |H , and of course µ Gn (H) = H|M G |H = 1 − µ Gy (H) is the probability for the 'no' outcome. We then consider a second measurement performed on the conceptual entity Honesty, but this time associated with the question: "Is Clinton honest and trustworthy?". We denote {M C ,M C = ½ − M C } the spectral family associated with this second measurement, so that the probabilities for the 'yes' and 'no' outcomes are again given by µ Cy (H) = H|M C |H and µ Cn (H) = H|M C |H , respectively.
Starting from these two measurements, it is possible to conceive sequential measurements, corresponding to situations where the respondents are subject to the Gore and Clinton questions in a succession, one after the other, in different orders. Statistical data about 'Clinton/Gore' sequential measurements were reported in a seminal article on question order effects [57] and further analyzed in [9,58]. More precisely, after fixing a rounding error in [58], we have the following sequential (or conditional) probabilities [24]: where (1) corresponds to the sequence where first the Clinton and then the Gore measurements are performed, whereas (2) corresponds to the reversed order sequence for the measurements. Considering that the probabilities in each of the four columns above are sensibly different, these data describe typical 'question order effects'. Quantum theory is equipped with a very natural tool to model question order effects: 'incompatible measurements', as expressed by the fact that two self-adjoint operators, and the associated spectral families, in general do not commute. More precisely, the Hilbert space expression for the probability that, say, we obtain the answer CyGn when we perform first the Clinton measurement and then the Gore one, is [9,58]: µ CyGn (H) = H|M CMG M C |H . Similarly, the probability to obtain the outcome GnCy, for the sequential measurement in reversed order, is: µ GnCy (H) = H|M G M CMG |H . Since we have the operatorial identitȳ if the spectral families associated with the two measurements do not commute. In the following we will analyse whether non-compatibility within a standard quantum approach can cope in a satisfying way with these question order effects, and show that a simple 'yes' to this question is not possible. Indeed, a deep problem already comes to the surface in relation to the phenomenon of 'response replicability'.
Consider again the Gore/Clinton measurements: if a respondent says 'yes' to the Gore question, then is asked the Clinton question, then again is asked the Gore question, the answer given to the latter will almost certainly be a 'yes', independently of the answer given to the intermediary Clinton question. This phenomenon is called 'response replicability'. If in addition to question order effects also response replicability is jointly modeled in Hilbert space quantum mechanics, a contradiction can be detected, as shown in [22]. Let us indicate what are the elements that produce this contradiction. In standard quantum mechanics only if a state is an eigenstate of the considered measurement the outcome 'yes' will be certain in advance. Also, measurements that can transform an arbitrary initial state into an eigenstate are ideal measurements called of the first kind. According to response replicability, outcomes that once have been obtained for a measurement will have to become certain in advance if this same measurement is performed a second time. This means that the associated measurements should be ideal and of the first kind. For the case of the Gore/Clinton measurements, and the situation of response replicability mentioned above, this means that the Gore measurement should be ideal and of the first kind. But one can also consider the situation where first the Clinton measurement is performed, then the Gore measurement and afterwards the Clinton measurement again. A similar analysis leads then to the Clinton measurement needing to be ideal and of the first kind. This means however that after more than three measurements that alternate between Clinton and Gore, the state needs to have become an eigenstate of both measurements. As a consequence, both measurements can be shown to be represented by commuting operators. The proof of the contradiction between 'response replicability' and 'non-commutativity' worked out in [22] is formal and also more general than the intuitive reasoning presented above -for example, the contradiction is also proven when measurements are represented by positive-operator valued measures instead of projection valued measures, which is what we have considered here -and hence indicates that the non-commutativity of the self-adjoint operators needed to account for the question order effects cannot be realised together with the 'ideal and first kind' properties needed to account for the response replicability within a standard quantum Hilbert space setting.
Although refined experiments would be needed to reveal the possible reasons for response replicability, it is worth to put forward some intuitive ideas, as we have been developing a quantum-like but more general than Hilbert space formalism within our Brussels approach to quantum cognition [25,26,27], and we believe that we can cope with the above contradiction within this more general quantum-like setting in a very natural way. It seems to be a plausible hypothesis that response replicability is, at least partly, due to a multiplicity of effects, that however take place during the experiment itself, such as desire of coherence, learning, fear of being judged when changing opinion, etc. And a crucial aspect for both question order effects and response replicability appearing in the Gore/Clinton situation is that the sequential measurements need to be carried out with the same participant, who has to be tested again and again. This is different than the situation in quantum physics, where order effects appear for non-commuting observables also when sequential measurements are performed with different apparatuses. Hence, both question order effects and response replicability seem to be the consequence of 'changes taking place in the way each subject responds probabilistically to the situation -described by the state of the conceptual entity in our approach -he or she is confronted with during a measurement'. Since the structure of the probabilistic response to a specific state is fixed in quantum mechanics, being determined by the Born rule, it is clear that such a change of the probabilistic response to a given measurement, when it is repeated in a sequence of measurements, cannot be accounted for by the standard quantum formalism. And it is exactly such structure of the probabilistic response to a same measurement with respect to a given state that can be varied in the generalized quantum-like theory that we have been developing [25,26,27]. This is the reason that, when we became aware of the contradiction identified in [22], we were tempted to investigate whether in our generalized quantum-like theory the contradiction would vanish, and response replicability would be jointly modelizable with question order effects. And indeed, we could obtain a positive result with respect to this issue [24], which we will now sketch in the next section.
Beyond-quantum models
We presented in Section 4 two paradigmatic situations in human cognition that cannot be modeled together using the standard quantum formalism. We want now to explain how the latter can be naturally extended to also deal with these situations, still remaining in the ambit of a unitary and coherent framework for cognitive processes.
For this, we introduce a formalism where the probabilistic response with respect to a specific experimental situation, i.e. a state of the conceptual entity under consideration, can vary, and hence can be different than the one compatible with the Born rule of standard quantum theory. This formalism, called the 'extended Bloch representation' of quantum mechanics [25], exploits in its most recent formulation the fact that the states of a quantum entity (described as ray-states or density matrix-states) can be uniquely mapped into a convex portion of a generalized unit Bloch sphere, in which also measurements can be represented in a natural way, by means of appropriate simplexes having the eigenstates as vertex vectors. A measurement can then be described as a process during which an abstract point particle (representing the initial state of the quantum entity) enters into contact with the measurement simplex, which then, as if it was an elastic and disintegrable hyper-membrane, can collapse to one of its vertex points (representing the outcomes states) or to a point of one of its sub-simplexes (in case the measurement would be degenerate).
We do not enter here into the details of this remarkable process, and refer the reader to the detailed descriptions in [24,25,26,27]. For our present purposes, it will be sufficient to observe that a measurement simplex, considered as an abstract membrane that can collapse as a result of some uncontrollable environmental fluctuations, can precisely model that aspect of a measurement that in the quantum jargon is called 'wave function collapse'. More precisely, when the abstract point particle enters into contact with the 'potentiality region' represented by such membrane, it creates some 'tension lines' partitioning the latter into different subregions, one for each possible outcome. The collapse of the membrane towards one of the vertex points then depends on which subregion disintegrates first, so that the different outcome probabilities can be expressed as the relative Lebesgue measures of these subregions (the larger a subregion, the higher the associated probability). In other terms, this membrane's mechanism, with the tension lines generated by the abstract point particle, is a mathematical representation of a sort of 'weighted symmetry breaking' process. Now, thanks to the remarkable geometry of simplexes, it can be proven that if the membrane is chosen to be uniform, thus having the same probability of disintegrating in any of its points (describing the different possible measurement-interactions), the collapse probabilities are exactly given by the Born rule. In other terms, the latter can be derived, and explained, as being the result of a process of actualization of potential hidden-measurement interactions, so that the extended Bloch representation constitutes a possible solution to the measurement problem.
Thus, when the membrane is uniform, the 'way of choosing' an outcome is precisely the 'Born way'. However, a uniform membrane is a very special situation, and it is natural to also consider membranes whose points do not all have the same probability of disintegrating, i.e. membranes whose disintegrative processes are described by non-uniform probability densities ρ, which we simply call ρ-membranes. Nonuniform ρ-membranes can produce outcome probabilities different from the standard quantum ones and give rise to probability models different from the Hilbertian one (even though the state space is a generalized Bloch sphere derived from the Hilbert space geometry 1 ). But this is exactly what one needs in order to account, in a unified framework, for the situation we encounter when combining the phenomena of 'response replicability' and 'question order effects', as previously described and analysed in [22].
We thus see that it is possible to naturally complete the quantum formalism to obtain a finer grained description of psychological experiments in which the probabilistic response of a measurement with respect to a state can be different to the one described by the Born rule. Additionally, our generalized quantumlike theory also explains why, despite the fact that individual measurements are possibly associated with different non-Born probabilities, the Born rule nevertheless appears to be a very good approximation to describe numerous experimental situations. This is related to the notion of 'universal measurement', firstly introduced by one of us in [28] and further analyzed in [25,26,27,60]. In a nutshell, a universal measurement is a measurement whose probabilities are obtained by averaging over the probabilities of all possible quantum-like measurements sharing a same set of outcomes, in a same state space. In other terms, a universal measurement corresponds to an average over all possible non-uniform ρ-membranes, associated with a given measurement simplex. Following a strategy similar to that used in the definition of the 'Wiener measure', it is then possible to show that if the state space is Hilbertian (more precisely, a convex set of states inscribed in a generalized Bloch sphere, inherited from a Hilbert space), then the probabilities of a universal measurement are precisely those predicted by the Born rule.
In [24] we could show that the joint situation of question order effects and response replicability for the data collected with respect to the Gore/Clinton measurements, and others, is modelizable within our generalized quantum theory by introducing non-Born type measurements. However, we were also able to provide a better modeling of the question order effects data as such. Indeed, using standard Bornprobability quantum theory it was only possible to model approximately these data in earlier studies [58]. This is due to the existence of a general algebraic equality about sequential measurements in standard quantum mechanics which is the following [24,58,59]: This equality has been called the 'QQ-equality', and can be used as a test for the quantumness of the probability model, but only in the sense that a quantum model, necessarily, has to obey it, although the fact that it does so is not a guarantee that the model will be Hilbertian. Inserting the experimental values (1)-(2) into (4), one finds q = 0.0032 = 0. This value is small (being only 0.32% of the maximum value q can take, which is 1), which is the reason that approximate modeling can be obtained within standard quantum mechanics [58]. Note however that (3) does not depend on the dimension of the Hilbert space considered, which means that even in higher dimensional Hilbert spaces, if degenerate measurements are considered, an exact modeling would still be impossible to obtain. We have reasons to believe that also question order effects, with the QQ-equality standing in the way of an exact modeling of the data, contain an indication for the need to turn to a more general quantum-like theory, such as the one we used to cope with the joint phenomenon of question order effects and response replicability. We present some arguments in this regard in the following of this section. First, we note that in case one would choose a two-dimensional Hilbert space, which is the natural choice when dealing with two-outcome measurements, additional equalities can be written that are this time strongly violated by the data. As an example, consider the quantity [24]: If the Hilbert space is two-dimensional, one can write M G = |G G|,M G = |Ḡ Ḡ |, as well as M C = |C C|, M C = |C C |. Replacing these expressions into (6) one finds, after some easy algebra, that q ′ = 0. However, inserting the experimental values (1)-(2) into (6), one finds q ′ = −0.073 = 0, which not only is not zero, but also 29.2% of the maximum value that q ′ can take (which is 0.25). Second, let us repeat our intuitive reasoning as to why measurements in the situation of response replicability carry non-Bornian probabilities. Due to the local contexts of the collection of sequential measurements, Gore, Clinton, and then Gore again, the third measurement internally changes into a non-Bornian one, and more specifically a deterministic one for the considered state, since response replicability means that for all subsequent Gore measurements the same outcome is assured. It might well be the case, although an intuitive argument would be more complex to give in this case, that also for the situation of question order effects, precisely because they only appear if a same human mind is sequentially interrogated, non-Bornian probabilities would be required. An even stronger hypothesis, which we plan to investigate in the future, is that most individual human minds, and perhaps even all, would carry in general non-Bornian probabilities, so that the success of standard quantum mechanics and Bornian probabilities would be mainly an effect of averaging over a sufficiently large set of different human minds, which effectively is what happens in a standard psychological experiment. If this last hypothesis is true, the violation of the Born rule for question order effects and response replicability would be quite natural, since the same human mind is needed to provoke these effects. Indeed, our analysis in [26,27] shows that standard quantum probabilities in the modeling of human cognition can be explained by considering that in numerous experimental situations the average over the different participants will be quite close to that of a universal measurement, which as we observed is exactly given by the Born rule. In other terms, even if the probability model of an individual psychological measurement could be non-Hilbertian, it will generally admit a first order approximation, and when the states of the conceptual entity under investigation can be described by means of a Hilbert space structure, this first order approximation will precisely correspond to the quantum mechanical Born rule.
If the above considerations provide an interesting piece of explanation as to why the Born rule is generally successful also beyond the micro-physical domain, at the same time it also contains a plausible reason of why it will possibly be not successful in all experimental situations, i.e. when the average is either not large enough, or when the experiment is so conceived that it doesn't apply as such. This could be the typical situation of question order effects and response replicability, since in this case we do not consider an average over single measurements, but over sequential (conditional) measurements. And this could be an explanation of why Hilbertian symmetries like those described above can be easily violated and that it will not be possible, by means of the Born rule, to always obtain an exact fit of the data [24,30].
Additionally, as we said, it allowed us to precisely fit the data by using the extended Bloch representation, and more specifically simple one-dimensional locally uniform membranes inscribed in a 3-dimensional Bloch sphere that can disintegrate (i.e. break) only inside a connected internal region [24]. Thanks to this modeling, we could also understand that the reason the Clinton/Gore and similar data appear to almost obey the QQ-equality (4) is quite different from the reason the equality is obeyed by pure quantum probabilities. Indeed, in a pure quantum model two specific contributions to the q-value (4), called the 'relative indeterminism' and 'relative asymmetry' contributions, are necessarily both identically zero, whereas we could show, using our extended model, that for the data (2), and similar data, these two contributions are both very different from zero, but happen to almost cancel each other, thus explaining why the q = 0 equality is almost obeyed, although the probabilities are manifestly non-Bornian [24].
Final considerations
In this article we explained the essence of the operational-realistic approach to cognition developed in Brussels, which in turn originated from the foundational approach to physics elaborated initially in Geneva and then in Brussels (in what has become known as the 'Geneva-Brussels school'). Our emphasis was that this approach is sufficiently general, and fundamental, to provide a unitary framework that can be used to coherently describe, and realistically interpret, not only standard quantum theory, but also its natural extensions, like the extended Bloch model and the GTR-model. In this final section we offer some additional comments on our approach to cognition, taking into consideration the confusion that sometimes exists between 'ad hoc (phenomenological) models' and 'theoretical (first principle) models', as well as the critique that a Hilbertian model (and a fortiori its possible extensions) is suspicious because it allows 'too many free parameters' to obtain an exact fit (and not just an approximate fit) for all the experimental data.
In that respect, it is worth emphasizing that the principal focus of our 'theory of human cognition' is not to model as precisely as possible the data gathered in psychological measurements. A faithful modeling of the data is of course an essential part of it, but our aim is actually more ambitious. In putting forward our methodology, consisting in looking at instances of decision-making as resulting from an interaction of a decision-maker with a conceptual entity, we look first of all for a theory truly describing 'the reality of the cognitive realm to which a conceptual entity belongs', and additionally also 'how human minds can interact with the latter so that decision-making can occur'.
In this sense, each time we have put forward a model for some specific experimental data, it has always been our preoccupation to also make sure that (i) the model was extracted following the logic that governs our theory of human cognition, and (ii) that whatever other experiments would be performed by a human mind interacting with that same cognitive-conceptual entity under consideration, also the data of these hypothetical additional experiments could have been modeled exactly in the same way. Clearly, this requirement -that 'all possible experiments and data' have to be modeled in an equivalent wayposes severe constraints to our approach, and it is not a priori evident that this would always be possible. However, we are convinced that the fundamental idea underlying our methodology, namely that of looking upon a decision as an interaction of a human mind with a conceptual entity in a specific state (with such state being independent of the human minds possibly interacting with it), equips the theory of exactly those degrees of freedom that are needed to model 'all possible data from all possible experiments'.
As we already explained in the foregoing, in all this we have been guided by how physical theories deal with data coming from the physical domain. They indeed satisfy this criterion and are able to model all data from all possible experiments that can be executed on a given physical entity. What we have called 'conceptual entity' is what in physics corresponds to the notion of 'physical entity'. Now, in our approach we might be classified as adhering to an idealistic philosophy, i.e. believing that the conceptual entities "really exist," and are not mere creations of our human culture. Our answer to this objection is the following: to profit of the strength of the approach it is not mandatory to take a philosophical stance in the above mentioned way, in the sense that we are not obliged to attribute more existence to what we call a conceptual entity than that attributed, for example, to 'human culture' in its entirety. The importance of the approach lies in considering such a conceptual entity as independently existing from any interaction with a human mind, and describe the continuously existing interactions with human minds as processes of the 'change of state of the conceptual entity', and whenever applicable also as processes of the 'change of context'. And again, let us emphasize that this 'hidden-interaction' methodology is inspired by its relevance to physical theories. Our working hypothesis is that in this way it will be possible to advantageously model, and better understand, all of human cognition experimental situations.
Having said this, we observe that the interpretation of the quantum formalism that is commonly used in cognitive domains is a subjectivist one, very similar to that interpretation of quantum theory known as 'quantum Bayesianism', or 'QBism' [62]. In a sense, this interpretation is the polar opposite of our realistic (non-subjectivistic) operational approach. Indeed, QBism originates from a strong critique [63] of the famous Einstein-Podolsky-Rosen reality criterion [64], whereas at the foundation of the Geneva-Brussels approach there is the idea of taking such criterion not only extremely seriously, but also of using it more thoroughly, as a powerful demarcating tool separating 'actually existing properties' from 'properties that are only available to be brought into actual existence', and therefore exist in a potential sense [65]. In other terms, a quantum state is not considered in QBism as a description of the actual properties of a physical entity, but of the beliefs of the experimenter about it. Similarly, for the majority of authors in quantum cognition, a quantum state is a description of the state of belief of a participant, and not of the actual state of the conceptual entity that interacts with the participants. In ultimate analysis, this difference of perspectives is about taking a clear position regarding the key notion of 'certainty': is certainty (probability 1 assignments) just telling us something about the very firm belief of a subject, or also about some objective properties of the world (be it physical or cultural)? In the same way, are probabilities only shared personal beliefs, based on habit, or also elements of reality (considering that in principle their values can be predicted with certainty)? Although we certainly agree that it is not necessary to take a final stance on these issues to advantageously exploit the quantum mathematics in the modeling of many experimental situations, both in physics and cognition, we also think that the explicative power of a pure subjectivist view rapidly diminishes when we have to address the most remarkable properties of the physical and conceptual entities, like non-locality (non-spatiality) and the non-compositional way with which they can combine.
It is important to emphasize that the subjectivist view is also a consequence of the absence, in the standard quantum formalism, of a meaningful description of what goes on 'behind the scenes' during a measurement. On the other hand, the hidden-measurement paradigm, as implemented in the extended Bloch representation [25], or even more generally in the GTR-model [26,27,30], offers a credible description of the dynamics of a measurement process, in terms of a process of actualization of potential interactions, thus explaining a possible origin of the quantum indeterminism. This certainly allows understanding the socalled 'collapse of the state vector' as an objective process, either produced by a macroscopic apparatus in a physics laboratory, or by a mind-brain apparatus in a psychological laboratory. As we tried to motivate in the second part of this article, this completed version of the quantum formalism also allowed us to describe those aspects of a psychological measurements -the possible different ways participants can choose an outcome -that would be impossible to model by remaining within the narrow confines not only of the standard formalism, but also of a strict subjectivistic interpretation of it.
To conclude, a final remark is in order. Quantum cognition is undoubtedly a fascinating field of investi-gation also for physicists, as it offers the opportunity to take a new look at certain aspects of the quantum formalism and use them to possibly make discoveries also in the physical domain. We already mentioned the example of 'entangled measurements', that were necessary to exactly model certain correlations. Entangled (non-separable) measurements are usually not considered in the physics of Bell inequalities, while they are widely explored in quantum cryptography, teleportation and information. However, it is very possible that this stronger form of entanglement will prove to be useful for the interpretation of certain non-locality tests and the explanation of 'anomalies' that were identified in EPR-Bell experiments [34]. Also, for what concerns the notion of 'universal measurement', which is quite natural in psychological measurements, since data are obtained from a collection of different minds, could it be that 'universal averages' also happen in the physical domain? In other terms, could it be that a single measurement apparatus is actually more like 'a collection of different minds' than 'a single Born-like mind' ? Considering that the origin of the observed deviations from the Born rule, in situations of sequential measurements, can be understood as the ineffectiveness of the averaging process in producing the Born prescription, is it possible to imagine, in the physics laboratory, similar experimental situations where these deviations would be equally observed, thus confirming that the hypothesis of 'hidden measurement-interactions' would be a pertinent one also beyond the psychological domain? Whatever the verdict will be, we certainly live in a very stimulating time for foundational research; a time where the conceptual tools that once helped us building a deeper understanding of the 'microscopic layer' of our physical reality are now proving to be instrumental for understanding our human 'mental layer'; but also a time where all this is also coming back to physics, not only in the form of possible new experimental findings, but also of possible new and deeper understandings [66,67,68,69,70]. | 15,957.2 | 2015-12-29T00:00:00.000 | [
"Physics"
] |
Spontaneous emission of an atom near an oscillating mirror
We investigate the spontaneous emission of one atom placed near an oscillating reflecting plate. We consider the atom modeled as a two-level system, interacting with the quantum electromagnetic field in the vacuum state, in the presence of the oscillating mirror. We suppose that the plate oscillates adiabatically, so that the time-dependence of the interaction Hamiltonian is entirely enclosed in the time-dependent mode functions, satisfying the boundary conditions at the plate surface, at any given time. Using time-dependent perturbation theory, we evaluate the transition rate to the ground-state of the atom, and show that it depends on the time-dependent atom-plate distance. We also show that the presence of the oscillating mirror significantly affects the physical features of the spontaneous emission of the atom, in particular the spectrum of the emitted radiation. Specifically, we find the appearance of two symmetric lateral peaks in the spectrum, not present in the case of a static mirror, due to the modulated environment. The two lateral peaks are separated from the central peak by the modulation frequency, and we discuss the possibility to observe them with actual experimental techniques of dynamical mirrors and atomic trapping. Our results indicate that a dynamical (i.e. time-modulated) environment can give new possibilities to control and manipulate also other radiative processes of two or more atoms or molecules nearby, for example their cooperative decay or the resonant energy transfer.
The spontaneous emission rate and the emission spectrum of an atom inside a dynamical (time-modulated) photonic crystal, when its transition frequency is close to the gap of the crystal, have been recently investigated by the authors, finding modifications strictly related to the time-dependent photonic density of states [40]. These findings suggested that a dynamical environment can give further possibilities to control radiative processes of atoms, which is of fundamental importance for many processes in quantum optics and its applications. In this framework, the main aim of the present paper is to investigate the effects of a different kind of dynamical (time-dependent) environment, specifically an oscillating mirror, on the spontaneous decay of one atom in the vacuum, discussing both the decay rate and the emitted spectrum. As discussed later in this paper, this system appears within reach of actual experimental techniques of atomic trapping and dynamical mirrors. Spontaneous emission of a two-level atom near an oscillating plate has been recently investigated in [41] using a simple model, where the quantized electromagnetic field is modeled as two one-dimensional fields, and in the rotating wave approximation; in [41] it was also assumed that only field modes propagating within a small solid angle toward the mirror, and reflected back onto the atom, are affected by the mirror oscillation, while all other modes are assumed unaffected by its motion. These assumptions were justified by considering a specific atom-mirror-detector experimental setup, and in the spectrum they evaluate the photon population only in directions perpendicular and parallel to the atom-mirror direction. In our paper we instead consider a more general model, where the complete (three-dimensional, with all modes) electromagnetic field is quantized with the time-dependent boundary conditions determined by the (adiabatically) oscillating mirror. The contribution of all modes, whichever their propagation direction with respect to the mirror, is thus included in our calculation of the decay rate and of the spectrum, and we take into account the influence of the mirror's motion on photons propagating in all directions; also, a general orientation of the atomic dipole moment is considered. We also show that our system and our model are feasible with current experimental techniques, allowing observation of the effects we predict.
As mentioned above, we consider the full three-dimensional quantum electromagnetic field in the presence of a reflecting wall that oscillates adiabatically, so that the time dependence of the Hamiltonian is entirely enclosed in the time-dependent mode functions, satisfying the boundary conditions at the oscillating plate at any given time. By using time-dependent perturbation theory, we first evaluate the transition rate of the atom from the excited to the ground state, and show that, as a consequence of the motion of the conducting mirror, it depends on time. Because of our adiabatic approximation, its time dependence follows the law of motion of the plate. Moreover, we show that the oscillatory motion of the mirror significantly affects also other physical features of the spontaneous emission of the atom, in particular the spectrum of the emitted radiation. We find that, for times larger than the inverse of the mirror's oscillation frequency, two lateral peaks in the spectrum appear; their distance from the central peak is equal to the oscillation frequency of the mirror (smaller peaks at a distance twice the mirror's oscillation frequency are also present). These peaks, contrarily to the case investigated in [40] for an atom in a dynamical photonic crystal, are symmetric with respect to the central peak. All this allows a sort of fine tuning of the emitted radiation exploiting the environment, and it could be relevant when other resonant processes are considered, for example the cooperative spontaneous decay of two or more atoms, or the resonance energy transfer between two atoms or molecules.
The paper is organized as follows. In Section II, we introduce our system, and investigate the decay rate of the two-level system in the presence of the oscillating mirror. In Section III we investigate the spectrum of the radiation emitted by the atom, and discuss its main physical features. Section IV is devoted to our conclusive remarks.
II. SPONTANEOUS EMISSION RATE OF ONE ATOM NEAR AN OSCILLATING MIRROR
Let us consider an atom, modeled as a two-level system with atomic transition frequency ω 0 , located in the halfspace z > 0 near an infinite perfectly conducting plate. Let us suppose that the mirror oscillates with a frequency ω p , along a prescribed trajectory a(t) = a sin(ω p t), where a is the oscillation amplitude of the plate, and z = 0 is its average position. Although a mechanical motion with high oscillation frequencies is very difficult to obtain, it can be simulated by a dynamical mirror, that is a slab whose dielectric properties are periodically changed (from transparent to reflecting, for example), as obtained in proposed experiments for detecting the dynamical Casimir and Casimir-Polder effect [29,42,43]. Dynamical mirrors have been recently obtained in the laboratory, with oscillation frequencies up to several GHz. They are based on a superconducting cavity with one wall covered by a specific semiconductor layer, having a high mobility of the carriers and very short recombination times. A train of laser pulses, with multigigahertz repetition rate, is then sent to the semiconductor layer: it creates a plasma sheet, that periodically changes the semiconductor layer from transparent to reflecting, thus simulating a mechanical motion of the cavity wall with frequencies that cannot be reached through a mechanical motion [42,43]. The atom can be kept at a fixed position by atomic trapping techniques [44]. Our physical system is pictured in Figure 1, showing also relevant orientations, parallel and perpendicular, of the atomic transition dipole moment with respect to the plate.
We wish to investigate the effect of the mirror motion on the decay features (mainly decay rate and spectrum) of the two-level atom placed nearby, and interacting with the quantum electromagnetic field, initially in its vacuum state. We assume that the reflecting plate oscillates adiabatically, and that its maximum velocity is such that v p = aω p c, in order to have a nonrelativistic motion. Our adiabatic approximation is satisfied if the oscillation frequency of the plate ω p is much smaller than the atomic transition frequency ω 0 , and is also much smaller than the inverse of the time taken by a photon, emitted by the excited atom, to travel the atom-plate distance (ω p c/z 0 , where z 0 is the average atom-plate distance). In this case the atom instantaneously follows the plate's motion. Real photons emission from the oscillating mirror by the dynamical Casimir effect can be thus neglected. These conditions are satisfied for typical values of the relevant parameters, currently achievable in the laboratory: ω 0 ∼ 10 15 s −1 , ω p ∼ 10 9 s −1 , and z 0 ∼ 10 −6 m. Under these assumptions, the mode functions of the field, satisfying the boundary conditions at the plate surface, depend explicitly on time. We may obtain their expression by generalizing the usual expressions of the field mode functions for a static mirror [3], to the dynamical case using the instantaneous time-dependent atom-plate distance. The mode functions can be written in the following form, after separating their vector part, whereê kj are polarization unit vectors, k is the wavevector, j = 1, 2 the polarization index, and = x, y, z. The expression of the scalar functions f ( ) (k, r(t)), that do not depend from the polarization j, is where we have indicated with g (k x , k y ) ( = x, y, z) the time-independent part of the mode functions (2-4), given by In Eqs. (5)(6)(7), L is the side of a cubic cavity of volume V = L 3 where the field is quantized (the cavity walls are at x = ±L/2, y = ±L/2, z = 0, that is the average position of the oscillating mirror, and z = L); then, the limit L → ∞ is taken, in order to recover the single oscillating mirror at z = 0. Also, z(t) is the time-dependent atom-plate distance, that changes in time according to the equation of motion z(t) = z 0 − a(t) = z 0 − a sin(ω p t).
The Hamiltonian of our system, in the Coulomb gauge and in the multipolar coupling scheme, within the dipole approximation [45][46][47][48], is: where H I is the interaction term, given by In (9), µ is the matrix element of the atomic dipole moment operator between the ground and the excited states (assumed real), S z and S ± the atomic pseudospin operators, a kj (a † kj ) the bosonic annihilation (creation) operators of the electromagnetic field. We note that, due to the adiabatic approximation, the interaction Hamiltonian is timedependent through the mode functions only, while the field operators are the same as in the static case.
We now calculate the probability that the atom, initially excited with the field in the vacuum state, decays to its ground state at time t by emitting one photon with wavevector k and polarization j. Time-dependent perturbation theory up to the first order in the atom-field coupling gives After polarization sum, using the relation, with , m = x, y, z, Eq. (10) becomes The expression (12) is valid for any orientation of the atomic dipole moment with respect to the plate. In order to evaluate the decay probability, we sum (12) over k, obtaining |c(t)| 2 = k |c k (t)| 2 ; then we take continuum limit We now consider the specific cases of a dipole moment oriented parallel or orthogonal to the oscillating plate. If the dipole moment is along the x−direction, then only the components = m = x are nonvanishing, and, after some algebra, we get where we have limited the integration over k to a band of width ∆ω/c around k 0 = ω 0 /c. This is justified by the fact that only (resonant) field modes with a frequency around the atomic transition frequency ω 0 = ck 0 give a relevant contribution to the integral over k. At the end of the calculation, we will take the limit ∆ω → ∞.
Substituting the explicit expression of the scalar mode functions (2) into (13), after some algebra we find For a dipole moment oriented along the y−direction, we obtain for symmetry the same result (14), after substitution of µ x with µ y .
In the case of a dipole moment orthogonal to the mirror, that is = m = z, using the same procedure above, from (12) we obtain After some algebra, we get We wish to stress that our results given by Equations (14) and (16) are valid within our adiabatic approximation. meaning that the oscillation frequency of the plate is much smaller than both the atomic transition frequency (ω p ω 0 ) and the inverse of the time taken by a light signal to cover the atom-plate distance (ω p c/z 0 ). Typical experimental values, ω 0 ∼ 10 15 s −1 , ω p ∼ 10 10 s −1 and z 0 ∼ 10 −6 m, well satisfy these conditions.
From (14) and (16) we can obtain the corresponding decay rates by taking their time derivative, Γ x(y) (z 0 , t) = d dt |c(t)| 2 x(y) for a dipole moment oriented parallel to the oscillating mirror (i.e. oriented along the x− or y−direction), and Γ z (z 0 , t) = d dt |c(t)| 2 z for a dipole moment perpendicular to the plate. For a dipole moment randomly oriented, µ 2 x = µ 2 y = µ 2 z = µ 2 /3, we finally get the (time-dependent) decay rate where A 21 = 4µ 2 k 3 0 /3 is the Einstein coefficient for spontaneous emission. Our result (17) has a simple physical interpretation: it has the same structure of the rate for a static wall (see, for example Ref. [3]), but with the atom-wall distance replaced by the time-dependent distance z 0 − a sin(ω p t), as indeed expected on a physical ground due to the adiabatic hypothesis.
For small oscillations of the plate, keeping terms up to the first order in a, we obtain Second and higher-order terms are negligible when a/z 0 1, and k 0 z 0 of the order of unity or less. The expression above gives the total decay rate of our two-level atom near the oscillating mirror. When compared to the analogous quantity in the static case, the main difference is the presence of a time-dependent term. In particular, the quantity in the first line of (18) is the familiar decay rate of an atom near a static perfectly reflecting plate. Whereas, the other terms (second row of (18)) depend on time, and describe the effect of the adiabatic motion of the conducting plate. They oscillate in time according to the oscillatory motion of the mirror, coherently with our adiabatic approximation. These new terms are of the order of a/z 0 , and give a time and space modulation of the decay rate directly related to the dynamics of the environment.
III. SPECTRUM OF THE RADIATION EMITTED
We now show that other relevant features appear in the radiation emitted by the atom in the presence of the (adiabatically) oscillating boundary, specifically significant changes of its spectrum. We now evaluate the probability amplitude c(kj, t) that the atom, initially prepared in the excited state with the field in the vacuum state, decays to its ground state by emitting a photon in the field mode (kj). Time-dependent perturbation theory (at first order in the atom-field coupling) gives As in the previous section, we make the approximation of small oscillations of the plate (a z 0 ), and expand the mode functions of the field, keeping only terms up to the second order in a. A straightforward calculation gives where the functions g i (k x , k y ) (i = x, y, z) have been defined in Eqs. (5-7). In the vector notation, indicating with r 0 = (0, 0, z 0 ) the atom's position, the instantaneous atom-mirror distance is r(t) = r 0 − a sin(ω p t), with a = (0, 0, a), and we have where f [f with i = x, y, z (see Eqs. (1-4)). Putting the expansion (22) into (19), taking into account (23) and (24), and integrating over time, we obtain By taking the squared modulus of Eq. (25), we now evaluate the transition probability at time t. After some algebra, and keeping only terms up to second order, we find where is the 0−th order contribution, coinciding with that obtained for a static boundary, while P (dyn) kj (t) is the change to the spectrum due to the motion of the plate. The dynamical correcting term is obtained as where we have defined the functions and We stress that our results above include the effect of the mirror's motion on photons propagating in any direction. Inspection of Eqs. (27) and (28), taking into account (29), (30) and (31), clearly shows the modifications of the spectrum of the spontaneously emitted radiation: the presence of two lateral peaks, at frequencies ω k = ω 0 ± ω p , in addition to the ordinary resonance peak at ω k = ω 0 . From (31), the presence of (smaller) lateral peaks at the frequencies ω k = ω 0 ± 2ω p is also evident, indicating a sort of nonlinear behavior of the system.
In order to obtain an explicit expression of the emitted spectrum as a function of the frequency, we sum over polarizations and perform the angular integration, obtaining the probability density for unit frequency. Keeping only terms up to second order in a (a < z 0 , k −1 0 ), a lengthy but straightforward calculation gives with and Thus, in the dynamical case, we find, apart the usual peak at ω k = ω 0 , the presence of two lateral peaks of the radiation emitted at frequencies ω k = ω 0 ± ω p , related to the presence of the energy denominators in Eq. (34) (see also (29) and (30)). Other, smaller, lateral peaks at ω k = ω 0 ± 2ω p are also present, given by the last term in (34).
Our results above, being based on a second-order perturbative expansion on the oscillation amplitude a of the mirror, are valid for small oscillations, a/z 0 1, (ak 0 ) 2 1, and for k 0 z 0 of the order of unity or less. These conditions are feasible for typical experimental values. For example, if ω 0 = ck 0 ∼ 10 15 s −1 , we can reasonably take z 0 ∼ 10 −6 m and a ∼ 10 −7 m. These values are also fully compatible with our adiabatic assumption, using realistic values of ω p of a few GHz. Higher oscillation amplitudes can be exploited in the case of Rydberg states, which have much lower values of k 0 [50]; in such a case, the oscillation frequency of the plate ω p must be smaller accordingly, due to our adiabatic approximation. Figure 2 shows a plot of the emitted spectrum by the excited atom in the limit of long times (t ω −1 p , but small enough to make valid our perturbative approach), as a function of ω k − ω 0 , showing the two lateral peaks at ω = ω 0 ± ω p . We found a similar behavior of the spectrum for a two-level atom located inside a dynamical photonic crystal [40]; in that case, however, the two lateral peaks were strongly asymmetric due to the different density of states at the edges of the the photonic band gap. Instead, in the present case of an atom in the vacuum space near an oscillating boundary, the two lateral peaks are symmetric, because the photonic density of states is essentially the same at the peaks' frequencies (the rapid oscillations in the figure, as well as in the next Figure 3, come from the fact that we are considering finite times; thus, the physical meaning should be extracted from the envelope of the curve plotted).
Inspection of (34) shows that the lateral peaks become more and more evident for times larger than ω −1 p . Figure 3 gives the spectral density as a function of time, for ω p = 1.5 · 10 9 s −1 , clearly showing the lateral peaks growing with time, and becoming sharper and well identifiable when t 2πω −1 p . In order to resolve the lateral peaks in the emitted spectrum, their distance ω p from the central peak must be larger that the natural linewidth of the emission line. For example, if we consider the optical transition between the levels n = 3 and n = 2 of the hydrogen atom, the natural width of the line is ∼ 10 8 s −1 , and an oscillation frequency of ν p = ω p /2π ∼ 10 9 s −1 or more is thus sufficient to resolve the lateral lines; such a frequency can be actually reached with the technique of dynamical mirrors [42,43] mentioned in the Introduction. In the case of Rydberg atoms, the plate oscillation frequency must be much smaller; however, also the natural linewidth of the transition can be very small if the Rydberg atoms are prepared in a circular state [50]. Also, actual experimental techniques make feasible trapping of low-density gases of Rydberg atoms with micrometric precision, and for sufficiently long times [29,51,52], as well as trapping atoms at submicrometric distances from a surface [53]. This should make easier to experimentally observe the effects found in the system here investigated, in particular the lateral peaks of the spectrum, rather than in the case of atoms in a dynamical photonic crystal previously investigates [40].
Our results show that spontaneous emission can be controlled (enhanced or suppressed) by modulating in time the position of a perfectly reflecting plate, and that the spectrum of the emitted radiation can be controlled through the oscillation frequency of the plate. This suggest the possibility to control also other radiative processes through modulated (time-dependent) environments, for example the cooperative decay of two or more atoms, or the resonance energy transfer between atoms or molecules. These systems will be the subject of a future publication.
IV. CONCLUSIONS
In this paper, we have investigated the features of the spontaneous emission rate and of the emitted spectrum of one atom, modeled as a two level system, near an oscillating perfectly reflecting plate, in the adiabatic regime. We have discussed in detail the effect of the motion of the mirror on the spontaneous decay rate, and shown that it is modulated in time. We have also found striking modifications of the emission spectrum, that exhibits, apart the usual peak at ω = ω 0 , two new lateral peaks separated from the atomic transition frequency by the oscillation frequency of the plate. The possibility to observe these lateral peaks with current experimental techniques of dynamical mirrors and atomic trapping has been also discussed. Our findings for the spontaneous emission indicate that modulated environments can be exploited to manipulate and tailor the spontaneous emission process; also, they strongly indicate that a dynamical environment could be successfully exploited to modify, activate or inhibit also other radiative processes of atoms or molecules nearby. | 5,356.4 | 2019-09-26T00:00:00.000 | [
"Physics"
] |
Test Statistics for the Identification of Assembly Neurons in Parallel Spike Trains
In recent years numerous improvements have been made in multiple-electrode recordings (i.e., parallel spike-train recordings) and spike sorting to the extent that nowadays it is possible to monitor the activity of up to hundreds of neurons simultaneously. Due to these improvements it is now potentially possible to identify assembly activity (roughly understood as significant synchronous spiking of a group of neurons) from these recordings, which—if it can be demonstrated reliably—would significantly improve our understanding of neural activity and neural coding. However, several methodological problems remain when trying to do so and, among them, a principal one is the combinatorial explosion that one faces when considering all potential neuronal assemblies, since in principle every subset of the recorded neurons constitutes a candidate set for an assembly. We present several statistical tests to identify assembly neurons (i.e., neurons that participate in a neuronal assembly) from parallel spike trains with the aim of reducing the set of neurons to a relevant subset of them and this way ease the task of identifying neuronal assemblies in further analyses. These tests are an improvement of those introduced in the work by Berger et al. (2010) based on additional features like spike weight or pairwise overlap and on alternative ways to identify spike coincidences (e.g., by avoiding time binning, which tends to lose information).
Introduction
The principles of neural coding and information processing in biological neural networks are still not well understood and are the topic of ongoing debate. As a model of network processing, neuronal assemblies were proposed in [1], which are intuitively understood as groups of neurons that tend to exhibit synchronous spiking.
In recent years considerable improvements have been made in multiple-electrode recordings and spike sorting (see, e.g., [2,3]) that allow monitoring the activity of up to hundreds of neurons simultaneously. These improvements open the possibility of identifying neuronal assemblies from multiple-electrode recordings using statistical data analysis techniques. However, several methodological problems remain when trying to do so and, among them, a principal one is the combinatorial explosion that we face when considering all potential neuronal assemblies (since in principle every subset of the recorded neurons constitutes a candidate set for an assembly). For this reason, most studies that deal with temporal spike correlation still resort to analyzing only pairwise interactions (see, e.g., [4][5][6][7]), thus considerably reducing the computational complexity of such task. There are approaches in the literature that try to infer higherorder correlation and potential assembly activity by building primarily on these pairwise interactions (see, e.g., [8][9][10][11]) but, although they can sometimes provide a hint of higherorder correlation and even closely identify assembly activity (provided it is sufficiently pronounced), higher-order correlations need to be checked directly in order to properly identify neuronal assemblies, mostly for two reasons: first, to make sure that the activity reported is actually that of an assembly and not just of several overlapping pairs and, second, to increase the sensitivity for assembly activity as pairwise tests may not be affected sufficiently by assembly activity (see, e.g., [12,13]). Some approaches already do so (see, e.g., [14][15][16]) yet they are all generally limited to a small number of neurons. Others presented in some of our recent companion papers (see, e.g., [17][18][19]) push this limitation by employing frequent item set mining methodology and 2 Computational Intelligence and Neuroscience algorithms to ease and speed up the search through all the candidate sets for potential assemblies, yet combinatorial explosion remains a fundamental problem (especially since statistical tests aiming at identifying assembly activity often rely on randomization or surrogate data approaches, which drive up the computational complexity even further).
In this paper we present several statistical tests to identify individual assembly neurons (i.e., neurons that are part of an assembly). Our tests extend and considerably improve those presented in [20], which were based on time binning and were mostly intended to identify exact (or almost exact) spike synchrony-which is more a theoretical simplification for modelling purposes rather than a realistic assumption. With the new tests introduced in this paper we can do much better: first, we introduce new features into the tests that make them more sensitive (like, e.g., spike weights or pairwise overlap of spikes) and, second, we introduce new ways to identify spike coincidences (i.e., we introduce alternatives to time binning to avoid the loss of detectable synchronous activity). The main motivation of our tests is to reduce the set of neurons only to a relevant subset of them and in this way ease the task of identifying neuronal assemblies in further analyses (i.e., by reducing the total number of neurons to those that tested positive in our approach, the combinatorial explosion can be reduced significantly). The idea of all tests that we present in this paper is fairly simple: we evaluate whether an individual neuron is involved significantly more often in some correlated-spiking event (that depends on the particular test) than it would be expected by chance under the assumption of noncorrelation (i.e., independence). In order to assess significance we estimate the distribution of our test statistics by means of randomized trials (i.e., collections of parallel spike trains): modifications of our original data that are intended to keep all its essential features except synchrony for the neuron we are testing.
The paper is structured as follows: in Section 2 we mainly introduce some notation that we will be using throughout the paper and briefly discuss the notion of spike synchrony, central to our research. In Section 3 we introduce our test statistics to identify assembly neurons. First, in Section 3.1 we provide four statistical tests that rely on a windowbased approach to identify spike coincidences. Technically speaking, different collections of windows provide different ways of counting spike coincidences and thus different tests. We consider in our evaluations two collections of windows: the first one we consider is a partition of the recording time of our spike data into equal intervals (i.e, time bins), on which the bin-based model (the almost exclusively applied model of synchrony in the neurobiology literature) relies in order to identify spike coincidences. The second one we consider, more in keeping with a time-continuous account of spiking activity, is a collection of sliding windows (one for each spike time) able to account for all spike coincidences in our spike trains that fall within the window length and that is consistent with the common, intended characterization of spike synchrony in the field, which regards two or more spikes as synchronous if they lie within a certain distance from each other (to be determined by the modeller). Second, in Section 3.2, we offer a graded, continuous alternative to some of the previous tests. In Section 4 we briefly discuss the complexity of computing the test statistics presented in the two previous sections. In Section 5 we evaluate the performance of our new test statistics on artificially generated collections of spike trains based on parameters learned from typical real recordings, compared to the performance of those in [20], and show that the former clearly outperform the latter. Finally, in Section 6 we summarize results.
Preliminary Definitions, Remarks, and Notation
Let be our set of items (i.e., in our context, neurons). We will be working with parallel spike trains, one for each neuron in , formalized as spike-time sequences (i.e., point processes) of the form { 1 , . . . , } ⊂ (0, ], for ∈ and ∈ R (the recording time), where is the number of times neuron fires in the interval (0, ]. We denote the set of all these sequences by S. Sets of sequences like S constitute our raw data. In order to identify (potential) assembly neurons and, ultimately, neuronal assemblies we need to determine first what constitutes spike synchrony: exact spike coincidences cannot be expected and thus an alternative, nontrivial characterization of synchrony is needed. Generally it is considered that two or more spikes are synchronous (or coincident)that is, they constitute a synchronous event-if they lie within a certain (user-defined) distance from each other, say ∈ R + . We will assume this notion of spike synchrony throughout.
The bin-based method, the almost exclusively applied method for dealing with synchronous spiking in the neurobiology literature, builds on the notion of synchrony above: the recording time is partitioned into time bins (i.e., windows) of equal length ( above, the time distance within which the modeller intends to define synchrony) and all those spikes that lie in the same time bin are regarded as synchronous. Notice though that the bin-based method can fail to identify some synchronous events: two or more spikes can be separated by a time distance way smaller than and lie in two distinct time bins-what we called in other companion papers the boundary problem, which we addressed by means of an alternative method to identify and count spike coincidences which builds on an alternative window set, defined in the next section (that matches the intended characterization of spike synchrony given above), introduced in [17]. In order to illustrate the relevance of the boundary problem and the huge impact that time-bin boundaries have on the identification of synchrony we show, in Figure 1, the probability that spike coincidences of different sizes (with respect to different ratios between the scatter of the spikes-the time span of the spikes in the coincidence-and bin width) are cut by a time-bin boundary.
Statistics
In order to identify assembly neurons from S-like data we propose here several statistics based on a variety of ideas (already briefly sketched in Section 1). Figure 1: Probability that a group of spikes (i.e., an -spike coincidence) is cut by a bin boundary. The parameter is the scatter in the group (i.e., the time span or maximum distance that can exist between any two spikes in the group) and is the bin width (i.e., time span within which we characterize synchrony). Probabilities are on the vertical axis.
First Set: Window-Based Tests.
We first present four test statistics that are based on counts of spike coincidences in a collection of (sliding) bins or windows X. We denote the number of such windows by (i.e., |X| = ). We denote by the number of windows, where all neurons in a set ⊆ fire. To simplify we sometimes avoid set notation: instead of writing, for example, { , } , we write , for { , } ⊆ . ⊆ is the subset of neurons that fire in the th window.
Conditional Pattern Cardinalities (CPC 1 ). This test (first introduced in [20] for time binning) builds on the idea that neurons participating in assemblies should have more neurons firing synchronously with them (due to the spikes of the other assembly neurons and the background spikes that are merely synchronous by chance) than it would be expected by chance under the assumption that they are not assembly neurons. Therefore, if ∈ belongs to a neuronal assembly, the average cardinality of the spike coincidences in which neuron participates should be bigger than that expected by chance (i.e., under the assumption of independence). In order to formalize our test statistic (T cpc 1 ) we first define the amounts and as follows, for ∈ : where 1 is the indicator function of the set (i.e., 1 ( ) = 1 if ∈ and 1 ( ) = 0 otherwise) and ∈ [1,∞) is a user-specified variable that, for values greater than 1, weights large cardinalities more strongly than smaller ones (on the understanding that mainly large cardinalities tell us about assembly activity while small ones can simply respond to chance events). In other words, is the unconditional average (for = 1) pattern cardinality (taking all windows into account) while is the conditional average pattern cardinality given neuron (i.e., conditional on neuron : only windows containing a spike of neuron are taken into account). If neuron does not participate in an assembly the two averages should not differ significantly. However, if neuron participates in an assembly then we would expect to be (significantly) larger. Therefore, by comparing the two averages we obtain a test for assembly participation. We formalize this comparison by defining the test statistic T cpc 1 with respect to neuron ∈ , as follows: Conditional Item Frequencies (CIF 1 ). This test (first introduced in [20] for time binning) is based on the idea that, if ∈ belongs to one or more neuronal assemblies, it should fire more often with other neurons, namely, those that are also part of the assembly or assemblies, than it would be expected by chance under the assumption that it is not an assembly neuron.
For each neuron (for ̸ = ) we consider the number of windows, where neurons , fire together and its expected number̂to build our test statistic (the latter is estimated aŝ, witĥ= / -the estimated firing frequency of neuron ). If exceedŝ(significantly) then neurons , are likely to be part of the same assembly, due to which we see more cooccurrences of spikes of these two neurons that can be expected by chance. If, on the contrary, is less than̂, it is highly likely that the observed cooccurrences are merely chance events. We formally express this intuition by means of our test statistic T cif 1 as follows, for neuron : where is, here and throughout the rest of this section, a boolean operator that returns value 1 if the condition holds (i.e., in this statistic, if >̂) and 0 otherwise (note that we are only interested in the former case, which could be indicative that neuron belongs to an assembly). The value ∈ [1, ∞) offers the possibility of weighting large numbers of spike coincidences for pairs of the form { , } (over the expected ones) more than smaller ones.
Conditional Item Weight (CIW 1 ). The previous test statistic (i.e., T cif 1 ) was built based on the number of observed and expected spike coincidences of sets of the form { , } (where is the neuron tested and ∈ \ { }) without taking into account the cardinality of the sets ⊂ in the windows, where such { , }-coincidences occurred. It is plausible that { , }-coincidences that cooccur with many more spikes are more indicative of correlation (assembly activity) than only a few cooccurrences. Basically, in order to build this new statistical test, we combine the idea on which T cpc 1 is based (i.e., that larger pattern cardinalities are possibly indicative of assembly activity) and that of T cif 1 (i.e., that a neuron participating in an assembly fires more often together with some other specific neurons-those also in the assembly) and combine them by weighting spike cooccurrences with the corresponding pattern cardinality. This test statistic goes beyond what was presented in [20] and, given that we are bringing together two pieces of information that proved effective for our purposes (pattern cardinality and coincident spiking with other specific neurons), it can be expected to yield considerably better performance.
We formalize this idea by means of our test statistic T ciw 1 . In order to define such statistic we first need the values and defined as follows: In other words, gives us the sum of the cardinalities of all sets of neurons in \ { } that fire together with neuron over our collection of windows X (i.e., the occurrences of spikes of neuron are weighted with the cardinality of the pattern in the window they appear in. Thus, is the total size of patterns containing a spike of neuron ). Similarly, gives us the sum of the cardinalities of all sets of neurons in \ { } that fire together with neurons and over X (i.e., the cooccurrences of spikes of neurons , are weighted with the cardinality of the pattern in the window in which they occur).
We define the test statistic T ciw 1 , with a user-specified power , as follows: witĥ= / the estimated firing frequency of neuron . The parameter ∈ [1, ∞), as in previous statistics and in those that follow, offers the possibility of weighting larger (average) spike coincidences more than smaller ones.
Conditional Pattern Overlap (CPO 1 ). While all preceding statistics were computed from aggregates over values computed from individual windows, for the test statistic we present now, we consider pairs of windows in which the neuron ∈ tested fires together with another set of neurons. The idea underlying this statistic is that cooccurrences of spikes of neuron with those of any other neuron (as considered in the two preceding statistics) may still be chance events. However, if spikes of several other neurons all occur together twice (as we look at pairs of windows) with spikes of the tested neuron , this is a much stronger indicator of assembly activity. Apart from this difference, this statistic employs the same idea as T ciw 1 , only that the overlap of pairs takes the role of a single pattern.
We formalize this idea by means of the test statistic T cpo 1 , which we define as follows: : Example: A collection of spike trains for neurons , , , , that contain a neuronal assembly formed by { , , }three injected coincidences in the example, circled in blue. The window set X (i.e., time binning) is considered in our example (yielding a partition with ten windows). We are interested in testing whether neuron is part of an assembly. CPC 1 : in order to compute 1 ( ) we consider the number of spikes of neurons , , , in each window and sum over. We get, for our example, 1 ( ) = 1.8. We proceed in a similar way to assess 1 ( ) by only considering those windows in which neuron fires, which yields 1 ( ) = 2. We thus get that T cpc 1 1 ( ) = 1/9 (concluding that is an assembly neuron depends on the significance of the value 1/9-see Section 5.1). CIF 1 : we have that = 3 and̂= 2 and that = 3 and = 1.6 (for the other two pairs-i.e., , and , -its number of coincidences is lower than its expected one under independence). These numbers yield T where (| ∩ \ { }| > 1) excludes patterns overlapping only in one neuron.
A simple example on how these test statistics that we have just presented are computed is given in Figure 2.
In Section 5 we report results on the evaluation of these statistical tests for two window sets of particular interest, which we denote by X and X S (except T 1 , which was only evaluated on X ). "b" stands for "bin" and "w" for "(sliding) window." The subscript S reflects the dependence of X S on the underlying collection of spike trains S: (i) X is a partition (of intervals of length , the time span within which we define spike synchrony) of the recording time ; (ii) X S is the set given by all the intervals of the form [ , + ], for all ∈ { 1 , . . . , } (in S) and all ∈ . The real value refers to the particular (user-defined) time span.
Our definition of X is motivated by the bin-based model of synchrony that, as mentioned earlier, partitions the recording time into time bins of equal length and counts as synchronous those spikes that lie in the same bin (which constitutes the most popular method for the identification of synchronous spiking in the neurobiology literature and the reference for the statistical tests presented in [20]). However, as we explained earlier (and illustrated by means of Figure 1), such an account of synchronous spiking leads to missing Computational Intelligence and Neuroscience 5 potential synchronous groups: groups of spikes that lie within the time span that determines synchrony (say , as above)and thus should be identified as synchronous-but that, due to the placement of the bin boundaries, fall into different time bins and are thus not reported as synchronous by the bin-based model. In order to bring more flexibility to the placement of the bin boundaries and this way achieve a better account of spike synchrony some possibilities come naturally to our mind. Maybe the most natural way would be to look at each spike and check its neighborhood, considering a time span /2 in each direction (i.e., considering the window [ − /2, + /2], for the corresponding spike time). However, this has the disadvantage that looking only at /2 in each direction may still miss synchronous spiking, hence the natural possibility of considering a neighborhood with span in each direction, but this increases the number of chance occurrences. The next option is then to let a window (of length ) slide over the spike trains stopping at each spike, which captures each spike coincidence in the range given by at least once. Such a collection of windows is given by X S .
Second Set: Time-Continuous Approach.
In this section we offer a continuous version of some of the previous statistical tests that are implicitly built on a graded, continuous notion of spike synchrony. We consider, for each spike ∈ { 1 , . . . , } and ∈ , an influence region that corresponds to the distance within which two or more spikes are regarded as synchronous (i.e., for a time span ∈ R + , we would define the influence region of spike as the interval [ − /2, + /2]). From the influence region we define the function as follows: In what follows we will represent spikes by these maps (i.e., will be represented by above). We call functions of this form influence maps (and the windows of the form [ − /2, + /2] underlying them are called influence regions). Such functions constitute the building blocks of the synchrony model that we introduce in our companion paper [21], which is characterized by a graded notion of synchrony (which differs substantially from the intended notion of synchrony in this paper, which is bivalent): the degree of synchrony among two or more spikes is defined as the integral (i.e., area) of the intersection of their corresponding influence maps. Such degree is thus a value in the interval [0, 1] (e.g., 0 if the time distance between any two spikes is greater than or equal to and 1 if there is exact time synchrony between them).
Next we define F as follows: where is the map corresponding to spike . In other words, any spike time that lies in an interval of the form [ − /2, + /2], for a spike of neuron ∈ (and that, thus, should be regarded as synchronous with ), will be given, by F ( ), value 1.
Conditional Pattern Cardinalities (CPC 2 ).
We introduce now a continuous version of the test statistic T cpc 1 , in terms of influence regions and influence maps, which we denote by T cpc 2 . Formally, for ∈ , we define the values and as follows: with Here ∈ [1,∞) is, as in previous statistics and all others that we will be presenting in this section, a weighting parameter that, for values greater than 1, weights large spike coincidences more strongly than smaller ones. As with T cpc 1 , and measure average spike cardinalities (notice that gives us, at each , the number of influence regions corresponding to spikes of neurons in \{ } that overlap and, thus, the number of spikes that lie in the window [ − /2, + /2]). As with and in T cpc 1 , we expect ( ) to be bigger than ( ) if ∈ is an assembly neuron. Based on this intuition, we formally define the test statistic T cpc 2 as follows, for ∈ :
Conditional Item Frequencies (CIF 2 ).
We present now an adaptation of the test statistic T cif 1 to influence maps and a continuous domain, which we will denote by T cif 2 , and that responds to the same ideas as T cif 1 . For each neuron we define the values and̂as follows: We formally define the statistic T cif 2 as follows, for neuron : where is the boolean operator returns value 1 if >â nd 0 otherwise. 6 Computational Intelligence and Neuroscience
Conditional Item Weight (CIW 2 ). A continuous version of the test statistic T ciw 1 is that which we denote by T ciw 2 .
In order to formalize our continuous version of the statistic we first define the values and as follows: We define T ciw 2 as follows, for neuron : wherêis the frequency ( )/ . As before, is the boolean operator returns value 1 if > and 0 otherwise.
Computational Complexity
In this section we briefly analyze the complexity of computing our statistics.
First of all, if we take as reference the window set X (i.e., binning), we have that CPC 1 , CIF 1 , and CIW 1 are linear in the number of windows in X . Also, as it is probably clear, CPC 1 is constant in the number of neurons (only the pattern cardinality is taken into account; the composition of the pattern itself is irrelevant) and CIF 1 and CIW 1 are linear (since one needs to loop over the neurons). More formally, we have that the complexity of computing CPC 1 is at most of the order ( ), where is the number of time bins, and that the complexity of CIF 1 and CIW 1 is of the order ( + ), where is the number of spikes and is the number of neurons. As for CPO 1 , it is quadratic in the number of time bins and linear in the number of neurons. More formally, we have that its complexity is of the order ( 2 ) (this bound could be reduced by the size of the largest set of neurons that fires together in a window, which would replace ). If, instead, we consider the window set X S then we have that the computation of CPC 1 , CIF 1 , and CIW 1 is linear in the total number of spikes and that CIF 1 and CIW 1 are also linear in the number of neurons. Formally, the complexity of CPC 1 is of the order ( ) and that of CIF 1 and CIW 1 is of the order ( ) (where, as before, could be replaced by the largest number of neurons firing together in a window). The statistics CPC 2 , CIF 2 , and CIW 2 have the same complexities as its window-based counterparts.
Evaluation
In this section we show some results concerning the evaluation of our statistical tests on artificially generated collections of spike trains. Such artificially generated collections, in which all assemblies-and thus assembly neurons-are known, are necessary in order to assess whether our test statistics do what they are supposed to do which is to identify all assembly neurons and discard all those that are not. Only on such data a proper evaluation of our test statistics is possible.
For the results reported in this section we generate our collections of spike trains as follows: for each signature (where stands for the size of the neuronal group and for the number of spike coincidences injected) we generate 1000 trials, each consisting of 100 spike trains (one for each neuron) independently generated as 3-second Poisson processes (i.e., = 3) of constant rate 20 Hz (which represent the background activity), with injected spike coincidences of a particular -neuron pattern containing the neuron we are testing for (for the neurons with injected synchronous spikes, a corresponding number of background spikes were removed and thus the background firing of the assembly neurons was adjusted accordingly). In order to generate such coincidences a random choice of points in the interval (0, ] is considered for each trial and added to the background spiking activity. In trials with nonexact coincidences (i.e., jittered trials, as opposed to nonjittered trials with exact coincidences) a random shift is added, which we model by means of a uniform random variable on the interval [−0.0015, 0.0015] (i.e., ±1.5 maximal millisecond shift, in keeping with the time span = 0.003 and the corresponding length of windows and influence regions that we are considering for our statistics). More results and diagrams corresponding to artificially generated data with slightly different settings can be found in http://www.borgelt.net/docs/napa.pdf. The general conclusions that could be drawn from them do not differ from those reported here.
Significance.
To estimate the distribution of the test statistics we generate surrogate data from our original spike trains as follows: modifications of the original data that are intended to keep all its essential features except synchrony among the neuron we are testing and the others (see, e.g., [22] or [23] for a survey and analysis of methods to generate surrogate data from parallel spike trains). In order to keep as many properties of the original data as possible we create only a surrogate train for the neuron we are currently testing, which replaces the original train. The trains of all other neurons are left unchanged. With the surrogate train the test statistic is recomputed. Generating a surrogate train and recomputing the test statistic are repeated 1000 times, in order to obtain an estimate of the distribution of the test statistic. We then determine the fraction of surrogate trains that produced a test statistic value exceeding the one obtained with the actual (real) train and thus obtain a value. Note that, for testing another neuron, the original (real) train of any neuron tested before is used. That is, no surrogate trains are evaluated for neurons other than the one to be tested. Figures 3-6 feature diagrams with rates of false negatives for each signature ⟨ , ⟩, with , ∈ {1, . . . , 12} over the 1000 trials; that is, the rate of trials (over 1000) in which Figure 4: Rate of false negatives on jittered trials (i.e., with nonexact coincidences). Test statistics CPC 1 , CIF 1 , CIW 1 , and CPO 1 with respect to the window set X (i.e., binning). Column (a) shows results for the parameter = 1 and column (b) for = 3. the tested neuron that belongs to the group with injected coincidences is not identified as an assembly neuron-on the understanding that a group of neurons of size at least 3 with at least 3 spike coincidences in our trials constitutes a potential neuronal assembly (see, e.g., [17] or [18] for a better insight).
Results.
Maybe it is worth stressing that, if we were to test a neuron that does not belong to an assembly, it would be identified by our test statistics as an assembly neuron (i.e., a false positive) in about 1% of our trials (which is probably clear, since this is our significance level, learned from uncorrelated trials). In Figure 3 we show results for the window-based statistics CPC 1 , CIF 1 , CIW 1 , and CPO 1 on X (i.e., when considering time binning for the identification of spike coincidences). The first two test statistics (i.e., CPC 1 and CIF 1 ) were already introduced and evaluated in a companion paper ( [20]) on artificially generated trials based on slightly different-but essentially comparable-settings. As the diagrams in Figure 3 show, the two new test statistics CIW 1 and CPO 1 introduced in this paper report considerably lower rates of false negatives than those already introduced in [20] on nonjittered trials (the best performance being that of CPO 1 that, as was seen in the previous section, is more costly than the other three in terms of computational efficiency). The performance of all such statistics with respect to = 3 tends to be substantially better than statistics with = 1 for most signatures: for CPC 1 an increase in the exponent yields an increase in sensitivity towards smaller patterns (i.e., towards smaller values for ) while for CIF 1 such an increase yields an improvement Figure 6: Rate of false negatives on jittered trials (i.e., with nonexact coincidences). Test statistics CPC 2 , CIF 2 , and CIW 2 . Column (a) shows results for the parameter = 1 and column (b) for = 3.
in sensitivity towards a smaller number of coincidences (i.e., towards smaller values for ). CIW 1 combines both effects, since it combines pattern cardinality assessment and coincidence counts (which is precisely what was intended with the definition of this statistic). The effect of on CPO 1 is even higher, since it exploits cooccurrences not only of pairs but of larger groups of neurons. Figure 4 shows results for the same window-based statistics on jittered trials. As can be expected, the performance of all such statistics worsens substantially when dealing with nonexact spike coincidences. This is due to the above mentioned boundary problem when using the window set X (i.e., binning): two or more spikes can be less than milliseconds apart (in our evaluations = 0.003) but still lie in different windows and thus be regarded as nonsynchronous (a detailed analysis and quantification of the effect of the boundary problem can be found in our companion paper [24]). In order to improve performance when dealing with nonexact spike coincidences and to overcome the boundary problem in binning we introduced an alternative window set X S for our window-based statistics and a time-continuous alternative to them by means of our test statistics CPC 2 , CIF 2 , and CIW 2 . Diagrams in Figure 5 show the performance of our window-based statistics CPC 1 , CIF 1 , and CIW 1 on X S (CPO 1 becomes very inefficient on X S -due to the much larger number of windows and its quadratic complexity in the number of windows-and thus was not tested). Performance of such statistics on X S is, for most signatures, better than the corresponding performance on X (more so with respect to = 3). As was mentioned, such improvement is mostly due to the fact that, by considering X S in place of X , we identify all injected coincidences. Diagrams in Figure 6 show the performance of our test statistics CPC 2 , CIF 2 , and CIW 2 . Overall, the performance of our window-based statistics on X S and the corresponding time-continuous statistics (based on influence regions and influence maps) are not clearly distinguishable from the diagrams and, among all test statistics introduced in this paper, CIW 1 and CIW 2 seem to yield the best results.
We are currently exploring possibilities to transfer the ideas on which CPO 1 is based to work with X S without incurring quadratic computational complexity and also in the time-continuous approach. Although it is unclear how the statistic could be expressed in terms of influence regions (in the time-continuous approach), with such a transfer one can hope to achieve even better performance, as was seen for CPO 1 when working with X .
Conclusion
In this paper we have presented several test statistics to identify assembly neurons from multiple-electrode recordings. The aim of such statistics is to reduce the set of neurons to a relevant subset of them and in this way ease the task of identifying neuronal assemblies in further analyses (a task which, due to the large amount of neurons that can nowadays be recorded, is undermined by the computational explosion that comes from having to consider every possible subset of them as a potential neuronal assembly).
We have provided two types of statistics as follows: the window-based statistics (CPC 1 , CIF 1 , CIW 1 , and CPO 1 ) and the time-continuous statistics (CPC 2 , CIF 2 , and CIW 2 ). The former rely on a window-based approach to identify spike coincidences and the latter on what we called influence regions (i.e., a time span around each spike within which synchrony with other spikes is defined-two or more spikes are synchronous in these settings if their influence regions overlap). For the window-based statistics we considered two window sets in our evaluations as follows: a partition of the recording time of our spike data into equal intervals (which is called binning)-on which the bin-based model of synchrony relies in order to identify spike coincidences-and a collection of sliding windows (one for each spike time), able to account for all spike coincidences in our spike trains that fall within the window length, which is more in keeping with the common, intended characterization of spike synchrony in the field, which regards two or more spikes as synchronous if they lie within a certain distance from each other.
Two of the window-based statistics (CPC 1 and CIF 1 ) were first presented and evaluated with binning in a companion paper ( [20]) for artificially generated nonjittered trials (i.e., with exact spike coincidences injected). In this paper we have shown that the two novel window-based statistics here presented (i.e., CIW 1 and CPO 1 ) perform substantially better in such settings, in terms of rates of false negatives.
Performance of the latter is still better on jittered trials, yet, in these settings, test statistics based on the sliding-window set and the time-continuous ones yield much better results, as was shown. | 8,747.2 | 2015-03-08T00:00:00.000 | [
"Computer Science"
] |
A new dihydrofurocoumarin from the fruits of Pandanus tectorius Parkinson ex Du Roi
Abstract From the fruit of Pandanus tectorius Parkinson ex Du Roi, one new dihydrofurocoumarin, named pandanusin A (1) and 15 known compounds, including one furanocoumarin (2), two coumarins (3, 4), four lignans (5–8), one neolignan (9), two flavonoids (10, 11), three phenolics (12–14), one monoglyceride (15) and one monosaccharide (16) were isolated by various chromatography methods. Among them, compounds (3–5) were obtained from the Pandanus genus for the first time and compounds (9–14, 16) were reported from this species for the first time. Their structures were elucidated by HR–ESI–MS, NMR 1D and 2D experiments and comparison with previous reported data. The α-glucosidase inhibitory activity of all compounds was measured. The isolated compounds (1–12, 14) showed better α-glucosidase inhibitory activity (IC50 = 42.2, 36.5, 84.7, 73.2, 40.8, 26.7, 76.5, 33.8, 68.1, 14.4, 22.1, 81.5, 43.8 μM, respectively) than the standard drug acarbose (IC50 = 214.5 μM).
General experimental procedures
The optical rotations were determined on a Krüss-optronic gmbH polarimeter equipped with a sodium lamp (589 nm). The HR-ESI-MS was performed on a Bruker MicrOTOF-QII spectrometer. The 1 H NMR (500 MHz), 13 C NMR (125 MHz), DEPT, COSy, HSQC and HMBC spectra were recorded on a Bruker AM500 FT-NMR spectrometer using tetramethylsilane as internal standard. CD spectrum were recorded on Jasco J-815 CD spectropolarimeter. Column chromatography was carried out using Merck Silica gel normal-phase (230-240 mesh) and reversed-phase C 18 (Merck). Analytical thin-layer chromatography was carried out in silica gel plates (Merck DC-Alufolien 60 F 254 ). Compounds were visualised by spraying with aqueous 10% H 2 SO 4 and heating for 3-5 min.
Plant material
The fruits of P. tectorius were collected in Binh Thuan province, Vietnam in February 2013 and authenticated by Mr Dang Van Son, Institute of Tropical Biology, Vietnam Academy of Science and Technology. A voucher specimen (No PT-125) was deposited in Bioactive Compounds Laboratory, Institute of Chemical Technology, Vietnam Academy of Science and Technology, Vietnam.
α-Glucosidase inhibition assay
The inhibitory activity of α-glucosidase was determined according to the modified method of Kim et al. (2008) and 3 mM p-Nitrophenyl-α-d-glucopyranoside (25 μL) and 0.2 u/mL α-glucosidase (25 μL) in 0.01 M phosphate buffer (pH 7) were added to the sample solution (625 μL) to start the reaction. Each reaction was carried out at 37 °C for 30 min and stopped by adding 0.1 M Na 2 CO 3 (375 μL). Enzymatic activity was quantified by measuring absorbance at 401 nm. One unit of α-glucosidase activity was defined as amount of enzyme liberating p-nitrophenol (1.0 μM) per min. The IC 50 value was defined as the concentration of α-glucosidase inhibitor that inhibited 50% of α-glucosidase activity. Acarbose, a known α-glucosidase inhibitor, was used as positive control.
Conclusion
This phytochemical investigation on the ethyl acetate extract of the fruit of P. tectorius has led to the isolation and structure elucidation of one new dihydrofurocoumarin (1) and 15 known compounds (2-16). This study indicated that phenolic compounds are the main components of the fruits of P.tectorius. Moreover, we investigated the inhibitory activity against enzyme α-glucosidase of isolated compounds. As a result, the isolated compounds except for 13, 15 and 16 showed promoting effects on enzyme α-glucosidase. via inhibition of carbohydrate-hydrolysing enzyme α-glucosidase, the isolated compounds from this plant retarded absorption of glucose known to be beneficial in therapy in type-II diabetes. In conclusion, this species can be completely appropriated with the good antidiabetic drug.
Disclosure statement
No potential conflict of interest was reported by the authors.
Funding
This work was supported by the Vietnam National Foundation for Science and Technology Development (NAFOSTED) [grant number 104.01-2012.72]. | 851.2 | 2016-05-26T00:00:00.000 | [
"Chemistry"
] |
Single-atom alloy catalysts designed by first-principles calculations and artificial intelligence
Single-atom-alloy catalysts (SAACs) have recently become a frontier in catalysis research. Simultaneous optimization of reactants’ facile dissociation and a balanced strength of intermediates’ binding make them highly efficient catalysts for several industrially important reactions. However, discovery of new SAACs is hindered by lack of fast yet reliable prediction of catalytic properties of the large number of candidates. We address this problem by applying a compressed-sensing data-analytics approach parameterized with density-functional inputs. Besides consistently predicting efficiency of the experimentally studied SAACs, we identify more than 200 yet unreported promising candidates. Some of these candidates are more stable and efficient than the reported ones. We have also introduced a novel approach to a qualitative analysis of complex symbolic regression models based on the data-mining method subgroup discovery. Our study demonstrates the importance of data analytics for avoiding bias in catalysis design, and provides a recipe for finding best SAACs for various applications.
• Page 6: Can you comment at all on the relative importance of the primary features included in the model relative to their impact on the overall prediction? This may lead to interesting general conclusions, like elucidating whether the primary features of the guest or host metal play a larger role for a given target property. Discussing the relative importance of guest vs. host metal features would be quite informative.
• Page 7: It seems the authors identify Tc alloys as promising SAACs. It's worth noting that there may be other health/safety considerations when using Tc in catalytic applications due to the fact that all Tc isotopes are radioactive.
• Page 8: I believe that the manuscript would benefit from an expanded discussion of Figure 3 that explains the general trends that emerge from the high-throughput screening results (e.g., in general, what types of guest atoms yield SAACs with low hydrogen dissociation barriers? What guest/host combinations lead to small segregation energies and why in terms of atomic radii size or other features?) Minor Comments Main Text • Figure 1: If you change solid red circles to be different symbols for hollow bridge, bridge, top that would be more information-rich and potentially informative (just a suggestion). • Table S1 caption. "the surface-based primary features were calculated using the slab unit cell consisting of one atom per atomic layer." Should be "The". • Page 6: The text indicates that the primary features DC, DC*, DT, DT*, DS, and DS* appear in every dimension of the descriptors for hydrogen binding energy and dissociation barrier. However, based on Table 1, it is unclear what the DT and DS primary features are as opposed to the DT* and DS* primary features. From reading the SI, it seems * denotes host metal from guest atom feature. I think this * notation can be clarified in Table 1. • Page 9: "higher stability and efficiency than the reported ones, making them perfectly optimized for practical applications." Perfectly optimized seems to be a strong choice of words here. Perhaps remove the word "perfectly".
Minor Comments on Supporting Information • Page 1: "Spin-polarization effects are tested for and included where appropriate." Is it noted somewhere for which spin polarization effects are included? This is a vague statement and could perhaps be made more explicit • Figure S1 caption. "bcc(110) e," should be bcc(110) (e) • Table S3: "Binding energy of host metal dimers", So this is a dimer energy for A(g) + A(g) -> A2(g)? Could perhaps be clarified. • Font size for the captions in Figures S3-S5 are smaller than the other Figure S captions (i.e., font size 10 vs. 12). • Table S5: "Number of system with the predicted and calculated segregation energy meet the same condition of SE < kTln(10) (Nmeet)…" Perhaps it should read as "Number of systems with the predicted and calculated segregation energies that meet the same condition…" Reviewer #2 (Remarks to the Author): The manuscript presents machine learning models of single atom catalysts and screening procedure for design of hydrogenation catalysts based on this new type of alloys emerged in recent years. The features designed are easily available properties that are tabulated including electronic structure, bulk properties, etc. The target properties include the binding energy, activation barrier and the segregation. Those properties are crucial for screening high performance hydrogenation catalysts. While the work is thoroughly done in those aspects, this does not reach the standard of Nat Commu. 1. The novelty of the approach is lacking. Compressed sensing is used recently in M. Andersen, S. V. Levchenko, M. Scheffler, K. Reuter, Beyond Scaling Relations for the Description of Catalytic Materials. ACS Catal. 9, 2752-2759 (2019). 2. While the SISSO with cross validation is reasonably accurate for training a small dataset, its generalization to new systems is still the biggest problems for all current learning framework. Active learning approach was used to tackle this problem (K. Tran, Z. W. Ulissi, Active learning across intermetallics to guide discovery of electrocatalysts for CO2 reduction and H2 evolution. Nature Catalysis. 1, 696-703 (2018).), while a large amount of calculations are required. The current study used only ~300 datapoints for training and extend the model to ~5000 space without validation of model prediction. 3. The criteria for screening catalysts used in this study is arbitrary. Without detailed kinetics, the approach can only provide a rough screening of candidate materials. 4. For segregation, a recent study by Grabow et al. (K. K. Rao, Q. K. Do, K. Pham, D. Maiti, L. C. Grabow, Extendable Machine Learning Model for the Stability of Single Atom Alloys. Top. Catal. (2020), doi:10.1007/s11244-020-01267-2.). Even the *H binds weakly on the metals, its effect on the segregation is not considered in any of those studies. 5. The most fundamental problem of this study and the approach in general is their lacking of understanding the uniqueness of single atom alloys. Although the SISSO method comes up formula in reduced feature space, the physics is missing. The message to the community by the study is rather incremental while does not provide a way forward to tackle all those issues.
The authors show that by assembling a large number of atomic, bulk and allow descriptors (table1) they are able to perform a high dimensional correlation with the ab initio data to yield property predictions FAR more accurate than the existing simple concepts. On the one hand this is a great step forward for screening studies on the other hand if I have a more complex fitting function, I do expect a better fit. The one worry I have is this then become a brute force approach without the intellectual understanding that can be provided by a simple model. In this respect it might have been more intellectually pleasing for the authors to consider if there was a smaller subset of parameters (2-3) that might do a reasonable job (better than linear fits but not the fullblown set) which might hint at a simpler model. As is, the approach is fine I do worry about both overfitting/underfitting of data but do believe the authors have covered this ground adequately.
Finally, then the result of this study is that using their model they can rapidly predict the results of DFT calculations and use that data to make predictions about activity and stability based on simple energetic parameters such as presented in Figure 4. In my opinion this is the most important plod tin the whole paper and the authors did not really deal with its ramifications very well. The wisdom in single atom catalysts (particularly for hydrogenation) is that the more active the species the less stable if will be-hence the scarcity of single atoms (dilute alloys) that are reported. If the authors are correct there is a large abundance of materials far in the lower right-hand corner (active and stable) that should break this trend whereas those that do exist are mostly in the upper right-hand corner (active but less stable). This is the most significant discovery/prediction in the paper as far as I am concerned, and the authors barely comment on it. Sadly, a follow-on experimental study making targets and validating the prediction would be a breakthrough and this is also not done.
Ultimately my problem is that screening for screening sake, without understanding new things, and without verifying that my parameters to define the screening criteria are valid is a reasonable technical accomplishment and not consistent with an advance I would expect in a Nature Journal. I think this paper would be highly appropriate for a journal such as ACS catal or a chem-informatics journal but other than a more advanced fitting procedure for predicting DFT data I see no real advance here.
Response to Referees
Reviewer 1: This interesting work leverages the recently developed SISSO (sure independence screening and sparsifying operator) algorithm to develop descriptors of stability and activity for screening single atom alloy catalysts (SAACs). The main impact of the work seems to be in identifying a number of new SAACs for potential experimental study. Two SAACs (Mn/Ag(111) and Pt/Zn(0001)) are identified as particularly promising. The paper is application-based in nature and doesn't appear to contribute significant conceptual advances for SAACs or machine learning applications. While the work seems to be well done and the paper is well-written, and the SAAC and ML topics are of great interest these days, the paper scope is not particularly ambitious. This work may be suitable for Nat. Comm. if its scope is slightly broadened and fundamental insight is improved. My comments and suggestions are provided below.
Response:
We thank the reviewer for the positive comments on our work. Indeed, by identifying the model with both the activity and stability parameters of the SAACs we could confirm the experimentally studied high performance SAACs. Moreover, we predict two new particularly promising systems. Keeping the reviewer's suggestions in mind, in the revised manuscript we now analyze the correlations of each component of the selected best descriptor with the target properties and discuss their physical significance. We also highlight the importance of using the combination of features rather than focusing on individual feature's role in the description of the target properties. Thus, we have stepped beyond the well-established d-band center theory, scaling relationships, and the Brønsted-Evans-Polanyi relationship, and have focused on the importance of data analytics in finding new SAACs. 1) Page 3: Figure 1a shows the H-atom binding energy vs. the d-band center for the Ag(110) host surface. The d-band center is typically calculated from the projected DOS, and it is unclear which surface atom the d-states are being taken from based on the text. Please clarify. Furthermore, using the Ag(110) surface to show that the d-band model is broken because of SAACs is not very convincing. Ag(110) should not follow the d-band model due to the fact that it's d-band is completely full; therefore binding trends on Ag(110) should depend more on changes in sp electron density and Pauli repulsion (see DOI: https://link.springer.com/chapter/10.1007/978-94-015-8911-6_11 for details on why Ag(110) should not follow the d-band model). Do you have other examples of SAAC breaking scaling relations besides a host metal with a full d-band?
Response:
We thank the reviewer for highlighting this important aspect. In the present study, the d-band centers are calculated from the d orbitals projected on the single guest atom only. Note that, to validate the choice of our d-band centers, we have calculated d-band centers for the d orbitals projected on (i) the single guest atom and it's 1 st nearest neighbor shell and (ii) the whole slab. However, the correlation between the binding energy and these later two d-band centers are found to be worse compared to the d-band center of the single guest atom. This is now clarified in the revised manuscript [page 3]. In the revised manuscript we have included the correlations between the binding energy and the d-band center for another system as well [Pt(111) surface], whose d-band is not completely full.
Changes made: 1. We have replaced the sentence "we first investigate correlation between BE H and the d-band center for the alloyed systems" by "we first investigate correlation between BE H and the d-band center for the alloyed systems. Note that, d-band centers are calculated from the d orbitals projected on the single guest atom only. We find that this way of calculating d-band center provides better correlation with other properties than d-band centers for the d orbitals projected on (i) the single guest atom plus it's 1st nearest neighbor shell or (ii) the whole slab [Topics in Catalysis 61, 462-474 (2018)]." on page 3 of the revised manuscript. 2. We have added Figure S2 by including also the system of Pt(111) surfaces which is reproduced as Figure R1 below. 2) Page 6: The manuscript may benefit from some discussion on the robustness of the SISSO models identified in Table 2. Does adding or removing a training data point lead to a different descriptor? If so, does the new descriptor still yield similar model behavior?
Response: The descriptor is robust and remains unchanged upon randomly removing one training data point. We have randomly kept out one data point for each model and have repeated the process 5 times to check the robustness of the descriptor. Moreover, for the optimal dimensionality, the same set of primary features is found during CV10 in 9, 8, and 8 cases for the SISSO models of BE H , E b , and SE, respectively. Also, new systems that were not included in the training set were used as test set to further confirm the high transferability of our model. Finally, some of the high-throughput screening selected high performance SAACs, including all the experimentally evidenced systems and our suggested top two best systems, are validated confirmed by density-functional theory calculations.
Changes made:
We have added the sentence "For the optimal dimensionality, the same set of primary features is found during CV10 in 9, 8, and 8 cases for the SISSO models of BE H , E b , and SE, respectively" on page 6 of the revised manuscript.
3) Page 6: Can you comment at all on the relative importance of the primary features included in the model relative to their impact on the overall prediction? This may lead to interesting general conclusions, like elucidating whether the primary features of the guest or host metal play a larger role for a given target property. Discussing the relative importance of guest vs. host metal features would be quite informative.
Response:
We thank the referee for this suggestion. In this work we highlight the importance of the combinations of the primary features rather than using each feature individually to describe the target properties. However keeping the referee's advice in mind, we have now introduced a novel general approach to the analysis of complex symbolic-regression models, based on the data mining approach called subgroup discovery. This have allowed us to uncover physical role of particular features, as well as relative role of guest versus host features.
Changes made:
We have added the following paragraphs on page 10 and 11 of the revised manuscript. "Although the SISSO models are analytic formulas, the corresponding descriptors are complex, reflecting the complexity of the relationship between the primary features and the target properties. While potentially interpretable, the models do not provide a straightforward way of evaluating relative role of different features in actuating desirable changes in target properties. To facilitate physical understanding of the actuating mechanisms, we apply the subgroup discovery (SGD) approach. 55-60 SGD finds local patterns in data that maximize a quality function. The patterns are described as an intersection (a selector) of simple inequalities involving provided features, e.g., (feature1<a1) AND (feature2>a2) AND... . The quality function is typically chosen such that it is maximized by subgroups balancing the number of data points in the subgroup, deviation of the median of the target property for the subgroup from the median for the whole data set, and the width of the target property distribution within the subgroup. 60 " "Here, we apply SGD in a novel context, namely as an analysis tool for symbolic regression models, including SISSO. The primary features that enter the complex SISSO descriptors of a given target property are used as features for SGD (see Table 2). The data set includes all 5200 materials and surfaces used in the high-throughput screening. The target properties are calculated using the obtained SISSO models. Five target properties are considered: ∆ + , SE, SE H , E b , |∆ |, and BE H . Since we are interested mainly in catalysts that are active at normal conditions, ∆ is calculated at T = 300 K. Our goal is to find selectors that minimize these properties within the subgroup. Such selectors describe actuating mechanisms for minimization of a given target property. For SE, the following best selector is found: (EC* ≤ -3.85 eV) AND (-3.36 eV < EC ≤ -0.01 eV) AND (IP ≥ 7.45 eV). The corresponding subgroup contains 738 samples (14% of the whole population), and the distribution of SE within the subgroup is shown in Figure S10. Qualitatively, the first two conditions imply that the cohesive energy of the host material is larger in absolute value than the cohesive energy of the guest material. Physically this means that bonding between host atoms is preferred over bonding between guest atoms and therefore over intermediate host-guest binding. This leads to the tendency of maximizing number of host-host bonds by pushing guest atom to the surface. This stabilization mechanism has been discussed in literature, 61 and here we confirm it by data analysis. In addition, we find that stability of SAACs requires that ionization potential of the guest atom is high. This can be explained by the fact that lower IP results in more pronounced delocalization of the s valence electrons of the guest atom and partial charge transfer to the surrounding host atoms. The charge transfer favors larger number of neighbors due to increased Madelung potential, and therefore destabilizes surface position of the guest atom.
We calculate SE H using SISSO models for SE and BE H [see equation (3) in the Methods section]. Therefore, SGD for SE H is performed using primary features appearing in the descriptors of both SE and BE H . The top found subgroup contains features related to binding of H to the host and guest metal atoms, e.g. (EB* < -5.75 eV) AND (EH* ≤ -2.10 eV) AND (EH ≥ -2.88 eV) AND (IP* ≤ 7.94 eV) AND (IP > 8.52 eV) AND (R ≥ 1.29 Å). However, the distribution of SE for this subgroup is very similar to the distribution of SE H , which means that the stability of guest atoms at the surface is weakly affected by H adsorption when the surface guest atoms are already very stable. The important effect of H adsorption is revealed when we find subgroups minimizing directly SE H -SE (in this case only primary features that appear in the SISSO descriptor of BE H are considered for SGD analysis). The top subgroup we found contains 72 samples (1.4% of the whole population) and is described by several degenerate selectors, in particular (-2.35 eV ≤ EH* ≤ -2.32 eV) AND (EC* > -2.73 eV) AND (EC < -5.98 eV) AND (H ≥ -5.12 eV). This is a very interesting and intuitive result. Distributions of SE H and SE for this subgroup are shown in Figure S11. The SE for all materials in the subgroup is above 0 eV. However, SE H is much closer to 0 eV, and is below 0 eV for a significant number of materials in this subgroup. The conditions on the cohesive energy of guest and host metals (very stable bulk guest metal and less stable bulk host metal) are reversed with respect to SE, i.e., adsorption of hydrogen affects strongly the systems where guest atom is unstable at the surface. This increases the reactivity of the guest atom towards an H atom. The condition (EH* ≥ -2.35 eV) selects materials for which interaction of H with a host atom is not too strong, so that H can bond with the guest atom and stabilize it at the surface. The condition (EH* ≤ -2.32 eV) makes the subgroup narrower, which further decreases median difference SE H -SE but has no additional physical meaning. The condition (H ≥ -5.12 eV) has a minor effect on the subgroup.
The corresponding subgroup contains 1974 samples (38% of the whole population), and the distribution of E b within the subgroup is shown in Figure S10. The selector implies that systems providing low barrier for H 2 dissociation and at the same time balanced binding of H atoms to the surface are characterized by (i) d-band center of the bulk guest metal around the Fermi level and (ii) d-band center of the host surface top layer below the Fermi level. This can be understood as follows.
Condition (i) implies that there is a significant d-electron density that can be donated to the adsorbed H 2 molecule, facilitating its dissociation. A very similar (apart from slightly different numerical values) condition appears in the selector for the best subgroup for E b target property alone [(-2.05 eV ≤ DC ≤ 1.46 eV) AND (EC* ≥ -6.33 eV)]. Condition (ii) implies that the surface d-band center is more than half filled, which provides additional electrons for transferring to the H 2 molecule, but without excessive binding, to minimize |∆ | in accordance with Sabatier principle. Indeed, several subgroups of strongly bound H atoms (minimizing BE H ) are described by selectors including condition DT* > -0.17, which is exactly opposite to condition (ii). Analysis of BE H and |∆ | also shows that the strong and intermediate binding of H atoms to the surface is fully controlled by the features of host material.
We note that SGD is capable of finding several alternative subgroups, corresponding to different mechanisms of actuating interesting changes in target properties. These subgroups have a lower quality according to the chosen quality function, but they still contain useful information about a particular mechanism. In fact, they can be rigorously defined as top subgroups under additional constraint of zero overlap (in terms of data points) with previously found top subgroups. Analysis of such subgroups can be a subject of future work. We also note that quality function used in SGD is a parameter and can affect the found subgroups. It should be chosen based on the physical context of the problem. Exploring the role of different factors in the quality function and taking into account proposition degeneracy (no or minor effect of different conditions in the selectors due to correlation between the features) allows us to develop an understanding that may not be possible without the SGD analysis." 4) Page 7: It seems the authors identify Tc alloys as promising SAACs. It's worth noting that there may be other health/safety considerations when using Tc in catalytic applications due to the fact that all Tc isotopes are radioactive?
Response:
We agree with the referee that health/safety considerations are very important for catalytic applications. This point is now duly mentioned on page 11 of the revised manuscript.
Changes made:
We have changed the sentence "Considering stability, activity, and abundance, two discovered best candidates Mn/Ag(111) and Pt/Zn(0001) are highlighted in Figure 4" to "Considering stability, activity, abundance, and health/safety, two discovered best candidates Mn/Ag(111) and Pt/Zn(0001) are highlighted in Figure 4" on page 11 of the revised manuscript. 5) Page 8: I believe that the manuscript would benefit from an expanded discussion of Figure 3 that explains the general trends that emerge from the high-throughput screening results (e.g., in general, what types of guest atoms yield SAACs with low hydrogen dissociation barriers? What guest/host combinations lead to small segregation energies and why in terms of atomic radii size or other features?) Response: We thank the referee for these suggestions. In the revised manuscript, we apply the subgroup discovery (SGD) approach to evaluate relative role of different features in actuating desirable changes in target properties and to facilitate physical understanding of the actuating mechanisms. Please referee to comment 3) for detailed discussion.
6) Minor Comments Main Text
• Figure 1: If you change solid red circles to be different symbols for hollow bridge, bridge, top that would be more information-rich and potentially informative (just a suggestion).
• Table S1 caption. "the surface-based primary features were calculated using the slab unit cell consisting of one atom per atomic layer." Should be "The".
• Page 6: The text indicates that the primary features DC, DC*, DT, DT*, DS, and DS* appear in every dimension of the descriptors for hydrogen binding energy and dissociation barrier. However, based on Table 1, it is unclear what the DT and DS primary features are as opposed to the DT* and DS* primary features. From reading the SI, it seems * denotes host metal from guest atom feature. I think this * notation can be clarified in Table 1.
• Page 9: "Higher stability and efficiency than the reported ones, making them perfectly optimized for practical applications." Perfectly optimized seems to be a strong choice of words here. Perhaps remove the word "perfectly". Minor Comments on Supporting Information • Page 1: "Spin-polarization effects are tested for and included where appropriate." Is it noted somewhere for which spin polarization effects are included? This is a vague statement and could perhaps be made more explicit • Figure S1 caption. "bcc(110) e," should be bcc(110) (e) • Table S3: "Binding energy of host metal dimers", So this is a dimer energy for A(g) + A(g) -> A2(g)? Could perhaps be clarified.
• Font size for the captions in Figures S3-S5 are smaller than the other Figure S captions (i.e., font size 10 vs. 12).
• Table S5: "Number of system with the predicted and calculated segregation energy meet the same condition of SE < kTln(10) (Nmeet)…" Perhaps it should read as "Number of systems with the predicted and calculated segregation energies that meet the same condition…" Response: We thank the referee for pointing these issues/errors. We have modified all these issues/errors accordingly in the revised manuscript and supporting information.
Reviewer 2:
The manuscript presents machine learning models of single atom catalysts and screening procedure for design of hydrogenation catalysts based on this new type of alloys emerged in recent years. The features designed are easily available properties that are tabulated including electronic structure, bulk properties, etc. The target properties include the binding energy, activation barrier and the segregation. Those properties are crucial for screening high performance hydrogenation catalysts. While the work is thoroughly done in those aspects, this does not reach the standard of Nat Comm.
Response:
We thank the referee for the critical comments. In the revised manuscript we have applied the subgroup discovery (SGD) approach to evaluate relative role of different features in actuating desirable changes in target properties and to facilitate physical understanding of the actuating mechanisms. The combined SISSO and SGD data analytic approaches are novel which provide us not only predictive models but also new understanding. This allows us to go beyond the well-established d-band center theory, scaling relationships, and the Brønsted-Evans-Polanyi relationship.
1) The novelty of the approach is lacking. Compressed sensing is used recently in M. Andersen, S. V. Levchenko, M. Scheffler, K. Reuter, Beyond Scaling Relations for the Description of Catalytic Materials. ACS Catal. 9, 2752-2759 (2019).
Response:
We would like to emphasize the advancement and novelty of our work as follows: (i) The aim of our work is to predict potential SAACs for hydrogenation reactions, which are not only active but also stable and thus suitable for several practical applications. It is noteworthy that SAACs has attracted significant research interests lately due to their immense potential in cost-effective large-scale industrial usages. Thus, while the methodological workhorse of this study, i.e., SISSO with DFT inputs, has already been discussed before, the knowledge and understanding presented here are novel and suitable for Nat. Comm.. (111) and Pt/Zn (0001)) as particularly promising candidates. Moreover, in the updated manuscript we have also developed a novel strategy of analyzing complex models obtained by symbolic regression, based on the data-mining approach subgroup discovery (SGD). (iii) Besides the thermodynamic properties (i.e., binding energy, adsorption energy, and adsorption free energy) used in previous work [Nature Catalysis 1, 339-348 (2018); Nature Catalysis 1, 696-703 (2018); Nature 581, 178-183 (2020)] to describe the performance of catalysts, we have also included the kinetic property (energy barrier) and a stability indicator. As a result, our models both explain well the experimental results and enable design of high-performance catalysts with not only higher activity and but also stability.
2) While the SISSO with cross validation is reasonably accurate for training a small dataset, its generalization to new systems is still the biggest problems for all current learning framework. Active learning approach was used to tackle this problem (K. Tran, Z. W. Ulissi, Active learning across intermetallics to guide discovery of electrocatalysts for CO 2 reduction and H 2 evolution. Nature Catalysis. 1, 696-703 (2018).), while a large amount of calculations are required. The current study used only ~300 datapoints for training and extend the model to ~5000 space without validation of model prediction.
Response: Indeed, the referee is correct that active learning can ensure reliability of the model. However, combination of SISSO with active learning is a non-trivial task, because typical SISSO model construction is computationally expensive. To address referee's concern, we analyze in more detail the cross-validation results, in particular the stability of descriptor selection during the cross-validation. In addition, we validate our predictions by performing DFT calculations for some of the identified high-performance SAACs, including all the experimentally studied systems and our suggested top two best systems. We would also like to mention that the number of data points/systems in our training set is almost three times larger than that in the study of oxide-supported single-atom catalyst systems studied by Nolan and co-workers using the compressed-sensing LASSO approach [Nature Catalysis. 1, 531-539 (2018)].
Changes made:
We have added the sentence "For the optimal dimensionality, the same set of primary features are selected is found during CV10 in 9, 8, and 8 cases for the SISSO models of BE H , E b , and SE, respectively" on page 6 of the revised manuscript.
3) The criteria for screening catalysts used in this study is arbitrary. Without detailed kinetics, the approach can only provide a rough screening of candidate materials.
Response:
We agree with the referee that detailed kinetics would improve reliability of the predictions. However, this is currently not feasible. Nevertheless, we do include a kinetic property (dissociation barrier), while previously only thermodynamic properties (binding energy, adsorption energy, adsorption free energy) were considered [Nature Catalysis 1, 339-348 (2018) Response: Actually, adsorption energies of H on metal surfaces are not small for some systems. For example, at room temperature and partial pressure of H 2 = 1 atm, free energy of adsorption for the experimentally established Pt/Ag(111) system is -0.23 eV and the H adatom induced segregation energy change is as high as 0.49 eV.
Changes made:
The following was added to the main text: "We note that a machine-learning study of stability of single-atom metal alloys has been recently reported [Topics in Catalysis (2020) 63:728-741]. However, our analysis takes into account effects of adsorbates on the segregation energy, which has not been done previously." 5) The most fundamental problem of this study and the approach in general is their lacking of understanding the uniqueness of single atom alloys. Although the SISSO method comes up formula in reduced feature space, the physics is missing. The message to the community by the study is rather incremental while does not provide a way forward to tackle all those issues.
Response:
We are grateful to referee for this comment, as it shows that important implications of our study were unclear. Our results show that it is exactly the uniqueness of SAACs that requires advanced data analysis techniques to predict their properties. As we demonstrate, the easy-to-understand correlations that work well for simple metal surfaces are not applicable to SAACs. We use a methodology (compressed sensing) that not only provides a model based on easily accessible features, but also identifies the level of complexity of the problem in terms of those features. Nevertheless, we admit that additional data analysis that identifies common features of good SAACs would be useful. Therefore, we applied the subgroup discovery (SGD) approach to evaluate relative role of different features in actuating desirable changes in target properties and to facilitate physical understanding of the actuating mechanisms.
Changes made:
We have added the following paragraphs on page 10 and 11 of the revised manuscript. "Although the SISSO models are analytic formulas, the corresponding descriptors are complex, reflecting the complexity of the relationship between the primary features and the target properties. While potentially interpretable, the models do not provide a straightforward way of evaluating relative role of different features in actuating desirable changes in target properties. To facilitate physical understanding of the actuating mechanisms, we apply the subgroup discovery (SGD) approach. 55-60 SGD finds local patterns in data that maximize a quality function. The patterns are described as an intersection (a selector) of simple inequalities involving provided features, e.g., (feature1<a1) AND (feature2>a2) AND... . The quality function is typically chosen such that it is maximized by subgroups balancing the number of data points in the subgroup, deviation of the median of the target property for the subgroup from the median for the whole data set, and the width of the target property distribution within the subgroup. 60 " "Here, we apply SGD in a novel context, namely as an analysis tool for symbolic regression models, including SISSO. The primary features that enter the complex SISSO descriptors of a given target property are used as features for SGD (see Table 2). The data set includes all 5200 materials and surfaces used in the high-throughput screening. The target properties are calculated using the obtained SISSO models. Five target properties are considered: ∆ + , SE, SE H , E b , |∆ |, and BE H . Since we are interested mainly in catalysts that are active at normal conditions, ∆ is calculated at T = 300 K. Our goal is to find selectors that minimize these properties within the subgroup. Such selectors describe actuating mechanisms for minimization of a given target property. For SE, the following best selector is found: (EC* ≤ -3.85 eV) AND (-3.36 eV < EC ≤ -0.01 eV) AND (IP ≥ 7.45 eV). The corresponding subgroup contains 738 samples (14% of the whole population), and the distribution of SE within the subgroup is shown in Figure S10. Qualitatively, the first two conditions imply that the cohesive energy of the host material is larger in absolute value than the cohesive energy of the guest material. Physically this means that bonding between host atoms is preferred over bonding between guest atoms and therefore over intermediate host-guest binding. This leads to the tendency of maximizing number of host-host bonds by pushing guest atom to the surface. This stabilization mechanism has been discussed in literature, 61 and here we confirm it by data analysis. In addition, we find that stability of SAACs requires that ionization potential of the guest atom is high. This can be explained by the fact that lower IP results in more pronounced delocalization of the s valence electrons of the guest atom and partial charge transfer to the surrounding host atoms. The charge transfer favors larger number of neighbors due to increased Madelung potential, and therefore destabilizes surface position of the guest atom.
We calculate SE H using SISSO models for SE and BE H [see equation (3) in the Methods section]. Therefore, SGD for SE H is performed using primary features appearing in the descriptors of both SE and BE H . The top found subgroup contains features related to binding of H to the host and guest metal atoms, e.g. (EB* < -5.75 eV) AND (EH* ≤ -2.10 eV) AND (EH ≥ -2.88 eV) AND (IP* ≤ 7.94 eV) AND (IP > 8.52 eV) AND (R ≥ 1.29 Å). However, the distribution of SE for this subgroup is very similar to the distribution of SE H , which means that the stability of guest atoms at the surface is weakly affected by H adsorption when the surface guest atoms are already very stable. The important effect of H adsorption is revealed when we find subgroups minimizing directly SE H -SE (in this case only primary features that appear in the SISSO descriptor of BE H are considered for SGD analysis). The top subgroup we found contains 72 samples (1.4% of the whole population) and is described by several degenerate selectors, in particular (-2.35 eV ≤ EH* ≤ -2.32 eV) AND (EC* > -2.73 eV) AND (EC < -5.98 eV) AND (H ≥ -5.12 eV). This is a very interesting and intuitive result. Distributions of SE H and SE for this subgroup are shown in Figure S11. The SE for all materials in the subgroup is above 0 eV. However, SE H is much closer to 0 eV, and is below 0 eV for a significant number of materials in this subgroup. The conditions on the cohesive energy of guest and host metals (very stable bulk guest metal and less stable bulk host metal) are reversed with respect to SE, i.e., adsorption of hydrogen affects strongly the systems where guest atom is unstable at the surface. This increases the reactivity of the guest atom towards an H atom. The condition (EH* ≥ -2.35 eV) selects materials for which interaction of H with a host atom is not too strong, so that H can bond with the guest atom and stabilize it at the surface. The condition (EH* ≤ -2.32 eV) makes the subgroup narrower, which further decreases median difference SE H -SE but has no additional physical meaning. The condition (H ≥ -5.12 eV) has a minor effect on the subgroup.
The corresponding subgroup contains 1974 samples (38% of the whole population), and the distribution of E b within the subgroup is shown in Figure S10. The selector implies that systems providing low barrier for H 2 dissociation and at the same time balanced binding of H atoms to the surface are characterized by (i) d-band center of the bulk guest metal around the Fermi level and (ii) d-band center of the host surface top layer below the Fermi level. This can be understood as follows. Condition (i) implies that there is a significant d-electron density that can be donated to the adsorbed H 2 molecule, facilitating its dissociation. A very similar (apart from slightly different numerical values) condition appears in the selector for the best subgroup for E b target property alone [(-2.05 eV ≤ DC ≤ 1.46 eV) AND (EC* ≥ -6.33 eV)]. Condition (ii) implies that the surface d-band center is more than half filled, which provides additional electrons for transferring to the H 2 molecule, but without excessive binding, to minimize |∆ | in accordance with Sabatier principle. Indeed, several subgroups of strongly bound H atoms (minimizing BE H ) are described by selectors including condition DT* > -0.17, which is exactly opposite to condition (ii). Analysis of BE H and |∆ | also shows that the strong and intermediate binding of H atoms to the surface is fully controlled by the features of host material.
We note that SGD is capable of finding several alternative subgroups, corresponding to different mechanisms of actuating interesting changes in target properties. These subgroups have a lower quality according to the chosen quality function, but they still contain useful information about a particular mechanism. In fact, they can be rigorously defined as top subgroups under additional constraint of zero overlap (in terms of data points) with previously found top subgroups. Analysis of such subgroups can be a subject of future work. We also note that quality function used in SGD is a parameter and can affect the found subgroups. It should be chosen based on the physical context of the problem. Exploring the role of different factors in the quality function and taking into account proposition degeneracy (no or minor effect of different conditions in the selectors due to correlation between the features) allows us to develop an understanding that may not be possible without the SGD analysis."
Reviewer 3:
The authors report the use of modern data analytics towards the reliable prediction of activity and stability of dilute alloy "single atom catalysts" for hydrogenation. The topic of particular interest as single atom catalysts have made massive strides for oxidation reactions but have had limited success for reductions particularly due to lack of activity and/or abysmal stability.
1) The strength of the authors approach is that it addresses catalyst screening beyond the simple approximation BEP, d-band center etc. etc. etc. These concepts are embedded in the psyche of computational catalysis so deep that we forget they are simple models and, in many instances, to too simple for quantitative predictions-but excellent for rationalizations on small data sets.
Response:
We thank the reviewer for this comment. It correctly outlines the important aspect of our work.
2) The authors show that by assembling a large number of atomic, bulk and allow descriptors (table1) they are able to perform a high dimensional correlation with the ab initio data to yield property predictions FAR more accurate than the existing simple concepts. On the one hand this is a great step forward for screening studies on the other hand if I have a more complex fitting function, I do expect a better fit. The one worry I have is this then become a brute force approach without the intellectual understanding that can be provided by a simple model. In this respect it might have been more intellectually pleasing for the authors to consider if there was a smaller subset of parameters (2-3) that might do a reasonable job (better than linear fits but not the full-blown set) which might hint at a simpler model. As is, the approach is fine I do worry about both overfitting/underfitting of data but do believe the authors have covered this ground adequately.
Response: This is a very important comment that overlaps with similar concerns of the other referees. Indeed, we perform a careful cross-validation of our models and validate them on a test set never used for training, to ensure models' predictive power. However, the training and test sets are unavoidably limited, and there is never a guarantee that we capture all important physical variations present in the larger data set. This makes our mind crave for additional consistency check that we call "physical understanding". It justifies extrapolation of the models, possibly even to a different class of systems. Such extrapolation can be very useful, but also very misleading, as our study demonstrates. Nevertheless, we admit that additional data analysis that identifies common features of good SAACs would be useful. Therefore, we applied the subgroup discovery (SGD) approach to evaluate relative role of different features in actuating desirable changes in target properties and to facilitate physical understanding of the actuating mechanisms.
Changes made:
We have added the following paragraphs on page 10 and 11 of the revised manuscript. "Although the SISSO models are analytic formulas, the corresponding descriptors are complex, reflecting the complexity of the relationship between the primary features and the target properties. While potentially interpretable, the models do not provide a straightforward way of evaluating relative role of different features in actuating desirable changes in target properties. To facilitate physical understanding of the actuating mechanisms, we apply the subgroup discovery (SGD) approach. 55-60 SGD finds local patterns in data that maximize a quality function. The patterns are described as an intersection (a selector) of simple inequalities involving provided features, e.g., (feature1<a1) AND (feature2>a2) AND... . The quality function is typically chosen such that it is maximized by subgroups balancing the number of data points in the subgroup, deviation of the median of the target property for the subgroup from the median for the whole data set, and the width of the target property distribution within the subgroup. 60 " "Here, we apply SGD in a novel context, namely as an analysis tool for symbolic regression models, including SISSO. The primary features that enter the complex SISSO descriptors of a given target property are used as features for SGD (see Table 2). The data set includes all 5200 materials and surfaces used in the high-throughput screening. The target properties are calculated using the obtained SISSO models. Five target properties are considered: ∆ + , SE, SE H , E b , |∆ |, and BE H . Since we are interested mainly in catalysts that are active at normal conditions, ∆ is calculated at T = 300 K. Our goal is to find selectors that minimize these properties within the subgroup. Such selectors describe actuating mechanisms for minimization of a given target property. For SE, the following best selector is found: (EC* ≤ -3.85 eV) AND (-3.36 eV < EC ≤ -0.01 eV) AND (IP ≥ 7.45 eV). The corresponding subgroup contains 738 samples (14% of the whole population), and the distribution of SE within the subgroup is shown in Figure S10. Qualitatively, the first two conditions imply that the cohesive energy of the host material is larger in absolute value than the cohesive energy of the guest material. Physically this means that bonding between host atoms is preferred over bonding between guest atoms and therefore over intermediate host-guest binding. This leads to the tendency of maximizing number of host-host bonds by pushing guest atom to the surface. This stabilization mechanism has been discussed in literature, 61 and here we confirm it by data analysis. In addition, we find that stability of SAACs requires that ionization potential of the guest atom is high. This can be explained by the fact that lower IP results in more pronounced delocalization of the s valence electrons of the guest atom and partial charge transfer to the surrounding host atoms. The charge transfer favors larger number of neighbors due to increased Madelung potential, and therefore destabilizes surface position of the guest atom.
We calculate SE H using SISSO models for SE and BE H [see equation (3) in the Methods section]. Therefore, SGD for SE H is performed using primary features appearing in the descriptors of both SE and BE H . The top found subgroup contains features related to binding of H to the host and guest metal atoms, e.g. (EB* < -5.75 eV) AND (EH* ≤ -2.10 eV) AND (EH ≥ -2.88 eV) AND (IP* ≤ 7.94 eV) AND (IP > 8.52 eV) AND (R ≥ 1.29 Å). However, the distribution of SE for this subgroup is very similar to the distribution of SE H , which means that the stability of guest atoms at the surface is weakly affected by H adsorption when the surface guest atoms are already very stable. The important effect of H adsorption is revealed when we find subgroups minimizing directly SE H -SE (in this case only primary features that appear in the SISSO descriptor of BE H are considered for SGD analysis). The top subgroup we found contains 72 samples (1.4% of the whole population) and is described by several degenerate selectors, in particular (-2.35 eV ≤ EH* ≤ -2.32 eV) AND (EC* > -2.73 eV) AND (EC < -5.98 eV) AND (H ≥ -5.12 eV). This is a very interesting and intuitive result. Distributions of SE H and SE for this subgroup are shown in Figure S11. The SE for all materials in the subgroup is above 0 eV. However, SE H is much closer to 0 eV, and is below 0 eV for a significant number of materials in this subgroup. The conditions on the cohesive energy of guest and host metals (very stable bulk guest metal and less stable bulk host metal) are reversed with respect to SE, i.e., adsorption of hydrogen affects strongly the systems where guest atom is unstable at the surface. This increases the reactivity of the guest atom towards an H atom. The condition (EH* ≥ -2.35 eV) selects materials for which interaction of H with a host atom is not too strong, so that H can bond with the guest atom and stabilize it at the surface. The condition (EH* ≤ -2.32 eV) makes the subgroup narrower, which further decreases median difference SE H -SE but has no additional physical meaning. The condition (H ≥ -5.12 eV) has a minor effect on the subgroup.
The corresponding subgroup contains 1974 samples (38% of the whole population), and the distribution of E b within the subgroup is shown in Figure S10. The selector implies that systems providing low barrier for H 2 dissociation and at the same time balanced binding of H atoms to the surface are characterized by (i) d-band center of the bulk guest metal around the Fermi level and (ii) d-band center of the host surface top layer below the Fermi level. This can be understood as follows. Condition (i) implies that there is a significant d-electron density that can be donated to the adsorbed H 2 molecule, facilitating its dissociation. A very similar (apart from slightly different numerical values) condition appears in the selector for the best subgroup for E b target property alone [(-2.05 eV ≤ DC ≤ 1.46 eV) AND (EC* ≥ -6.33 eV)]. Condition (ii) implies that the surface d-band center is more than half filled, which provides additional electrons for transferring to the H 2 molecule, but without excessive binding, to minimize |∆ | in accordance with Sabatier principle. Indeed, several subgroups of strongly bound H atoms (minimizing BE H ) are described by selectors including condition DT* > -0.17, which is exactly opposite to condition (ii). Analysis of BE H and |∆ | also shows that the strong and intermediate binding of H atoms to the surface is fully controlled by the features of host material.
We note that SGD is capable of finding several alternative subgroups, corresponding to different mechanisms of actuating interesting changes in target properties. These subgroups have a lower quality according to the chosen quality function, but they still contain useful information about a particular mechanism. In fact, they can be rigorously defined as top subgroups under additional constraint of zero overlap (in terms of data points) with previously found top subgroups. Analysis of such subgroups can be a subject of future work. We also note that quality function used in SGD is a parameter and can affect the found subgroups. It should be chosen based on the physical context of the problem. Exploring the role of different factors in the quality function and taking into account proposition degeneracy (no or minor effect of different conditions in the selectors due to correlation between the features) allows us to develop an understanding that may not be possible without the SGD analysis." 3) Finally, then the result of this study is that using their model they can rapidly predict the results of DFT calculations and use that data to make predictions about activity and stability based on simple energetic parameters such as presented in Figure 4. In my opinion this is the most important plot in the whole paper and the authors did not really deal with its ramifications very well. The wisdom in single atom catalysts (particularly for hydrogenation) is that the more active the species the less stable if will be-hence the scarcity of single atoms (dilute alloys) that are reported. If the authors are correct there is a large abundance of materials far in the lower right-hand corner (active and stable) that should break this trend whereas those that do exist are mostly in the upper right-hand corner (active but less stable). This is the most significant discovery/prediction in the paper as far as I am concerned, and the authors barely comment on it. Sadly, a follow-on experimental study making targets and validating the prediction would be a breakthrough and this is also not done.
Response: There seems to be a misunderstanding regarding Fig. 4. The most active and stable materials are in the lower LEFT-hand corner. Just as the referee points out, this corner is scarcely populated compared to the whole area covered by all calculated materials. However, this does not mean there are no materials that can be better than the experimentally tested ones. To clarify this aspect, we have now added a discussion to the main text and new Figure S9 in the revised supporting information, which is reproduced as Figure R1 below. Figure R1. Stability vs. activity map for flat SAACs surfaces at T=298 K and p=1 atm. The SE on y-axis represents stability and activity parameter ∆ + is shown on x-axis.
Changes made: 1) We have added the sentences: "As expected, stability and activity are inversely related, which can be seen from the negative slope of the general trend in Figure 8 (showing selected materials) and Figure S9 (showing all explored materials), as well as a cut-off in population of the lower left-hand corner of these plots. Nevertheless, there are several materials that are predicted to be better SAACs than the so-far reported ones." on page 11 of the revised manuscript. 2) We added Figure S9 in the revised supporting materials. 4) Sadly, a follow-on experimental study making targets and validating the prediction would be a breakthrough and this is also not done.
Response: This work was conceived as a theoretical one. We are happy to share methodology and predictions with the community as soon as possible. We very much hope that our findings will encourage experimental groups to validate our predictions.
The authors have greatly expanded their work based on the reviewer comments. Importantly, they now utilize a data mining algorithm called Subgroup Discovery to analyze their SAAC dataset in combination with their SISSO model. This added analysis enables the authors to give much more satisfying and general insights regarding the stability and activity of the SAACs, which should prove useful for the catalysis community. Additionally, Subgroup Discovery has not been used yet in the catalysis/surface science fields (and SISSO algorithm has only been used once before in catalysis field to my knowledge), thus this work also introduces cutting-edge data science tools to the broader scientific community. Therefore, this paper should be of broad interest to multiple communities. I believe the work is suitable for publication.
Reviewer #2 (Remarks to the Author): Authors addressed most of the comments. However, the physical insights by subgroup discovery is rather limited. I stick to my opinion that this work is not a significant step toward ML method itself or SAAC discovery. It might be appropriate to a more specialized catalysis journal. 1. The SISSO machine learning method employed in this study is not new. With the same set of features, a regular neural network can be more easily trained and coupled with active learning. With existing alloy database published in community, a convolutional neural net can also be used since the local environment of single atom alloys is analogous to the traditional fcc-type alloys, e.g. A3B, in the first coordination shell. In term of physical interpretation, they are all black-box models. SISSO can give a formula instead, although its direct understanding by a catalysis expert is still not there. The formula can be considered as symbolic regression rather than physical models. Interpreting black-box models are not necessarily providing physical insights that can be translated to design. 2. Subgroup discovery is a half-way approach to extract conditions of features optimizing a defined quality function. It is monte carlo based algorithm. The identified boundary values will depend on runs and hyperparameters. The approach has been used in materials science and catalysis. It is overstated in terms of novelty in the context. The rule identified by the method is convoluted rather than being insightful. 3. The design space of SAAs is relatively small compared to complex alloys. The indication in abstract for hundreds of thousands is misleading. 4. It says the energy BEH and the d-band center and (b) the H2 dissociation energy barrier Eb and the H2 dissociation reaction energy for Pt(111) based SAACs. But the (b) panel is missing. 5. It claimed a step away from the d-band theory, BEP, and scaling relations. While machine learning models can be considered as a further step away from the d-band center type of theory level, it is not fair to say that for the original d-band theory since machine learning models are regression based only. It is not close to go beyond BEP and scaling relations in this work since it simply does not consider full reaction pathways. The claim is irrelevant. 6. The d-band center of the bonding guest atom is obvious choice for atop adsorption, but not quite for hollow, bridge. The averaged d-band center of a collection of atoms in the revision is not the right since the coupling strength decays rapidly with distance.
Reviewer #3 (Remarks to the Author): After carefully considering the previous reviewers' comments and the revised manuscript I can say most of the technical concerns I have about this work are resolved and I may have even softened (but not changed) my stance about not really bringing new understanding. I still do not like these screening/data analytics papers for the sake of data analytics but in this case the decision point for me is that the authors predict many new catalysts so, in principle, the way to test and validate this model is on the table.
IF the authors are right then this is a breakthrough, if they are wrong … I think this may well be worth publishing in Nature Comm and I look forward to seeing this work validated (or not). The text does require significant proof reading and improving on the English, particularly the new parts and should be proof read careful before it is published. | 13,553.4 | 2020-05-29T00:00:00.000 | [
"Chemistry"
] |
An analysis of the literature on humanitarian logistics and supply chain management: paving the way for future studies
The area of disaster management has become increasingly prominent in a context of frequent political, religious change and conflict, and within it, the field of knowledge on humanitarian logistics and supply chain management (HLSCM) has attracted attention from a variety of stakeholders, such as scholars, practitioners and policy makers. Consequently, humanitarian logistics and supply chain research has seen a significant increase in the quan-B tity of works emerging, particularly journal articles. In this context, we aim to systematize the selected contemporary literature on humanitarian logistics and supply chain management. After identifying the relevant literature on Scopus and Web of Science, we chart a systematization of this body of knowledge by applying a system of codes and classifications to it. Based on research gaps found, we propose an original research agenda for further developing the humanitarian logistics and supply chain management field, as suggested avenues for future research.
tity of works emerging, particularly journal articles. In this context, we aim to systematize the selected contemporary literature on humanitarian logistics and supply chain management. After identifying the relevant literature on Scopus and Web of Science, we chart a systematization of this body of knowledge by applying a system of codes and classifications to it. Based on research gaps found, we propose an original research agenda for further developing the humanitarian logistics and supply chain management field, as suggested avenues for future research.
Keywords Humanitarian logistics · Humanitarian supply chain · Humanitarian operations management · Sustainable operations · Disaster relief · Sustainable supply chain
Introduction
In this article, we aim to systematize selected contemporary literature on humanitarian logistics and humanitarian supply chain management (HLSCM), which has attracted a considerable amount of attention from scholars, practitioners and policy makers alike (Kovacs and Spens 2010). Rising interest in this field has been justified by a myriad of humanitarian challenges that society has faced over the past few years (Dubey and Gunasekaran 2015), examples of which include natural disasters and armed conflicts among others. As the subject of humanitarian logistics and supply chain gains more relevance the literature surrounding it has significantly increased, meaning a systematization of this literature now seems appropriate, as does identifying research gaps for future studies.
Consequently, the contribution of this article is to provide a systematization of selected contemporary works in the relevant literature on humanitarian logistics and supply chain that have been published by journals indexed in Scopus and Web of Science. Inspired by procedures adopted by highly cited literature reviews (e.g. Lage Junior and Godinho Filho 2010), this paper delivers: • An identification of the main articles on the field of humanitarian logistics and supply chain indexed in Scopus and Web of Science; • A classification of the relevant literature above based on a variety of characteristics; and • An original research agenda for future studies, based on gaps found in the current stateof-the-art body of knowledge.
This article is organized as follows. After our introduction (Sect. 1), we present the research methods we used (Sect. 2) to frame the classification and coding system we used to scrutinize the relevant literature (Sect. 3). We then briefly detail our conceptual background in Sect. 4, while Sect. 5 sheds light on our results and subsequent discussion. We conclude in Sect. 6 by presenting an original, new research agenda forward in this field.
Research methods
A literature review has as its main objective to show the central structures of a subject or topic, with the aim of identifying research progress that has been made, as well as literature gaps that remain (Hart 1999;Baker 2000). In this context, we use and apply the methodology and steps proposed by Lage Junior and Godinho Filho (2010) and subsequently tested by Jabbour (2013) and Mariano et al. (2015) in our literature review. As such, we observe the following steps: • First: Identifying the main articles available on the subject in academic databases and considering the principal keywords related to the topic; • Second: Screening the articles found in the first step in order to eliminate articles outside the subject area; • Third: Developing and applying a classification system to identify central structures of the subject or topic considered; • Fourth: Providing a literature review using the classification system elaborated in the third step above; and • Fifth: Identifying gaps, opportunities and challenges regarding future research studies in this area.
For our first step, the main articles about humanitarian logistics we identified during August and September 2016 contained the keywords of "Logistics", "Supply Chain Management" and "Humanitarian" in the academic databases of Scopus and Web of Science. We chose the two databases because they both compile data about abstracts and citations of scientific journals, books and conference proceedings from fourteen of the largest publishers in the world. Here, we used different combinations of keywords to increase the scope and reach of our search. After this first step, we performed screening with the objective of identifying all articles outside of the scope of our identified topics. Consequently, our final database comprised 87 articles that, in turn, were classified based on the coding presented in Table 1, and described in the following section. Finally, taking into account our proposed coding, a descriptive statistic was used to identify the main gaps remaining from the literature, as shown in Sect. 5.
Classification and coding
Considering the method proposed by Lage Junior and Godinho Filho (2010) and Jabbour (2013), we defined a set of classifications to organize the identified articles into specifics groups. This classification set included eight categories numbered from 1 to 8, and for each of them a group of coding was defined and used via letters from A to K. For example, a code of 2B means that this article is classified in section B of category 2, and such articles could be classified in one or more our categories above. Taking such points into account, our eight classifications are briefly defined below as: • Classification 1-Economic context: The degree of economic maturity of the countries in which the study occurred, coded from Ato D; • Classification 2-Focus: The main theme considered in any study, coded from A to C; • Classification 3-Method: The method used in a study, classified and coded from A to K; • Classification 4-Type of disaster: The different forms and durations of disasters considered in the study analysed, coded from A to E; • Classification 5-Phase of the disaster relief: The more important phase of disaster relief addressed in a study, coded from A to D; • Classification 6-Type of humanitarian organization: The types of humanitarian organization addressed by the author of the study analysed, coded from A to E; • Classification 7-Region of authorship: The region of authorship, coded from A to E.
This category also considers whether there is a significant volume of academic output for a specific region by the authors; and All the descriptions of our classifications and codes are shown in Table 1, and the research articles considered in our literature review are detailed in Table 2.
A brief conceptual foundation of humanitarian logistics and supply chains
Humanitarian supply chain management (HLSCM) is intimately tied to the broader context of disaster management which itself is a subject of much contemporary popularity. For example, recent works such as Yang et al. (2014), have used Data Envelopment Analysis to build an emergency response network for earthquakes, Anparasan and Lejeune (2017) have proposed a model of emergency responses to epidemics that can be used in countries that have limited resources, and Sushil (2017) has proposed the use of a qualitative and interpretative framework called SAP-LAP in the context of disaster management. Two subjects in HLSCM can be considered very important as they have been widely studied, namely: humanitarian supply chains (HSC) and humanitarian logistics (HL). During the past decade, HSC has received greater attention among academics and practitioners (Kovacs and Spens 2010), and many HSC works are trying to better explore this subject. Examples include: coordination of HSC (Balcik et al. 2010;Akhtar et al. 2012), specifically studying the drivers and barriers of the coordination of HSCs (Kabra and Ramesh 2015b; , and developing frameworks to improve HSC implementation (John et al. 2012).
However, it has been observed from the literature, that most HSCs are unstable, unpredictable, and slow to respond to the needs of affected people (Yadav and Barve 2015), especially when related to those disasters. Such disasters not only disturb the normal functioning of society, but can also leave huge and negative impacts on the people directly or indirectly impacted by them. It is not possible to predict natural disasters, but actions can be taken to deal with such complex crises and reduce the impact of natural disasters on people and society (Kovács and Spens 2007;Kabra and Ramesh 2015b;. Wassenhove (2006) defines a disaster as a "disruption that physically affects a system as a whole and threatens its priorities and goals", and considers HSC a central point for at least three reasons, as HSC: (i) serves as a bridge between disaster preparedness and response, between procurement and distribution; (ii) is crucial to the effectiveness and speed of response for major humanitarian programs, such as health, food, shelter, water and sanitation; and, (iii) can be one of the most expensive parts of relief efforts and operations, and thus deserves special attention.
In this sense, a very important concept for HSC is resilience, which is the ability of a supply chain to absorb the impacts of any rupture caused by a disaster and to recover from it. Here, DuHadway et al. (2017) caused by intentional or unintentional breaks in HSCs, and Kaur and Singh (2016), in turn, have studied relationships between such resilient supply chains and sustainability outcomes. In a disaster situation, logistics can be considered a critical activity that differentiates between a successful and a failed relief operation (Cozzolino et al. 2012). Indeed, the impact of a disaster is mostly seen on human mortality and their livelihood perspective, but a huge loss to economies is also associated with such disasters too (see Yadav and Barve 2015). From this perspective, disaster management and relief aid require complex logistical activities, as the resources they need are rarely available at the location of the disaster. These logistical activities are generally referred to as HL (Kunz et al. 2014).
An important point raised by Oloruntoba et al. (2016) is the fact that HLSCM still lacks theoretical development, and they suggest using the theories of behavioral and organizational economic internationalization to further progress it. Coles et al. (2017) emphasize the gap between theory and practice in the area of disaster management, and to remedy such failure have conducted a review of the International Federation of Red Cross and Red Crescent Societies (IFRC) to do so.
Results and discussion
The purpose of this section is to understand the research results obtained from our classification of the articles described in Table 2. Our results are presented through use of the defined categories detailed in Table 1 in the following subsections of: economic context, focus, methods, disaster type, relief phase, organizational type, author region, article purpose and interesting research gaps. We begin with economic context, and then detail each of the subsections above in turn.
Economic context
Based on the articles that were classified in categories 1A, 1B, 1C and 1D shown in Fig. 1, most such studies did not analyse a specific region (52%), and those which did investigate a region primarily analysed non-mature economies (33%). According to The United Nations Office for Disaster Risk Reduction (UNISDR 2016), the main countries affected by disasters in terms of people killed between 1992 and 2012 were Haiti, Indonesia and Myanmar. Considering that a natural disaster, in particular, would occur again due to geographic conditions, a research gap in this topic is: GAP 1 Which lessons would be learnt from non-mature economies in order to foresee, and be prepared for natural disasters?
Focus
Our second category identifies the main focus addressed in the previously considered articles in our literature review on HL and HSC. Here, relevant articles are classified in category Fig. 2, results indicate that most of these studies (54.02%) focus mainly on logistics rather than considering the area of Supply Chain Management, or the interface between these two knowledge areas. One probable explanation for this result is that after a disaster, the transportation of injured people and moving supplies to devastated areas is one of the main tasks in this situation (Habib et al. 2016, p. 1). Implications arising are that researchers have studied more immediate responses than preparation and/or prevention events, and that preparation and prevention activities, which include a supply chain perspective, have been neglected by the relevant authorities due to the fact that there are few studies which analyse preventive actions to deal with such disasters. Supply chain perspective in a context of humanitarian operations means planning supply and demand issues considering volume and location of inventories in order to be available for an immediate response, alternative routes for transportation of goods and negotiation with suppliers to propose emergency plans (Holgun-Veras et al. 2012b). As such, we propose the following: GAP 2 How are public and private sectors supply chains involved and organized to support the preparation and prevention of situations like natural and man-made disasters?
Methods
In this category, we identify the main research methods used in articles that address HLSCM. From our results presented in Table 2, most of the relevant studies are conceptual (26.44%), as shown in Fig. 3.
However, when we analyse each category in isolation, in category 3A articles dealing with the topic using qualitative methods are the majority (19.05%), whereas articles that approach the topic quantitatively (9.52%), or those that use mixed qualitative and quantitative methods together (3.17%), are very few. Consequently, it seems plausible to argue that there is a need for researchers in the HLSCM area to use more quantitative methods or mixed methodologies in their work. In order to do so, it is necessary that data on different aspects of disasters are collected and made available on an open access basis, such as The International Disaster Database. This is because the more data or information regarding calamitous events are available, the greater the chances of developing a strategy of prevention or assistance to victims. Additionally, opening up databases on natural and man-made disasters could enable us to develop simulations of potential impacts, and to forecast potential disasters in order to develop efficient action plans regarding them. of actions. From the points above, we propose: GAP 3 Which barriers exist to make quantitative studies feasible in the field of HLSCM? How is big data being used in the context of HLSCM?
Type of disasters
In order to deal with situations of disaster in a preventive way or by helping its victims, it is necessary to be aware of the two main types of disasters: natural or man-made, and also the speed of these events, i.e., if occasions are of a slow start (slow-onset) or a sudden start (sudden-onset). In this sense, as shown in Fig. 4, not even half of the studies (43.16%) 1 consider such type and speed aspects, and when they do, they are mostly directed at natural disasters of a sudden occurrence which, in turn, are explained by the need to operate efficiently to assist victims. This finding appears logical given the fact that the majority of the articles analysed are theoretical, so the distinction between different kinds of disasters is not always discussed in them. However, different kinds of disasters and their respective pace would require specific resources and capabilities in the operations of humanitarian logistics. Therefore, a research gap emerging is: GAP 4 Which resources and capabilities could be developed by organizations in order to deal with the different kinds, and pace of, disasters?
Phases of the disaster relief
In order to complement the finding of the Type of Disaster category, we identified how humanitarian logistics studies consider the Phases of Disaster Relief. Consequently, as shown in Fig. 5, most such works (41.38%) did not observe or discuss these phases of disaster which, in turn, could be easily understood, since most of the articles on this subject are conceptual, as already explained in Sect. 5.3. However, regarding the articles that do consider this phase element, such works concentrate on the perspective of how the difficulties of immediate response (5B-20.69%) can be overcome. This finding highlights that the main focus of such phase research is in how to deal with disasters rather than how to foresee or to be prepared for them. This result aligns to item 5.2, which may mean that either researchers have studied more emergency situations than preparation situations, or planning and preparation activities have been neglected by the relevant authorities. Therefore, it could be interesting to identify: GAP 5 Which initiatives or plans of prevention to natural and man-made disasters are developed in countries devastated by these disasters? Which kind of approach has been adopted after facing such disasters?
Type of humanitarian organization
Category 6 identifies which organizations, national or global, are observed in the work of humanitarian logistics, since these organizations in various situations play a fundamental role in helping disaster victims. Based on results in Table 2 and Fig. 6, the relevant articles here mostly do not comment on such organizations (6E 65.52%) or, when they do, present brief comments on supranational aid agencies (6A-10.34%) and governmental organizations (6B-9.20%). Looking at these notes, we ask: GAP 6 How can humanitarian organizations coordinate with each other in order to support the preparation/prevention, immediate response, and reconstruction phases of disaster relief?
Region of authorship and disasters
Categories 7 and 8 sought to geographically locate the main areas or regions of authors on the theme of HLSCM in their studies and, consequently, the main continents considered in such studies. Most authors are from the United States (18.39%) or from Europe, and specifically, from Finland (13.79%). An explanation why most European work originates in Finland stems from the significant works of Prof. Gyöngyi Kovács and Prof. Karen M. Spens from the Hanken School of Economics in Finland. Regarding region of disasters, the majority of articles (55.17%) do not allow this identification, since they are conceptual works, as indicated in Sect. 5.3. However, by observing the articles that explain this information, it was possible to identify that there is a great deal of interest from researchers in the Asian and American continents. One possible explanation for this finding lies in the fact that in recent years' serious disasters have occurred in these localities.
Purposes of articles analysed
The articles selected were also analysed in terms of their objectives in order to synthesize the main streams of the HLSCM field, which are presented in Table 2 using content analysis, and a conceptual map was developed as shown in Fig. 7.
Five streams of research interest were identified in the field of HLSCM namely: logistical coordination, framework, traditional logistics and SCM versus HLSCM, performance measurement, and model, based on the number of topics found.
The logistics and SCM coordination stream is composed of understanding the lack of coordination between aid members, the necessity of developing relationships between the players involved in humanitarian operations and identification of challenges to promote coordination within humanitarian logistics activities. The framework suggests researchers investigating the topic of HLSCM through particular theoretical lenses, and consequently, proposing new future avenues of research interest. The traditional logistics and SCM vs HLSCM stream addresses which similarities exist between the activities and decisions of traditional ones, while the performance measurement stream discusses developing indicators in the HLSCM field. Lastly here, the model stream proposes mathematical models to plan routes of localisation of inventories, and to support decision making on resource allocation and recovery after disasters.
Main gaps and fields of interest
Based on our analysis, we identified six research gaps as well as the main fields of interest and trends for the HLSCM area. To summarize, the six research gaps identified during our literature review process are detailed below.
GAP 1 Which lessons would be learnt from non-mature economies in order to foresee, and be prepared for natural disasters? GAP 2 How are public and private sectors supply chains involved and organized to support the preparation and prevention of situations like natural and man-made disasters? GAP 3 Which barriers exist to make quantitative studies feasible in the field? How is big data being used in the context? GAP 4 Which resources and capabilities could be developed by organizations in order to deal with the different kinds, and pace of, disasters? GAP 5 Which initiatives or plans of prevention to natural and man-made disasters are developed in countries devastated by these disasters? Which kind of approach has been adopted after facing such disasters? GAP 6 How can humanitarian organizations coordinate with each other in order to support the preparation/prevention, immediate response, and reconstruction phases of disaster relief?
The main streams of research interest in the field are: coordination, frameworks, traditional versus HLSCM, performance measurement, and model.
Combining the information from our results and research gaps and streams, it could be argued that that more practitioner-focussed research is needed in the field of HLSCM, that preparation and prevention should be addressed either by academics and/or relevant authorities, and that supply chain context needs analysing in order to discuss coordination between aid members. Additionally, understanding the resources and capabilities of the players and agents involved of humanitarian operations seems pivotal to comprehending our proposed research gaps above.
Conclusions
This article synthesizes the research literature on HLSCM in order to organize it under a conceptual map and integrate existing ideas to create new ways of thinking and understanding this theme, using articles identified in Scopus and Web of Science using the terms "Humanitarian Supply Chain" or "Humanitarian Logistics". As a result of our initial search, 155 articles were refined using the filters "Article", "Review", "Article in Press", "Source Type Journal" and "English Language", which produced 87 articles available for full analysis and review.
The main results of our analysis are that the majority of our reviewed articles were theoretical, and as a consequence, few of them discussed issues related to localisation of disaster, type of disaster, phase of disaster relief, and type of humanitarian organization. Their focus was mainly on logistics.
This article contributes to the literature of the HLSCM field by proving a synthesis of this theme and highlighting new perspectives on how it has been addressed, along with potential development areas to further guide future research. The limitations of this article relate to the cognitive process of analysing the identified articles herein, and the filters selected to choose the articles reviewed, which we discussed.
Nonetheless, based on our findings we have proposed six new research gaps and developed an original conceptual map which charts five streams on how to further integrate the humanitarian logistics and supply chain management field, which both represent a new way to further understand this theme.
Further research is specifically needed to apply the concepts of HLSCM in different contexts. Here, wider geographical perspectives could empirically test the global validity of theories used in HLSCM research and understand context dependency in HLSCM. A requirement for further empirical and theoretical work exists regarding international humanitarian operations, as well as not-for-profit organizations.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 5,527.2 | 2019-12-01T00:00:00.000 | [
"Business",
"Engineering",
"Environmental Science"
] |
Control of electronic band profiles through depletion layer engineering in core–shell nanocrystals
Fermi level pinning in doped metal oxide (MO) nanocrystals (NCs) results in the formation of depletion layers, which affect their optical and electronic properties, and ultimately their application in smart optoelectronics, photocatalysis, or energy storage. For a precise control over functionality, it is important to understand and control their electronic bands at the nanoscale. Here, we show that depletion layer engineering allows designing the energetic band profiles and predicting the optoelectronic properties of MO NCs. This is achieved by shell thickness tuning of core–shell Sn:In2O3–In2O3 NCs, resulting in multiple band bending and multi-modal plasmonic response. We identify the modification of the band profiles after the light-induced accumulation of extra electrons as the main mechanism of photodoping and enhance the charge storage capability up to hundreds of electrons per NC through depletion layer engineering. Our experimental results are supported by theoretical models and are transferable to other core-multishell systems as well. Surface states, and the combination of suitable materials, induce spatial gradients in the carrier density of doped metal oxide nanocrystals, affecting their electronic structure and plasmonic behavior. Here the authors demonstrate depletion layer engineering and control in ITO/In2O3 core–shell nanocrystals by tuning the shell thickness or by photodoping.
D oped metal oxide (MO) nanocrystals (NCs) are gaining the attention of the scientific community thanks to their unique properties, such as high electron mobility 1 , the tuneability of their carrier density over several orders of magnitude 2 , chemical stability 3 , and low toxicity 3 , as well as suitable operating temperature 1 , which makes them appropriate for a large plethora of applications, ranging from nanoelectronics and plasmonics to the next-generation energy storage [3][4][5][6][7][8][9][10][11] . In doped MO NCs, surface states, such as surface trap states, defects, vacancies, as well as surface ligands and other bound molecules induce Fermi level pinning causing an upward bending of the energetic bands 2,4,[12][13][14][15][16][17][18] . The spatially varying conduction band translates into a gradient in the carrier density (n e ), sufficient to suppress entirely the metallic behavior of carriers close to the nanocrystal surface. This depletion region effectively acts as a dielectric 2,12,17,19 . Hence, the homogeneous flat-band model, which neglects Fermi level pinning, is not sufficient to accurately describe the behavior of free carriers in MO NCs, as introduced by other groups 2,17,19,20 . In fact, the depletion layer formation considerably affects the conductivity of NC films and their plasmonic behavior, with direct implications on the electric field enhancement, the localized surface plasmon resonance (LSPR) modulation and its sensitivity to the surroundings 2,17,19 . Furthermore, the presence of a surface depletion region induces an important alteration to the particle dielectric function 2 .
Given the strong impact of depletion layers on the optoelectronic properties of nanoscale oxide materials, in this work, we aimed at exploiting the depletion layer formation to control energetic band profiles as a means to understand and improve material characteristics. We explore depletion layer engineering beyond surface states by introducing additional electronic interfaces and by dynamically modulating the carrier density via post-synthetic approaches. We experimentally exemplify this scheme with Sn-doped Indium Oxide (ITO)-In 2 O 3 core-shell NCs and the fine-tuning of the shell thickness (t s ) as well as capacitive charge injection with light (i.e., photodoping). Numerical simulations on both cases serve as a framework to describe in detail the nanoscale evolution of their electronic structure supported by an empirical model that describes the experimental optical properties of all NCs before and after photodoping. The empirical fit model together with electron counting experiments support the band structure calculations well. Through this combined theoretical and experimental work, we unveil that double band bending is a key characteristic of ITO-In 2 O 3 core-shell NCs, describing well also the dynamic introduction of extra electrons via photodoping, a process not fully explained yet. We found that the photo-induced band bending results in an increase in n e predominantly in shell, contradicting the previously reported explanation of a uniform rise of the flatband Fermi level as main mechanism for photodoping 11,16,21,22 . Furthermore, the observed band bending supports charge separation towards the NC interface and avoids possible recombination. We finally exploit depletion layer engineering to improve the capacitive charging process in doped metal oxide nanocrystals upon photodoping, resulting in an accumulation of more than 600 stored charges per nanocrystal of the same size.
Results
We performed numerical simulations based on the solution of Poisson's equation 2 within the parabolic band approximation to illustrate the band structure of NCs and their depletion layer formation (extended details on the calculations are reported in the Supplementary Information) 23,24 . Here, the depletion layer is defined as the region of the NC where n e drops below 10 26 m −3 (threshold value at which we can detect plasmonic features) 19 . In Fig. 1 we show the spatially dependent profile of the conduction band as a result of the upward band bending and its effect on the depletion layer width (W) for different parameters, such as surface potentials (E S ) (Fig. 1a), different materials (Fig. 1b) and the introduction of additional electronic interfaces ( Fig. 1c and Fig. 1d). In the first case, we consider the effect of surface states on the depletion layer formation. The effect of Fermi level pinning is modeled by a fixed surface potential (E S ¼ 0; 0:5; 1; 1:5; 2 eV), from which the band bending profile is derived. An ITO/surface electronic interface is formed. The value of E S can be found experimentally and it is a peculiar parameter for each material interface. It depends on several factors, such as specific densities of trap states, presence of defects and vacancies as well as surface ligands 2 . For increasing E S , we observe an increase in the depletion layer width, which affects a larger fraction of the NC volume (Fig. 1a). These results indicate the importance of surface control to engineer the band structure of NCs. Fig. 1b reports the effects of changing the composition of the NC while keeping E S at a fixed energy. The choice of material, the elemental composition, permittivity (ε), bandgap energy (E g ) and the control over doping levels are of fundamental importance. Different materials, as in this case ZnO, In 2 O 3 , ITO and CdO, have a specific impact on the band bending, showing that W is a unique feature of each system. Another powerful parameter to control the depletion layer and the energy level profile is the introduction of additional electronic interfaces beyond the surface of the nanoparticle. One example is the ITO-In 2 O 3 core-shell nanocrystal system (Fig. 1c).
By surrounding the core ITO NC with In 2 O 3 , two electronic interfaces are formed: ITO/In 2 O 3 and In 2 O 3 /surface. In this case, E S is approximately 0.2 eV below the conduction band minimum of In 2 O 3 , as reported in literature 2 . While in uniform NCs (ITO core only) the band profile is determined by the radial depletion region near the NC surface, the addition of shell layers with thickness t s strongly affects the band's profile ultimately resulting into a double bending of the conduction band. Hence, shell tuning is an effective tool to control W and the shape of the electronic bands inside the NC volume. This effect can be further extended when combining multiple materials sequentially together in coremultishell NC architectures. Figure 1d reports a heterostructure based on three different materials, introducing three electronic interfaces (other combinations of materials and structures are reported in the Supplementary Information, Supplementary Fig. 1). This leads to non-trivial bending and highlights that it is possible to design targeted band structures at the synthesis stage by combining two or more materials in core-shell or core-multishell heterostructures with varying width.
Effective control over NC geometry, size and doping level is crucial to make reliable quantitative assessment of W. To experimentally investigate depletion layer engineering predicted by numerical calculations, we synthetized ITO-In 2 O 3 core-shell NCs with varying shell thickness t s and induced a dynamic variation of their carrier density via photodoping (see Supplementary Information for further details on synthesis methods). Figure 2a shows the TEM images illustrating the progressive growth of the NC due to the formation of the In 2 O 3 shell around the ITO core. The crystalline integrity of the samples was confirmed by XRD measurements (Supplementary Fig. 2). We collected multiple aliquots during the synthesis at different stages of the growth resulting in a set of samples with the same physical core size (R core = 5.5 nm -C0) and various shell sizes (S1-S5, with t s = 1.15 nm, 1.9 nm, 2.9 nm, 4.25 nm). The successful achievement of core-shell structures was confirmed by a comparison of the Sn-dopant concentrations obtained by two different techniques: inductively coupled plasma mass spectrometry (ICP-OES) as volume sensitive technique and X-ray Photoelectron Spectroscopy (XPS) as a surface sensitive technique (Fig. 2b). These techniques probe the volumetric and surface content of Sn atoms, respectively, and have been shown to be effective methods in elucidating nanocrystal dopant distributions 25 . We observe a higher Sn-concentration from the volume-sensitive measurements (black curve in Fig. 2b) as compared to the surfacesensitive measurements (red curve in Fig. 2b) in all samples with shell. This indicates that the Sn dopants are localized in the core of the NCs without significant diffusion of Sn atoms into the shell (further analysis on diffusion effects can be found in Supplementary Fig. 3). The absorption spectra of the representative samples, normalized to the maximum are reported in Fig. 2c (dotted curves). The spectra are governed by intense resonances in the near-infrared (NIR) that are assigned to localized surface plasmon resonances (LSPRs) as a result of free electrons in the highly doped semiconductor (typically in the range of 10 27 m −3 ) 2,4,25-27 . The LSPR peak position ω LSPR and its peak profile are correlated to several factors, such as the NC geometrical features (e.g., R core ; t s ), n e , the depletion layer width (W), the dielectric constant of the surrounding medium (ε m ), as well as the structural defects and dopant concentration, providing a unique spectral signature of such parameters 21,28-30 . We will come back to this point later when describing the empirical fit model. From the modulation of the LSPR upon shell growth, we observe an initial blue shift of the LSPR (see Supplementary Information, Supplementary Fig. 4). This is ascribed to the activation of surface dopants with the growth of a thin In 2 O 3 layer, which results in an increased carrier density 25 . The following continuous red shift of the LSPR is due to the presence of an increasing shell thickness t s that modifies the dielectric surrounding of the NC 25 . Notably, in particles with a critical thickness t à s = 2.7 nm, a second shoulder appears in the spectrum. This indicates a more complex carrier density profile within the core-shell nanocrystals which induces an independent resonating mode, generated by a sufficiently high carrier density in the shell of the nanoparticle 31 .
To further study the electronic structure of core-shell NCs out of the equilibrium conditions, we post-synthetically alter the number of free carriers via photodoping 4,11,21,32 . Photodoping consists of introducing multiple free charge carriers via light absorption and suppressing carrier recombination by the quenching of the holes with hole scavengers 11,21,33 . The photodoping process in colloidal NCs has been recently investigated with optical and electrochemical (e.g., potentiometric titration) measurements 4,11,21 . Here, we induce the photodoping by exposing our colloidal NCs to light beyond the ITO band-gap in the ultraviolet (UV) region (300 nm-4.1 eV, FWHM = 20 nm) and an intensity of 36.8 mW cm −2 . Figure 2c shows the normalized absorbance of three representative examples before (dotted curves) and after (solid curves) the exposure to 20 min of UV light. After the introduction of extra photocarriers into the system the LSPR absorption increases in intensity (ΔI ¼ I photodoped À I as synthesized ) and its energy shifts ). The photoinduced effects progressively appear with the amount of light Increasing the E S results in the expansion of the depletion width, W (progressively from blue to red). b Impact of different materials on W at fixed E S . c Expansion of W and double bending of the depletion layer in a core-shell structure of ITO-In 2 O 3 with a core radius (R core ) of 5.5 nm and varying shell thickness (t s ¼ 0;1;2;3;4 nm, i.e., blue, light blue, orange, red and dark red, respectively). d Multiple shell system by combining an ITO core (R core = 5.5 nm) with a In 2 O 3 and ZnO shell, with total radius R = 9.5 nm. The band shows a complex profile with a triple bending (green curve). The gray curves illustrate the previously reported case of a uniform ITO NC (dark gray) and an ITO-In 2 O 3 core-shell NC (light gray) with total radius R = 9.5 nm for comparison.
absorbed ( Supplementary Fig. 5) in agreement with previous reports in the literature 4,16,21,34 . The introduced photoelectrons add to the initial free carrier density leading to a stronger interaction with the incoming radiation and hence an increased LSPR absorption. The impact of the photodoping on the LSPR modulation is extremely sensitive to t s , with ΔI almost doubling in the case of the biggest NCs. In Fig. 2d, the normalized absorption spectra for the sample having a t s = 4.25 nm are shown before (black dotted curve) and after (black continue curve) photodoping. In this case, it is possible to note a particularly strong splitting of the LSPR into two major contributions. These results display an enhanced sensitivity of the LSPR peak to photodoping by increasing t s and indirectly hint towards an increased number of stored photoelectrons for higher t s (since the LSPR absorption is proportional to n 2=3 e ) 19 . We now investigate the same system of ITO-In 2 O 3 NCs with varying thickness t s with numerical methods as introduced above. The values for t s were chosen equivalent to the size of synthesized NCs. To further investigate also the photodoping process, we numerically calculate the effects of additional free electrons in the system as a function of t s . To this aim, we introduced a generation function G R ð Þ ¼ I 0 αβe ÀαR , which extends the Poisson's equation by an additional term that represents the spatial distribution of the extra free carriers introduced into the system via photodoping. The intensity of incident photon flux is reported as I 0 , α denotes the photon absorption coefficient, and β denotes the quantum efficiency, respectively 35 . We target to identify how their presence modifies the energy bands and carrier density distribution of the system. In this way, we go beyond the results introduced in Fig. 1 and we assess the dynamic, post-synthetic variation of the electronic band profiles via light-induced charge injection, i.e., photodoping. A comparative study reporting electronic structures and carrier density profiles both before and after photodoping are shown in Fig. 3a and Fig. 3b. We first discuss the effects of shell formation on the electronic structure of the NCs (black curves in Fig. 3a). The Fermi level pinning anchors the depleted region to the surface of the nanocrystal at the same energy, irrespective of t s . However, with increasing t s it affects more strongly the In 2 O 3 shell region, which effectively shields the ITO core from depletion. Consequently, even if W increases, the depletion layer progressively shifts towards the outer region of the NC. An intermediate region between core and surface states is, thus, introduced, resulting into the expansion of the active core region (R active ), i.e., the region of the NC volume not affected by W, which is typically larger than R core . In fact, the spatial extent of these electronic features does not correspond to the as synthesized structural parameters (i.e.,R core , t s ). This expansion is not due to an introduction of extra donor atoms, nor to diffusion effects (more details in Supplementary Information, Supplementary Fig. 3). With increasing t s , a more pronounced bending of the bands occurs, and it extends for nanometers into the NC. The corresponding carrier density distribution (black curves in Fig. 3b) shows a non-trivial profile. The double bending can be explained by a leakage of carriers into the shell. The carrier density in extended regions of the undoped shell reaches values beyond~1 Á 10 26 m −3 . The presence of carrier density in this range in the undoped In 2 O 3 region indicates that for t s >t à s ¼ 2:7 nm it is not appropriate to approximate the ITO-In 2 O 3 system as a doped core-dielectric shell. Instead, it must be considered as a dual-plasmonic material with a specific carrier density in the core (n core = 1.1 Á 10 27 m −3 ) and enhanced carrier density in the shell with n shell < n core . This explains the experimentally observed double features in the LSPR (see Fig. 2c), which are reproduced by our simulated absorption spectra (Fig. 2d) when implementing the carrier density profile extracted from Fig. 3b. The observed double bending of the energetic bands becomes more pronounced upon photodoping in all samples (blue curves in Fig. 3a) with an immediate impact on the carrier density distribution (Fig. 3b). Indeed, after photodoping for samples beyond t à s the band profile approaches a step function with two distinct energy levels: the conduction band (CB) level in the core and an energetic level approx. 0.45 eV above the CB in the shell. This effect is observed in the carrier density profile as a (close to) two-step profile. In the samples with t s > t à s ¼ 2:7 nm the maximum n e reached in the NC shell differs from the one in the core, reaching values of around~4 Á 10 26 m −3 , while the core carrier density remains nearly constant. The light-induced modulation of the depletion layer width (ΔW ¼ W photodoped À W as synthesized ) increases with a ΔW $ t s 3 law (Supplementary Fig. 6). Since the photo-generated extra carriers tend to fill W, larger ΔW values of the assynthesized NCs justify the possibility to store more electrons in NCs with bigger shells. Hence, from our simulations we conclude that the filling of electronically depleted regions is the main mechanism behind the photodoping process of metal oxide NCs. These findings seem to be in contradiction with the literature reports on the experimentally observed uniform rise of the Fermi energy level as a result of the photo-induced accumulation of multiple photoelectrons, as shown by Schimpf et al. 11,16,21,22 . However, the plasmonic effects of double features observed after photodoping of core-shell nanocrystals are not explainable with a simple Fermi level rise. In a flat-band scenario a uniform rise of Fermi level would necessary imply a blue-shift of ω LSPR , while we experimentally observed photodoped NCs with no blue-shift, a red shift (see Fig. 4b, below) or even a splitting of the ω LSPR To further test our theory, we approach the photodoping process by applying an empirical model to fit the spectra of each sample. Representative fits are shown as orange curves in Fig. 2d. The plasmonic properties of doped MO NCs can be well described within the framework of the Mie scattering theory in the quasistatic approximation. For our samples, we found that the classical Drude model is sufficient to accurately describe the plasmon response, with quantum effects representing only a minor correction. However, for small NCs (e.g., R < 4 nm), and low free charge regime (e.g., N e <100), quantum mechanical effects cannot be neglected 36 (further discussion in the Supplementary Information). The optical response of metals and heavily-doped semiconductors is characterized by the polarizability of the free electrons, depicted in the Drude-Lorentz model with the complex dielectric permittivity ε ω ð Þ ¼ ε 1 À ω 2 P = ω 2 þ iωΓ À Á : Here, the bulk plasma frequency ω p ¼ ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi n e e 2 =ε 0 m à p is a function of the free carrier density (n e ) and the effective electron mass (m à ), Γ is a damping parameter accounting for electron-electron scattering and ε 1 is the high frequency dielectric constant. Within this picture, ω LSPR is directly linked to ω p of the material. The tuneability of the LSPR is provided by the proportionality to n e , which is related to the number of free charges over the active volume (n e $ N e =R 3 active ). Hence, we can link the absorption, which is our physical observable, to the electronic structure of the system. In previous works, the effect of depletion layers was addressed by Zandi et al., who introduced an effective dielectric function using a Maxwell-Garnett effective medium approximation (EMA) 2 . This approach shows that accumulating charges in the NC as a result of electrochemical doping has the effect of increasing the intensity and shifting the position of the LSPR peak as a result of the varying W 2 . We adapt this model by implementing a core-shell structure with frequency dependent core dielectric function ε core ω ð Þ (and constant carrier density, n core ) surrounded by a dielectric shell with ε DL ¼ 4 in the depletion layer. Outside the sphere a dielectric medium with fixed ε m ¼ 2:09 is present. Within this picture, we approximate the continuous carrier density profile n e ðRÞ with discrete regions of uniform density, while we define n e ¼ 0 inside the depletion region (Fig. 3c). We found that the two-layer model describes well the optical spectra when t s <t à s ¼ 2:7 nm (Fig. 3c, i and ii). Importantly, for t s >t à s and most photodoping cases, we found that it was necessary to extend this model in order to fit the spectra. To this aim, we developed a threelayer model based on the Maxwell-Garnett EMA with three concentric regions. In the first two regions the inner core and first shell region, which sum up to R active , have a frequency dependent dielectric constant of ε core ω ð Þ and ε shell ω ð Þ with constant carrier densities of n core R ð Þ and n shell ðRÞ, respectively. Surrounding the frequency dependent core and shell dielectric functions is an additional layer that accounts for the depletion of carriers in the shell, which was not previously considered in models found in literature 2,28,31 . Hence, these two concentric regions are surrounded by a third depleted layer of thickness W with fixed ε DL ¼ 4 and zero carrier density. The surrounding dielectric medium is ε m ¼ 2:09. By taking into account the formation of an additional depletion layer due to the electronic interface between shell and surface, our model goes beyond what has been implemented so far to describe capacitive charges in MO NCs 2,28,31 . In our study, for all values of t s , the most notable changes in n e after photodoping are observed to effectively increase R active and decrease W 28 . The core carrier density n core remains nearly constant with variations of less than 14%, while a significant variation occurs in the shell regions, with n shell around $ 5:4 Á 10 26 m −3 .
We give a quantitative comparison between the numerical and empirical approach by plotting the amount of stored carriers in the NC (ΔN e ), defined as the difference between the free carriers of the photodoped NC and the as-synthesized NC (ΔN e ¼ N photodoped e À N as synthesized e ). We observe a good agreement between both approaches, finding that ΔN e increases with t s with a ΔN e $ t s 3 trend, reaching values as high as 600 extra electrons (Fig. 4a). We advanced the studies of the NC stored carriers by using titration on photodoped NCs to count the number of stored electrons (further details in the Supplementary Information) 4,11,37 . By using molecular oxidants (F4TCNQ) to titrate the electrons, we directly measure the average number of electrons extracted per NC. F4TCNQ in this study acts as an electron acceptor. The optical features of F4TCNQ hold as a signature to quantify the extracted electrons. We observe an increase of the number of extracted Experimental comparison between the optical response of two samples with same size and doping concentrations but different electronic structure, before (dotted line) and after (solid line) photodoping (homogeneous ITO in blue, core-shell ITO-In 2 O 3 in red). The sensitivity of the LSPR modulation via photodoping is enhanced in the core-shell case. We highlight that the peak position of the LSPR after photodoping blueshifts in the homogeneous case, while it redshifts in the core-shell case indicating that depletion layer modulation is the main process of photodoping (see discussion above). photocarriers with increasing t s , in agreement with the trend reported for numerical simulations and empirical modeling (Fig. 4a). The discrepancy, up to a factor 2 in the case of large core-shell NCs, is most probably related to a reduced efficiency in the carrier extraction process. Nevertheless, the ΔN e $ t s 3 trend is reproduced displaying that the electron counting experiments together with the empirical fit model support the band structure calculations well.
Finally, we aim at isolating the impact of depletion layer engineering from the volume dependence of ΔN e as shown from Schimpf et al. 16 . From numerical simulations, we found that the number of stored carriers in NCs with a core-shell architecture are significantly larger than in the pure ITO case ( Supplementary Fig. 7a and Supplementary Fig. 7b). To confirm this result experimentally, we perform a quantitative analysis of one specific t s and compare it to a similar NC without shell (only core) with all other parameters unchanged (i.e., total NC radius R, doping density N d , experimental conditions). The optical absorption spectra before and after photodoping are depicted in Fig. 4b. By applying our empirical model to this case, we obtain that core-shell NCs can accumulatẽ 40% more carriers than uniform ITO NCs of the same size. Our numerical simulations predict that this enhancement increases with increasing shell thickness. Hence, we demonstrate that depletion layer engineering can improve charge storage capacity and more generally that the band bending delivers an additional degree of freedom to artificially engineer the optoelectronic properties of MO NCs, both during synthesis and post-synthetically.
Discussion
In this work, we demonstrate that depletion layer engineering is an important tool to design and control energetic band profiles in metal oxide NCs. Our results are based on a combination of theory and experiment: we implement a numerical model that is able to account for additional free carriers in the MO NCs, we developed an empirical three-layer model that describes the optical response of the (photodoped) core-shell ITO-In 2 O 3 NCs, and confirmed our results with experimental electron counting experiments through reaction with F4TCNQ. From this combined theoretical and experimental approach, we found that, first, double bending of the bands dominates the electronic structure of (photodoped) core-shell ITO-In 2 O 3 NCs and that the depletion layer predominantly affects the In 2 O 3 shell. Second, the electronic rearrangement of energy bands and the filling of electronically depleted regions resulting in the evolution of different levels of carrier density in core and shell are the main mechanism behind the photodoping process of metal oxides NCs. Third, depletion layer engineering allows enhancing the charge storage capability of ITO NCs of the same size. We can extend this model to other systems as well demonstrating the validity of our approach. Our results show that the modulation of the depletion layer represents an interesting avenue to design and improve the properties of MO NCs and their core-shell or core-multishell structures. We foresee multiple practical applications ranging from energy storage to sensing for ITO-and similar metal oxide nanocrystalsbased devices that will benefit from the control of electronic band profiles through depletion layer engineering.
Methods
Core-shell nanocrystal synthesis. ITO/In 2 O 3 core/shell nanocrystals were synthesized in a continuous growth approach with the following step-by-step procedure 19,25,38 . A precursor solution was prepared mixing in a flask tin(IV) acetate and indium(III) acetate in a 1:9 Sn:In ratio. Subsequently, oleic acid was added in a 1:6 metal to acid ratio to yield a 10% Sn doped ITO precursor solution. The flask was stirred and left at 150°C under N 2 for 3 h for degassing The ITO nanocrystals (core) were first prepared by adding the ITO precursor solution via a syringe pump (drop-by-drop at a rate of 0.35 mL/min) to 13.0 mL of oleyl alcohol at 290°C. During the slow-injection procedure a flow of 130 mL/min of N 2 gas was kept in the reaction flask to quickly remove any water vapor formed during the reaction. The ITO cores, stabilized with oleic acid ligands, were continuously grown to a size of 5.5 nm (radius) and isolated by precipitating with 12 mL ethanol. The solid part was collected by centrifugation at 7300 rpm (5540×g) for 10 min, washed twice more with ethanol and dispersed in hexane.
Then, part of the cores was kept for analysis and the rest of the solution reintroduced in fresh oleyl alcohol. For shelling, a second precursor solution was prepared by following the same procedure. In order to yield an undoped indium oleate precursor solution, indium(III) acetate was mixed with oleic acid in a 1:6 molar ratio. Undoped indium oleate was added with the same slow-injection procedure described above. Core-shell samples were then washed with ethanol and the process repeated several times until a final size of~10 nm (radius) was reached. All experiments were performed on samples collected at different stages of the shell growth, and hence sharing the very same ITO core.
Structural characterization of core-shell NCs. The structural characterization of the samples with different shell thickness were analyzed by transmission electron microscopy (TEM) to determine the size and confirm the successful formation of nanocrystals. TEM measurements were performed with a JEOL JEM-1400Plus operating at 120 kV and using lacey carbon grids supported by a copper mesh. The size distribution of the NCs was extracted using ImageJ tools on the images collected 39 .
X-ray Diffraction (XRD) analyses were carried out on a PANanalytical Empyrean X-ray diffractometer equipped with a 1.8 kW Cu Kα ceramic X-ray tube and a PIXcel3D 2×2 area detector, operating at 45 kV and 40 mA. Specimens for the XRD measurements were prepared by dropping a concentrated NCs solution onto a zero-diffraction silicon substrate. The diffraction patterns were collected under ambient conditions using a parallel beam geometry and the symmetric reflection mode. XRD data analysis was carried out using the HighScore 4.1 software from PANalytical.
X-ray Photoemission Spectroscopy (XPS) measurements were performed on a Kratos Axis Ultra DLD spectrometer, using a monochromatic Al Kα source (15 kV, 20 mA). Specimens were prepared by dropping a concentrated NCs solution onto a highly ordered pyrolytic graphite (HOPG, ZYA grade) substrate. High resolution spectra of the Sn 3d and In 3d regions were acquired at pass energy of 10 eV, and energy step of 0.1 eV, over a 300 ×700 microns area. The photoelectrons were detected at a take-off angle of ϕ = 0°with respect to the surface normal. The pressure in the analysis chamber was maintained below 7 × 10 −9 Torr for data acquisition. The data was converted to the VAMAS format and processed using the CasaXPS software, version 2.3.24 40 . The binding energy (BE) scale was internally referenced to C 1 s peak (BE for C-C = 284.8 eV). For the quantitative analysis, the areas of In 3d and Sn 3d peaks were calculated after applying the appropriate background correction across the binding energy range of the peaks of interest. The relative atomic concentrations were then estimated, using the so-called relative sensitivity factors (RSF) provided by Kratos (RSF In 3d = 7.265, RSF Sn 3d = 7.875).
Inductively coupled plasma mass spectrometry (ICP-OES) was performed on all samples to estimate the doping levels and concentrations of the ITO NCs. The elemental analysis was carried out via inductively coupled plasma optical emission spectroscopy (ICP-OES) on an iCAP 6000 Series ICP-OES spectrometer (Thermo Scientific). In a volumetric flask, each sample was dissolved in aqua regia [HCl/ HNO 3 3:1 (v/v)] and left overnight at RT to completely digest the NCs. Afterward, Milli-Q grade water (18.3 MΩ cm) was added to the sample. The solution was then filtered using a polytetrafluorethylene membrane filter with 0.45 μm pore size. All chemical analyses performed by ICP-OES were affected by a systematic error of about 5%.
Optical measurements. Optical measurements were carried out on a Cary5000 UV−vis−NIR Spectrometer. Spectra were collected in anhydrous toluene in the range 280-3200 nm with a scan resolution of 1 nm. After drying the solvent, ITO NCs were transferred in anhydrous toluene (Sigma-Aldrich) in a nitrogen filled glove box. Rectangular anaerobic cuvettes with a sealed screw cap (Starna Scientific) were used for photodoping and titration experiments.
Photodoping process. Before photodoping, the ITO-In 2 O 3 NCs were dissolved in anhydrous toluene, as described above. Subsequently, the photodoping process on the NCs is achieved by illuminating the quartz cuvette containing the solution with the NCs with a UV LED (central wavelength: 300 nm, bandwidth: 20 nm). The cuvette was placed at a distance of 12 mm from the cuvette window (Thorlabs M300L4). UV power density at the front window of the cuvette was 36.8 mW cm −2 .
Redox titration. The titrant was prepared by dissolving 0.34 mg of F4TCNQ (2,3,5,6-tetrafluoro-7,7,8,8-tetracyanoquinodimethane) in 30 mL of anhydrous toluene. The NCs solution was prepared in anhydrous toluene and photodoped in the same manner as described above (typical concentration~0.1• 10 −9 mol/L). The titrant addition steps were carried out in the inert environment of a nitrogen-filled glove box to avoid any contact with ambient oxygen. Electron counting was performed after photodoping by spectroscopic analysis of the neutral, anion and dianion forms of the F4TCNQ molecules 37 . In details, the amount of F4TCNQ (n F4TCNQ , in moles) added at each step of the experiment was calculated from volume of titrant introduced in the cuvette (V), the titrant concentration (C ¼ 0.085 mg/mL), and the titrant molecular mass (276.15 g/mol): n F4TCNQ ðVÞ ¼ CV=276:15. The number of NCs present in solution (n NC ) was calculated from the mean NC size (from TEM images) and the average weight of a NC (from ICP-OES measurements). Thus, the number of F4TCNQ molecules reacted per NC was calculated as: n reacted ðVÞ ¼ n F4TCNQ ðVÞ=n NC . To calculate the number of extracted electrons, n reacted was then multiplied to a factor one or two according to the kind of reaction involved, corresponding to the formation of dianion or anion species, respectively. Two volumes were identified: V 1 , corresponding to the saturation of the dianion reactions (with exclusively two-electron transfers occurring), and V 2 , corresponding to the appearance of neutral peaks signatures (signaling the presence of non-reacted titrants). The growth of anion peaks (i.e., transfer of one electron) between V 1 and V 2 indicate that electron transfer reactions keep occurring after V 1 . In this study, two-electron transfers were considered up to the midpoint between V 1 and V 2 (V mid ¼ ðV 1 þ V 2 Þ=2 and the transfer of one extra electron was considered between V mid and V 2 . The total number of extracted charges was estimated as 2e À Á n reacted V mid À Á þ 1e À Á n reacted V 2 À Á À n reacted V mid À Á À Á . Error bars are representative to the distance between V 1 and V 2 . The effects of titrants on assynthesized ITO-In 2 O 3 samples were tested, showing no sign of interaction in the spectrum. Multi-layers fitting model for LSPRs. The distinct dielectric response of the core-shell NCs is implemented as an effective dielectric function ε eff ω ð Þ based on the Maxwell-Garnett effective medium approximation (EMA). This model is further extended to consider multiple shell regions and corresponding dielectric environments. We fit the experimental data with a particles warm optimization algorithm in MATLAB (R2020a. Natick, Massachusetts: The MathWorks Inc.) and we extract the carrier densities n e;core and n e;shell and spatial extensions (R core , R shell ) of the core and shell regions, respectively, for each NC of increasing shell thickness before and after photodoping.
COMSOL simulations. Simulations for the energy band and carrier density profiles were solved numerically for spherical NCs using a finite-element method. Poisson's equation was solved with the software COMSOL Multiphysics v5.6 (Comsol Inc., Burlinghton MA USA) using a finite-element scheme (see Supplementary Information for details). | 8,741.6 | 2022-01-27T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Lesion Eccentricity Plays a Key Role in Determining the Pressure Gradient of Serial Stenotic Lesions: Results from a Computational Hemodynamics Study
Purpose In arterial disease, the presence of two or more serial stenotic lesions is common. For mild lesions, it is difficult to predict whether their combined effect is hemodynamically significant. This study assessed the hemodynamic significance of idealized serial stenotic lesions by simulating their hemodynamic interaction in a computational flow model. Materials and Methods Flow was simulated with SimVascular software in 34 serial lesions, using moderate (15 mL/s) and high (30 mL/s) flow rates. Combinations of one concentric and two eccentric lesions, all 50% area reduction, were designed with variations in interstenotic distance and in relative direction of eccentricity. Fluid and fluid–structure simulations were performed to quantify the combined pressure gradient. Results At a moderate flow rate, the combined pressure gradient of two lesions ranged from 3.8 to 7.7 mmHg, which increased to a range of 12.5–24.3 mmHg for a high flow rate. Eccentricity caused an up to two-fold increase in pressure gradient relative to concentric lesions. At a high flow rate, the combined pressure gradient for serial eccentric lesions often exceeded the sum of the individual lesions. The relative direction of eccentricity altered the pressure gradient by 15–25%. The impact of flow pulsatility and wall deformability was minor. Conclusion This flow simulation study revealed that lesion eccentricity is an adverse factor in the hemodynamic significance of isolated stenotic lesions and in serial stenotic lesions. Two 50% lesions that are individually non-significant can combine more often than thought to hemodynamic significance in hyperemic conditions. Graphical Abstract Supplementary Information The online version contains supplementary material available at 10.1007/s00270-024-03708-x.
Introduction
In clinical practice, the severity of arterial stenotic lesions is quantified non-invasively by anatomic grading through imaging or by measuring blood velocities using duplex ultrasonography [1].These approaches have a reasonable correspondence with invasively measured pressure gradients over a stenosis [2], but their application for defining significance of multilevel or tandem stenotic lesions is limited [3].For calculating systolic velocity ratios, a reference upstream peak systolic velocity for the downstream stenosis is unreliable, as this velocity may be elevated by the upstream stenosis [4].Moreover, the stenoses may hemodynamically interact if they are in close proximity [5], making it unclear if and when two stenoses of borderline significance add up to a combined hemodynamic significance.
Serial stenotic lesions are common and have been reported in 41% of femoropopliteal arteries [4] subject to endovascular treatment and in 29% of coronary arteries subject to angiography [6].For coronary stenoses, a measurement of the translesional pressure gradient during hyperemia is commonly taken.From this measurement, the fractional flow reserve (FFR) can be derived, which in combination with the measurement of wedge pressure can be used to estimate the individual severity of both lesions [7].FFR measurements are invasive, however, and for peripheral regions, the materials and equipment required for these measurements are often not available.It would therefore be beneficial to have an understanding for when two lesions combined lead to hemodynamic significance.
Previous studies have addressed the theoretical [8] and practical aspects of pressure gradients of single concentric and eccentric stenosis [9,10], as well as the combined pressure gradient of serial concentric lesions [5,11,12].The pressure gradient of a single lesion is determined primarily by the stenosis shape, the flow velocity, and the
Lesion Eccentricity Plays A Key Role In Determining The Pressure Gradient Of Serial Stenotic Lesions -Results From A Computational Hemodynamics Study
All lesions are 50% area reduction Eccentricity has a complex but major effect on the pressure gradient of serial arterial lesions.Two mild 50% eccentric lesions in series can be hemodynamically significant, in contrast to two concentric lesions.
Impact on pressure gradient Lesion
Pressure gradient
Vessel wall motion
Reynolds number, defined as the ratio of the vessel diameter and flow velocity relative to the fluid viscosity [8].
Whether the individual gradients of two concentric lesions are mutually additive depends on whether the distal stenosis is close enough to the proximal stenosis to interfere with normalization of the post-stenotic jet [8,11].This normalization distance will depend on the Reynolds number and stenotic shape and severity of the proximal stenosis.For interstenotic distances smaller than this normalization length, e.g., less than 10 diameters for nonturbulent flow [5], the total pressure gradient becomes less than the sum of the two individual stenoses, due to the convective energy losses of the first stenosis being limited by the second.For severe stenotic lesions ([ 90%) with turbulent flow, the effect of two isolated regions has been reported to linearly add up when the interstenotic distance is more than four diameters [8].Drawback of these studies is the fact that only serial concentric lesions were assessed and steady flow and rigid walls were assumed.The convective energy loss of eccentric stenoses is more complex, which may produce unforeseen interactions between multiple eccentric stenoses.The purpose of this study is to evaluate hemodynamic significance of serial stenotic lesions, by simulating a range of combinations of both eccentric and concentric stenoses.Variety in eccentric shape and distance between the proximal and distal lesions, in addition to differences in flow rate and the effect of flow pulsatility and wall deformability, will be investigated.
Stenotic Flow Models
Blood flow was simulated with the open-source SimVascular software [13], capable of simulating pulsatile flow and vessel wall motion.Flow was simulated through three shapes of stenotic lesions (Fig. 1A): a concentric lesion (C), an eccentric circular lesion (E1), and an eccentric semicircular lesion (E2).These geometries were investigated as isolated lesions in a previous study [9].A nominal 6-mm diameter was chosen for the models, which reflects the average human diameter of the superficial femoral artery.All three lesion cross-sections corresponded to a 50% area reduction, and the C lesion was slightly (0.01 mm) offset from the centerline to exclude an unrealistic purely axisymmetric solution.The lesions were investigated as a single lesion and through combinations of two stenoses with varying interstenotic length: three non-stenotic reference diameters (3D = 18 mm) and six diameters (6D = 36 mm) (Fig. 1B).For the 3D and 6D scenarios, nine combinations of the first and the second stenoses shape were possible.In addition, in case of two eccentric stenoses, the rotation angle can be varied from 0 to 180°.For these cases, rotation angles of 0, 90, and 180 degrees were chosen, yielding 17 unique combinations for the 3D and 6D distances each.In combination with the single stenoses, this resulted in a total of 37 geometries.The geometries were designed in SolidWorks 2022 (Dassault Syste `mes, Ve ´lizy-Villacoublay, France).The lofting operation was applied to smoothly bridge healthy and stenotic parts, with start and end constraints set as 'normal to profile' with a direction vector length of 1 mm.
Rigid Wall Simulation
The open-source finite element-based SimVascular software [13] was used to mesh the inner geometry with tetrahedral elements, combined with a prismatic boundary layer.Blood was considered as a Newtonian fluid with a dynamic viscosity of 3.5 mPa.s and a density of 1059 kg.m 3 , and the vessel walls were modeled as mechanically rigid.A parabolic profile at a steady flow rates of 15 mL/s and 30 mL/s was set.The 15 mL/s corresponded to the mean flow rate in the superficial femoral artery that we measured in seven healthy volunteers (four males, age 20-30 years).This flow rate in a 6-mm artery is characterized by a Reynolds number of 933.The 50%-area stenoses doubled the mean velocity, yielding a Reynolds number downstream of the lesions of roughly 1800.For this flow rate, the stenoses under investigation are individually of subclinical significance (pressure gradient \ 5 mmHg), and turbulent effects play a minor role, if any.The higher 30 mL/s flow rate was simulated to investigate whether the stenotic interactions would appreciably change during increased blood flow.At the outlet, a resistance boundary condition was prescribed that yielded a distal pressure of 97 mmHg.
To assess the effect of flow pulsatility, unsteady simulations were performed for which a sine wave with an amplitude of 5 mL/s and a frequency of 1 Hz was added to the steady 15-mL/s flow rate, yielding a pulsatile flow between 10 and 20 mL/s.For this flow rate, a Windkessel RCR outflow condition [13] was set with the capacitance value tuned to reproduce a pulse pressure of 40 mmHg.
Deformable wall simulation
To investigate whether wall motion could meaningfully impact the flow dynamics in serial stenoses, simulations of fluid-structure interactions for a subset of the models were performed.Two methodologies were employed: the Coupled Momentum Method (CMM) [14] and Arbitrary Lagrangian-Eulerian (ALE) method [15].The CMM method approximates the vessel wall as a thin linearly elastic membrane, which constitutes an efficient approach for fluid-structure interaction (FSI) when significant bending is absent [14].For the ALE method, the vessel wall was separately meshed, and its nonlinear structural mechanics was fully evaluated using a monolithic approach implemented in the separate SimVascular FSI solver [15].
Elasticity of the vessel wall was set to 40 MPa with a constant wall thickness of 0.3 mm (10% of the lumen radius).These values correspond to a physiologic wall displacement by pulse pressure of about 10% of the radius in a healthy artery [14].For the ALE method, the vessel outer wall was pre-stressed [16] and kept in place with external tissue support with parameters that mimicked Fig. 2 Velocity contours for single stenotic lesions for a flow rate of 15 mL/s.Left: longitudinal view, right: crosssectional view at the location of the black line.Upper: concentric (C); middle: eccentric circular (E1); lower: eccentric semicircular (E2).The secondary flow vectors are displayed in the right crosssections, with a maximum of 9.2 cm/s for the E2 model 123 external tissue support of the superficial femoral artery by its surrounding muscle tissue [17].
The online supplemental information provides further details on the mesh convergence, the vessel wall mesh, the constitutive model used for the vessel wall, and the initialization and boundary conditions for the ALE simulation.
Results
For the single stenosis, the pressure gradient at 15 mL/s equaled 2.3 mmHg for the concentric lesion (C), 2.9 mmHg for the circular eccentric lesion (E1), and 4.1 mmHg for the semicircular eccentric lesion (E2).The velocity contours (Fig. 2) show that peak velocity for all three shapes were of similar order (140 cm/s).The E1 stenosis generated one recirculation area in the post-stenotic region, downstream of which parabolic flow reestablished.The E2 stenosis, in contrast, induced a second area of stagnant flow further downstream at the opposite wall of the first recirculation zone.
For the serial stenotic shapes, the observed pressure gradients for the baseline and high flow rate are listed in Table 1.Several trends can be observed.For a moderate flow rate (15 mL/s), the pressure gradient of two stenotic lesions never exceeded the sum of their parts in any of the cases.For the high flow rate (30 mL/s), there were many combinations where the combined pressure gradient exceeded the sum of the individual lesions by up to 10%.This was only the case for combinations involving an E2 stenosis.Second, an increase in the distance between the two stenoses was in most cases associated with an increased pressure gradient, except for some geometries with a distal eccentric stenosis, especially those with a 90°r otation angle.For the E1-E2 cases with a 90°rotation angle, the pressure gradient was higher for a 3D interstenotic compared to a 6D distance.Furthermore, a 90°r otation between two eccentric stenoses caused higher pressure gradients compared to 0°and 180°for the baseline flow rate, with some exceptions to this rule for the high flow rate.
The velocity contours for a selection of double semicircular eccentric lesions are plotted in Fig. 3.In the upper plot with a 3D interstenotic distance, only one distinct area of flow recirculation was present in the interstenotic region.
In the larger 6D interstenotic area for the middle plot, two distinct areas of flow recirculation were present.The lower plot highlights the altered flow dynamics for a 90°rotation model, where strong secondary flow structures were present in the distal stenosis that caused a more abrupt and chaotic breakdown of the post-stenotic jet with an associated increase in the pressure gradient.
The impact of pulsatile flow and wall deformability on the pressure gradient was investigated for several models and is listed in Table 2. Flow pulsatility in rigid simulations did not meaningfully alter the recirculation zones, and the mean pressure gradient for pulsatile flow was only slightly higher than the corresponding steady flow rate for most cases.The simulations with wall deformability demonstrated categorically different results depending on the applied simulation method.For most models, a substantial increase in the pressure gradient was present for the CMM method, yet only minor increases were observed for the ALE method.An exception is the E2-E2-3D-90°m odel, which demonstrated wall motion that translated the distal E2 stenosis 4.4 mm from its baseline positioning, straightening the trajectory of the flow and reducing the areas of recirculation and the pressure gradient.The ALE simulations with a thicker wall at the stenosis (constant outer vessel diameter, see online supplementary material) led to slightly less wall motion but had minimal impact on the pressure gradient and velocity field.
The different manifestations of wall motion and the consequences for the velocity field are presented for the C- C-6D model in Video 1.The CMM method showed a highfrequent axial buckling [18] motion of the wall, coupled with a fluttering motion of the post-stenotic jet downstream of the proximal and distal stenoses.This motion was associated with a 48-Hz oscillatory mode in the pressure field and led to a strong increase in the pressure gradient.This high-frequent oscillatory behavior of the wall and the fluid motion was not present in the ALE method, which demonstrated a slightly increased pressure gradient compared to a rigid wall.The fluid motion in the ALE method was largely comparable to the rigid wall simulations, showing asymmetric development of the jet downstream of the distal stenosis associated with a subtle asymmetric wall motion.The pressure wave in the ALE simulation was delayed in phase and reduced in amplitude relative to the rigid wall simulation.This reflected the dampening of the pressure wave by wall compliance.Furthermore, the intraand post-stenotic jet in the rigid simulation demonstrated a slightly broadened spatial profile with a lower peak velocity relative to the ALE simulation.
Discussion
The decision whether to treat two subclinical stenoses is difficult in clinical practice, as it is unclear what the threshold for combined hemodynamic significance is.Traditional measures like the percentual diameter or area reduction or the Doppler peak systolic velocity ratio do not reflect the additive effect but only assess the severity of the most severe stenosis [3].This simulation study demonstrated that eccentricity is a key element in the hemodynamic significance of both single and serial 50%-area lesions.Two unfavorably arranged semicircular eccentric lesions demonstrated a gradient of 24 mmHg, relative to 12 mmHg for two concentric lesions of equal area reduction.Furthermore, for two eccentric lesions, the combined pressure gradient at high flow rates was often found to exceed the sum of its two isolated lesions, highlighting the adverse impact of eccentricity in serial lesions.In symptomatic patients with one or more subcritical (e.g., \ 75% area) eccentric lesions, this suggests that treatment of the lesion(s) may improve symptoms.The role of eccentricity was complex and flow ratedependent.This has also been demonstrated for single stenotic lesions, where eccentricity did not increase the pressure gradient for low flow rates (Reynolds number 10-1000) [9], but with two-fold increases for moderate flow rates (Reynolds number [ 1000) [10].In this study, the impact of eccentricity also increased for higher flow rates, which can explain the 22.7% discordance between resting gradient and hyperemic gradient classification of serial lesions [12].For two lesions that involved the most eccentric E2 lesion, the combined pressure gradient exceeded the sum of its two isolated lesions for a high flow rate, which was never the case for a moderate flow rate.This observation indicates that moderate serial eccentric lesions may combine to hemodynamic significance in exercise conditions.This could have important consequences for symptomatic patients with mild serial eccentric lesions on anatomic or duplex ultrasound evaluation [4], which may currently not be referred for treatment or for a physiologic evaluation.Eccentric lesions are very common in both the femoral (64%) [19] and the coronary arteries (45.6%) [20].For clinical evaluation of stenotic disease, it is therefore important to appreciate the increased likelihood of hemodynamic significance of two mild lesions when they have an eccentric shape.In these cases, diagnostic thresholds are not possible with duplex ultrasound and are hard to make with CTA, and depending on the localization, an exercise ankle-brachial index or invasive pressure measurement might be needed.
Two other geometric effects that were investigated were the interstenotic distance and the relative rotation of the eccentric lesions.For most cases, an increase in the interstenotic distance was associated with increase in the pressure gradient, with some exceptions, notably for cases with a distal E2 stenosis.These exceptions conflict with previous studies where an increase in interstenotic distance was exclusively associated with an increase in the pressure gradient [5,11].This discrepancy is likely explained by the importance of the inflow profile into the distal stenosis.In the E2 cases, the proximal stenosis led to inflow disturbances into the distal stenosis that increased outflow disturbances and the pressure loss of the distal stenosis.This hemodynamic interaction is similar to a previous observation [5] that a proximal 75% concentric stenosis with a distal 50% concentric stenosis caused a higher pressure gradient compared to a reverse configuration.The effects of interstenotic distance were usually below 10%, however, and without a clear trend making the impact of minor relevance for clinical practice for peripheral arteries.For coronary arteries, serial stenotic lesions with an interstenotic distance below three reference vessel diameters are treated as a single lesion in the SYNTAX I and II scores [21], which may underestimate the significance of these lesions when used for individual risk prediction.Particularly when of eccentric shape, such lesions are likely better assessed with invasive or computational physiologic evaluation.
With respect to the relative rotation of eccentric lesions, a consistent trend was present for the baseline flow where a 90°rotation caused 10-20% higher pressure gradients than for a 0°or 180°rotation.For the high flow rate, this trend was still present but less consistent.In clinical practice, the adverse 90°rotational configuration of serial eccentric lesions is difficult to assess with angiography but can be assessed on pre-operative CTA for lesions with an uncertain indication for intervention.
A two-fold increase in flow rate led to a roughly threefold increase in the pressure gradient across the serial stenotic models.The high flow rate was considered representative of the peak flow rate during systole or the mean flow rate during exercise and led to a hemodynamically significant pressure gradient of over 20 mmHg [22] for eight of the 34 models.The three-fold increase is in line with the theoretical relation of the pressure gradient to the sum of a viscous pressure-linearly related with flow rate-and an inertial pressure loss that scales with the square of the flow rate [8].
Flow pulsatility with a single harmonic oscillation did not significantly alter the pressure gradient for non-compliant walls.The addition of wall deformability led to highfrequent flutter in the CMM method with a significantly increased energy loss.In the comparably more realistic ALE, this behavior was absent, and the flow field largely resembled the rigid wall simulations, with an increase in the pressure gradient of about 10%.An exception was formed by the E2-E2 model, in which a lumen straightening was present during systole, limiting flow recirculation and decreasing the pressure gradient.This behavior is likely unrealistic in diseased peripheral arteries, and the application of higher elasticity of external tissue support would limit vessel displacement.
In the ALE method, a thick wall, the nonlinearized kinematics, and external tissue support including viscous energy loss were modeled.These factors, absent in the presently applied CMM method [14], more realistically represent a bounded vessel wall and likely stabilized the fluid-structure interaction from the growth of the spurious oscillations.In calcified lesions, the wall motion will be decreased [23], limiting the observed effects of the simulated healthy vessel wall.For lipid-filled plaques [23] and vessel diseases that structurally weaken the vessel wall, such as fibromuscular dysplasia, the effects of wall motion may be amplified, and perhaps oscillatory fluid-structure interaction modes can be present.For multifocal fibromuscular dysplasia with a typical string-of-beads appearance, such an effect may contribute to the unexpectedly high pressure gradients that have been described in a few patient cases [24,25].
The effects described in this study were observed for computational models of 6-mm arteries.They can reasonably be generalized to similarly shaped 50%-area stenoses in arteries of other diameters if similar velocities are present (spatiotemporal mean velocity of 53 cm/s for resting flow).This is because in the present Reynolds numbers (Re = 1000), inertial effects dominate (inertial effect accounts for [ 70% of pressure gradient for a 50% stenosis at a Reynolds number of 1000), in which case the pressure gradient is mostly influenced by the stenotic area reduction and flow velocity, and less so by the Reynolds number [8].Other factors such as systemic blood pressure have no direct effect on the pressure gradient but can have an impact through changes in the flow velocity.
Study Limitations
An important limitation for translating the results to clinical practice of this study is that only three smooth stenotic shapes were assessed.Stenotic morphology in patients is highly variable, and especially calcified plaques are characterized by surface irregularity.For assessing flow mechanics in the variety of stenotic shapes in patients, image-based computational fluid dynamic simulations are an attractive and validated method for coronary lesions [26].For peripheral arteries, computational fluid dynamic simulations of a patient's geometry can furthermore be informed with a patient's temporal flow profile obtained from duplex ultrasound [27].Calcified plaques are difficult to quantify accurately using non-invasive imaging and may require intra-vascular ultrasonic or optical imaging [28] for accurate simulations.It would be of interest to investigate whether the observed adverse effect of eccentricity also holds for other stenotic degrees and whether a correlation between eccentricity index [20] and pressure gradient is present in patients.
Further simplifications of this study were the single harmonic flow waveform and the assumption of a Newtonian fluid model.For the infrarenal aorta and its peripheral arteries, the biphasic or triphasic flow will likely lead to more complex flow phenomena, although the mean pressure gradient may not be strongly affected.The inclusion of power-law viscosity model was previously shown to minimally alter the pressure gradient in serial stenotic lesions [11].
Conclusions
The hemodynamic interaction between two stenotic lesions in proximity was complex, especially in case of eccentric lesions.Specific configurations of two 50% eccentric stenotic lesions led to surprisingly high pressure gradients.The pressure gradient of two eccentric lesions was up to twice as high as two similar concentric stenotic lesions and commonly exceeded the sum of the individual eccentric at a high flow rate.For most cases, the effects of pulsatile flow and wall motion were minor in comparison with lesion eccentricity.These findings suggest that symptomatic patients with two or more subcritical eccentric lesions may benefit from treatment and should ideally be evaluated with a hyperemic pressure measurement.
Table 1
Pressure gradients for steady flow E1 eccentric circular, E2 eccentric semicircular, D diameter (6 mm) 123 L. van de Velde et al.: Lesion Eccentricity Plays a Key Role in Determining the Pressure Gradient...
Table 2
Pressure gradients (mmHg) for pulsatile flow in rigid wall and deformable wall models CMM Coupled momentum method, ALE Arbitrary Lagrangian-Eulerian method | 5,430.6 | 2024-04-02T00:00:00.000 | [
"Medicine",
"Engineering"
] |
Simple Recursion Relations for General Field Theories
On-shell methods offer an alternative definition of quantum field theory at tree-level, replacing Feynman diagrams with recursion relations and interaction vertices with a handful of seed scattering amplitudes. In this paper we determine the simplest recursion relations needed to construct a general four-dimensional quantum field theory of massless particles. For this purpose we define a covering space of recursion relations which naturally generalizes all existing constructions, including those of BCFW and Risager. The validity of each recursion relation hinges on the large momentum behavior of an n-point scattering amplitude under an m-line momentum shift, which we determine solely from dimensional analysis, Lorentz invariance, and locality. We show that all amplitudes in a renormalizable theory are 5-line constructible. Amplitudes are 3-line constructible if an external particle carries spin or if the scalars in the theory carry equal charge under a global or gauge symmetry. Remarkably, this implies the 3-line constructibility of all gauge theories with fermions and complex scalars in arbitrary representations, all supersymmetric theories, and the standard model. Moreover, all amplitudes in non-renormalizable theories without derivative interactions are constructible; with derivative interactions, a subset of amplitudes is constructible. We illustrate our results with examples from both renormalizable and non-renormalizable theories. Our study demonstrates both the power and limitations of recursion relations as a self-contained formulation of quantum field theory.
Introduction
On-shell recursion relations are a powerful tool for calculating tree-level scattering amplitudes in quantum field theory. Practically, they are far more efficient than Feynman diagrams. Formally, they offer hints of an alternative boundary formulation of quantum field theory grounded solely in on-shell quantities. To date, there has been enormous progress in computing tree-level scattering amplitudes in various gauge and gravity theories with and without supersymmetry.
In this paper we ask: to what extent do on-shell recursion relations define quantum field theory? Conversely, for a given quantum field theory, what is the minimal recursion relation, if any, that constructs all of its amplitudes? Here an amplitude is "constructible" if it can be recursed down to lower point amplitudes, while a theory is "constructible" if all of its amplitudes are either constructible or one of a finite set of seed amplitudes which initialize the recursion.
For our analysis we define a "covering space" of recursion relations, shown in Eq. (2), which includes natural generalizations of the BCFW [1] and Risager [2] recursion relations. These generalizations, defined in Eq. (12) and Eq. (13), intersect at a new "soft" recursion relation, defined in Eq. (14), that probes the infrared structure of the amplitude. As usual, these recursion relations rely on a complex deformation of the external momenta parameterized by a complex number z. By applying Cauchy's theorem to the complexified amplitude, M(z), one relates the original amplitude to the residues of poles at complex factorization channels, plus a boundary term at z = ∞ which is in general incalculable. Consequently, an amplitude can be recursed down to lower point amplitudes if it vanishes at large z and no boundary term exists.
The central aim of this paper is to determine the conditions for on-shell constructibility by determining when the boundary term vanishes for a given amplitude. We define the large z behavior, γ, of an amplitude by for an n-point amplitude under a general m-line momentum shift, where m ≤ n. Inspired by Ref. [3], we rely crucially on the fact that the large z limit describes the scattering of m Table 1: Summary of the minimal m-line recursion relation needed to construct all scattering amplitudes in various renormalizable theories: Yang-Mills with matter of diverse spins and arbitrary representations, Yukawa theory, scalar theory, supersymmetric theories, and the standard model. The values in parentheses apply if every scalar has equal charge under a U (1) symmetry.
Here φ and ψ denote scalars and fermions, respectively.
n-point amplitude for any given theory. If every amplitude, modulo the seeds, are constructible, then we define the theory to be m-line constructible.
Our results apply to a general quantum field theory of massless particles in four dimensions, which we now summarize as follows:
Renormalizable Theories
• Amplitudes with arbitrary external states are 5-line constructible.
• Amplitudes with any external vectors or fermions are 3-line constructible.
• Amplitudes with only external scalars are 3-line constructible if there is a U (1) symmetry under which every scalar has equal charge.
• The above claims imply 5-line constructibility of all renormalizable quantum field theories and 3-line constructibility of all gauge theories with fermions or complex scalars in arbitrary representations, all supersymmetric theories, and last but not least the standard model. The associated recursion relations are defined in Eq. (12) and Eq. (13).
• Amplitudes are constructible for interactions with derivatives up to a certain order in the derivative expansion.
• The above claims imply m-line constructibility of all scalar and fermion φ m 1 ψ m 2 theories for m 1 + m 2 = m − 1, and of certain amplitudes in higher derivative gauge and gravity theories. The associated recursion relations are defined in Eq. (2).
span the space of all renormalizable theories.
As we will see, our covering space of recursion relations naturally bifurcates according to the number of z poles in each factorization channel: one or two. For the former, the recursion relations take the form of standard shifts such as BCFW and Risager, which is the case for the 5-line and 3-line shifts employed for renormalizable theories. For the latter, the recursion relations take a more complicated form which is more cumbersome in practice, but necessary for some of the non-renormalizable theories.
The remainder of our paper is as follows. In Sec. 2, we present a covering space of recursion relations for an m-line shift of an n-point amplitude, taking note of the generalizations of the BCFW and Risager momentum shifts. Next, we compute the large z behavior for these momentum shifts in Sec. 3. Afterwards, in Sec. 4 we present our main result, which is a classification of the minimal recursion relations needed to construct various renormalizable and non-renormalizable theories. Finally, we discuss examples in Sec. 5 and conclude in Sec. 6.
Definition
Let us now define a broad covering space of recursion relations subject to a loose set of criteria.
In particular, we demand that the external momenta remain on-shell and conserve momenta for all values of z. In four dimensions, these conditions are automatically satisfied if the momentum deformation is a complex shift of the holomorphic and anti-holomorphic spinors of external legs 1 , where η i and η i are reference spinors that may or may not be identified with those of external legs, and I and I are disjoint subsets of the external legs. As shorthand, we will refer to the shift in Eq. 1 There is a more general class of shifts in which both λ i and λ i are shifted for every particle. However, in the case momentum conservation imposes complicated non-linear relations among reference spinors which makes the study of large z behavior difficult.
As we will see, the efficacy of recursion relations depend sensitively on the correlation between the helicity of a particle and whether its holomorphic or anti-holomorphic spinor is shifted.
Throughout, we will define "good" and "bad" shifts according to the choices For example, the bad shift for the case of BCFW yields a non-vanishing contribution at large z in non-supersymmetric gauge theories. The resulting tree amplitude, M(z), is then complexified, but the original amplitude, M(0) is obtained by evaluating the contour integral dz M(z)/z for a contour encircling z = 0. An on-shell recursion relation is then obtained by applying Cauchy's theorem to deform the contour out to z = ∞, in the process picking up all the residues of M(z) in the complex plane. As noted earlier, the momentum conservation must apply for arbitrary values of z, implying which should be considered as four constraints on η i and η i which are easily satisfied provided the number of reference spinors is sufficient.
Factorization
Next, consider a factorization channel of a subset of particles F. The complex deformation of the momenta in Eq. (2) sends P → P (z) = P + zQ, where P is the original momentum flowing through the factorization channel and Q is the net momentum shift, so where F λ and F λ are intersection of F with I and I. As we will see, the physics depends crucially on whether Q 2 vanishes for all factorization channels or not. First of all, the large z behavior is affected because propagators in the complexified amplitude scale as for a given factorization channel. Second, there is a very important difference in the structure of the recursion relation depending on whether Q 2 vanishes in all channels. If so, then each factorization channel has a simple pole at and the on-shell recursion relation takes the usual form, where the sum is over all factorization channels and intermediate states, and M F and MF are on-shell amplitudes corresponding to each side of the factorization channel. However, if Q 2 does not vanish, then each propagator is a quadratic in z and thus carries conjugate poles at Summing over both of these roots, we find a new recursion relation, Under conjugation of the roots, z + ↔ z − , the summand is symmetric, so crucially, square roots always cancel in the final expression in the recursion relation. Of course, the intermediate steps in the recursion are nevertheless quite cumbersome in this case.
Recursion Relations
All known recursion relations can be constructed by imposing additional constraints on the momentum shift in Eq. (2) beyond the condition of momentum conservation in Eq. (4). In the absence of extra constraints, the reference spinors η i and η i are arbitrary so by Eq. (6), Q 2 = 0 generically. In this case the recursion relation will have square roots in intermediate steps.
On the other hand, if Q 2 = 0, then Q must factorized into the product of two spinors. If where we have chosen a form such that momentum conservation is automatically satisfied. Note that the case m = 2 corresponds precisely to BCFW, so these shifts are a generalization of BCFW to arbitrary m ≤ n. Note that for m ≤ 3, any momentum shift is necessarily of the form of the first or second possibility, so Q 2 = 0 automatically. Thus, Q 2 = 0 is only possible if m > 3.
Remarkably, while the recursion relations in Eq. (12) and Eq. (13) are naturally the generalizations of Risager and BCFW, they actually overlap for a specific choice of reference variables! In particular, consider the [0, m -line and [m, 0 -line shifts in Eq. (12) for the case of η = λ j and η = λ j , and modifying the constraint from momentum conservation such that i∈I c i λ i = λ j and i∈ I c i λ i = λ j , respectively. In this case the recursion coincides with the form of the [1, m − 1 -line and [m − 1, 1 -line shifts in Eq. (13), with a curious feature that λ j (z) = λ j (1 − z) and λ j (z) = λ j (1 − z). We dub these "soft" shifts for the simple reason that when z = 1 the amplitude approaches a soft limit. For m = 3, the soft shift takes a particularly elegant form, 3-line soft shift: . (14) This shift offers an on-shell prescription for taking a soft limit. We will not make use of this shift in this paper but leave a more thorough analysis of this soft shift for future work [18].
Large z Behavior of Amplitudes
The recursion relations in Eq. (9) and Eq. (11) apply when the amplitude does not have a pole at z = ∞. In this section we determine the conditions under which this boundary term vanishes. Although one could study the boundary term in BCFW or Risager shift instead, as in Ref. [4,5], we will not proceed in this direction. Concretely, take the n-point amplitude, M, deformed by an m-line shift where m ≤ n. At large z, the shifted amplitude describes the physical scattering of m hard particles in a soft background parametrizing the remaining n − m external legs. Thus, we can determine the large z behavior by applying a background field method: we expand the original Lagrangian in terms of soft backgrounds and hard propagating fluctuations, then compute the on-shell m-point "skeleton" amplitude, M . If the skeleton amplitude vanishes at large z, then the boundary term is absent and the recursion relation applies. A similar approach was applied in Ref. [3] for BCFW for the case of a hard particle propagator, i.e. the skeleton amplitude for m = 2.
Crucially, it will not be necessary to explicitly compute the skeleton amplitude. Rather, from Lorentz invariance, dimensional analysis, and the assumption of local poles, we will derive general formulae for the large z behavior of m-line shifts of n-point amplitudes. Hence, our calculation of the large z scaling combines and generalizes two existing proofs in the literature relating to the BCFW [3] and all-line recursion relations [6].
Ansatz
The basis of our calculation is a general ansatz for the m-point skeleton amplitude for m ≤ n, where the sum is over Feynman diagrams F , which are contracted into products over the polarization vectors ε and fermion wavefunctions u of the hard particles 2 . Here g = g × B where g is a product of Lagrangian coupling constants and B is a product of soft field backgrounds and their derivatives. Note that g has free Lorentz indices since it contains insertions of the soft background fields and their derivatives. Crucially, since B is comprised of backgrounds, it is always non-negative in dimension, so [B] ≥ 0 and For the special case of gravitational interactions, each insertion of the background graviton field is accompanied by an additional coupling suppression of by the Planck mass, so [ g] = [g]. This is reasonable because the background metric is naturally dimensionless so insertions of it do not change the dimensions of the overall coupling.
Note the skeleton amplitude receives dimensionful contributions from every term in Eq. (15) except the vector polarizations, so via dimensional analysis. This fact will be crucial for our calculation of the large z scaling of the skeleton amplitude for various momentum shifts and theories.
Large z Behavior
We analyze the large z behavior of Eq. (15). The contribution from each Feynman diagram F can be expressed as a ratio of polynomials in momenta, so F = N/D. Here N arises from interactions while D arises from propagators. We define the large z behavior of the numerator and denominator as γ N and γ D where We now compute the large z behavior of the external wavefunctions, followed by that of the Feynman diagram numerator and denominator, and finally the full amplitude.
External Wavefunctions. First, we study the contributions from external polarization vectors and fermion wavefunctions. For convenience, we define a "weighted" spin, s, for each shifted 2 Note that polarization vectors arise from any particle of spin greater than or equal to one.
leg of +/− helicity, which is simply the spin s multiplied by + if the angle/square bracket is shifted and − if the square/angle bracket is shifted. In mathematical terms, where good and bad shifts denote the correlation between helicity and the shift of spinor indicated in Eq. (3). As we will see, a multiplier of +/− tends to improve/worsen the large z behavior. In terms of the weighted spin, it is now straightforward to determine how the large z scaling of the polarization vectors and fermion wavefunctions, so more positive values of s, corresponding to good shifts, imply better large z convergence.
Numerator and Denominator. The numerator N of each Feynman diagram depends sensitively on the dynamics. However, for a generic shift, we can conservatively assume no cancellation in large z so the numerator scales at most as its own mass dimension, The denominator D comes from propagators which are fully dictated by the topology of the diagram. Each propagator can scale as 1/z 2 or 1/z at large z, depending on the details of shifts.
Thus, the large z behavior of denominator is constrained to be within For the Q 2 = 0 shifts, every propagator scales as 1/z so γ D = [D]/2. On the other hand, for the Q 2 = 0 shifts, we would naively expect that there is a 1/z 2 from each propagator given that the reference spinors are arbitrary. However, this reasoning is flawed due to an important caveat. Since the theory contains soft backgrounds, the Feynman diagram can have 2-point interactions of the hard particle induced by an insertion of the soft background. If the 2-point interactions occur before the hard particle interacts with another hard particle, then Q is simply the momentum shift of a single external leg, so Q 2 = 0 accidentally, and the corresponding propagator scales as 1/z rather than 1/z 2 . It is simple to see that the number of such propagators is [D] − γ D . See Fig. 1 for an illustration of this effect. Thus the large z behavior is constrained within the range of Eq. (22).
where v ≥ 3 is the valency of the interaction vertices in the fundamental theory and the [B] term arises because we have conservatively assumed that every single background field insertion contributes to a 2-point interaction to the amplitude.
Full Amplitude. Combing in the large z scaling of the external wavefunctions in Eq. (20) with that of the numerator and denominator of the the Feynman diagram in Eq. (18), we obtain where in the second line we have plugged in the inequality from Eq. . This is the master formula from which we will derive corresponding large z behaviors in Q 2 = 0 and Q 2 = 0 shifts. As expected, the above bound can be improved for Q 2 = 0 shifts because in this case the product of any two hard momenta only scales as z rather than z 2 . We render the specific derivation in subsequent sections.
The general formula in Eq. (24) can be reduced to more illuminating forms by making the assumption of specific shifts. We consider the large z behavior for the Q 2 = 0 and Q 2 = 0 shifts in turn.
To start, we calculate the large z behavior for a general momentum shift defined in Eq. (2). As noted earlier, for arbitrary reference spinors, Q 2 = 0 as long as m ≥ 3, which we assume here.
The large z behavior is given by Eq. (24). The offset [D] − γ D is the number of propagators with Q 2 = 0 as discussed before. As shown for an example topology in Fig. 1, there is at least one soft background associated with each propagator for which Q 2 = 0. The canonical dimensions of fields leads to The large z convergence is best for the largest possible value for s, which occurs if we only apply good shifts to external legs, so s = s. As we will see, this particular choice has the best large z behavior of any shfit. There is an inherent connection between Q 2 = 0 and improved z behavior of the amplitude, simply because in this case, propagators fall off with z 2 in diagrams.
where n B is the number of holomorphic spinors indices that come from soft background insertions. Again solving for [F ] with Eq. (17), and applying our arguments to both shifts, the large where h denotes helicity and we have defined In a theory with only spin s ≤ 1 fields, soft background insertions contribute at most one holomorphic or anti-holomorphic spinor index to be contracted with. Thus, n B is balanced by the dimension [B], so ∆ ≤ 0 in these theories. On the other hand, for a theory with spin s ≤ 2 fields, e.g., gravitons, then an insertion of a graviton background yields two spinor indices but only with one power of mass dimension. For these two cases we thus find Eqs. (28) and (30) together give our final answer. For an all-line shift, m = n, so ∆ = 0 and this bound reduces to known result from Ref. [6]. Note that in some cases Eq. (26) is stronger than Eq. (28) so we have to consider both bounds at the same time. Applying the reasoning to both shifts, we obtain where h j is the helicity of particle j. We then see that the
Renormalizable Theories
To begin we consider the generic momentum shift defined in Eq. (2), which has large z behavior derived in Eq. (25). Since a renormalizable theory only has marginal and relevant interactions, the mass dimension of the product of couplings in any scattering amplitude is [g] ≥ 0. Plugging this into Eq. (25), we find that a 5-line shift suffices to construct any amplitude. This is also true for the 5-line shifts defined in Eq. (12) and Eq. (13), whose large z scaling is shown in Eq. (28) and Eq. (31) by conservatively plugging in ∆ = 0 for renormalizable theories. Consequently, 5-line recursion relations provide a purely on-shell, tree-level definition of any renormalizable quantum field theory. We must take as input the three and four point on-shell tree amplitudes, but this is quite reasonable, as a renormalizable Lagrangian is itself specified by interactions Note that the charge condition we have assumed is automatically satisfied if every scalar in the theory has equal charge under the scalar U (1) and we shift three same-signed scalars.
It seems impossible for this 3-line recursion to construct all equal-charged U (1) scalar amplitudes, especially with the presence of quartic potential. However, as three same-signed scalars only available from six points, this 3-line recursion still takes three and four point amplitudes as seeds. The information of quartic potential still enters to this special 3-line recursion. We will demonstrate with a simple φ 4 theory in next section.
Putting everything together, we have shown that a 3-line shift can construct any amplitude with a vector or fermion, and any amplitude with only scalars if every scalar carries equal charge under a U (1) symmetry. Immediately, this implies that any theory of solely vectors and fermions-i.e. any gauge theory with arbitrary matter content-is constructible 3 . Moreover, all amplitudes in Yukawa theory necessarily carry an external fermion, so these are likewise constructible. The standard model is also 3-line constructible simply because it has a single scalar-the Higgs boson-which carries hypercharge. Finally, we observe that all supersymmetric theories are constructible. The reason is that without loss of generality, the superpotential for such a theory takes the form W = λ ijk φ i φ j φ k , where we have shifted away Polonyi terms and eliminated quadratic terms to ensure a massless spectrum. For such a potential there is a manifest R-symmetry under which every chiral superfield has charge 2/3. Consequently, all complex scalars in the theory have equal charge under the R-symmetry and all amplitudes are 3-line constructible. This then applies to theories with extended supersymmetry as well. The conditions for on-shell constructibility in some familiar theories is summarized in Tab. 1.
Non-renormalizable Theories
In what follows, we first discuss non-renormalizable theories which are constructible, i.e. for which all amplitudes can be constructed. As we will see, this is only feasible for a subset of non-renormalizable theories, so in general, the covering space of recursion relations does not provide an on-shell formulation of all possible theories. Second, we consider scenarios in which some but not all amplitudes are constructible within a given non-renormalizable theory. In many cases, amplitudes involving a finite number of higher dimension operator insertions can often be constructed by our methods.
Our analysis will depend sensitively on the dimensionality of coupling constants, which we saw earlier have a huge influence on the the large z behavior under momentum shifts. Table 2 Theory It is straightforward to generalize the arguments above to a theory of scalars and fermions interacting via a φ v 1 ψ v 2 . We find that this theory is fully constructible with a general m-line Finally we consider perhaps the most famous constructible non-renormalizable theory: gravity. As is well-known, all tree-level graviton scattering amplitudes can be recursed via BCFW [3], With the Q 2 = 0 shifts, we can always construct an n-point amplitude with m > (n + 2)/3.
Applying the above result to NMHV amplitudes for m = 3, we find M z n−7 under a Risager 3-line shift, consistent with the known behavior z n−12 [9]. Generally, graviton amplitude can be constructed with Q 2 = 0 shifts if m ≥ n/2. Ref. [6] shows amplitudes with total helicity |h| ≤ 2 cannot be constructed from anti-holomorphic/holomorphic all-line shift. We see this can be resolved if we choose to do "good" shift on only plus or negative helicity gravitons. Our large z analysis predicts the scaling grows linearly with n and this is indeed how the real amplitude behaves. From this point of view, the amplitude behaves surprisingly well under BCFW shift because the scaling doesn't grow as n increases.
An interesting comparison of our large z behavior is to use the KLT relations [10]. Consider the large z behavior of n point amplitudes under a (m ≥ 4)-line Q 2 = 0 shift. A n point graviton amplitude M grav can be schematically written as a "square" of gauge amplitudes M 2 gauge by the KLT relation where we neglect all the permutation in particles and details of s-variables 6 . The KLT relation actually predicts a better large z behavior than our dimensional analysis.
Constructible Amplitudes. The above non-renormalizable theories are some limited examples which can be entirely defined by our on-shell recursions. Modifying these theories generally breaks the constructibility! For instrance, a theory of higher dimensional operator ∂ 2 φ v cannot be constructed. This is clear from Feynman diagrams because the derivatives in vertices compensate the large z suppression from propagators. This implies the chiral Lagrangian is not constructible even with the best all-line shift 7 . In gauge theories, we cannot construct amplitudes where all vertices are higher dimensional F v operators either.
Fortunately, we are usually interested in effective theories with some power counting on higher dimensional operators. If the number of operator insertions is fixed, then we can construct amplitudes with generic multiplicity. To illustrate this, consider amplitudes in a renormalizable theory (spin ≤ 1) with a single insertion of a d-dimensional operator. If we apply a general m-line Q 2 = 0 momentum shift, Eq. (25) gives In the worst case scenario, s = 0, we see an (d + 1)-line shift suffices to construct any such amplitude. For [0, m -and [m, 0 -line shifts, the sum of their large z scaling is where we use ∆ = 0 for theories with spin ≤ 1. The amplitude can always be constructed from one of them provided m > d. We see the input for recursion relations are all amplitudes with d points and below. It is not surprising. After all, we need this input for a φ v operator. If the amplitude has higher total spin/helicity, less deformation is needed to construct it. We will demonstrate this with F v operator in next section. The result is similar to the conclusion of Ref. [6], but we can be more economical by choosing (d + 1)-line or less rather than an all-line shift.
In this section, we illustrate the power of our recursion relations in various theories. The calculation is straightforward once the large z behavior is known.
where y and g are the Yukawa and gauge coupling constants, respectively. There are only two non-vanishing factorization channels. Based on these seeds, it's straightforward to write down where a, b, c are fixed SU (3) flavor indices, no summation implied. We apply our recursion relations on the (color-ordered) 6-point scalar amplitude where the superscripts and subscripts denote R-symmetry and flavor indices, respectively. In the massless limit, all scalars in the chiral multiplets carry equal R-charge. Therefore we can shift the three holomorphic scalars, namely, [{1, 2, 3}, 0 . The relevant lower point amplitudes for recursion are where η is the reference spinor and P F denotes the total momentum of the states in the factorization channel F. We have verified numerically that the answer is, as expected, independent of reference η. Since the scalar amplitude is independent of the fermions, this result applies to any theory with the same bosonic sector. When λ = 1, the SU (3) flavor symmetry together with the U (1) R-symmetry combine to form the SU (4) R-symmetry of N = 4 SYM. Our expression agrees with known answer in this limit.
where hatted variable is evaluated at factorization limit and z ±,456 are the two solutions of P 2 456 = 0. The result is summed over permutation of (1,3,5) and (2,4,6) with σ being the number of total permutation. In the last line, we use the fact that 2|P 456 |5] is linear in z and only non-deformed part survives after exchanging z ±,456 . We see the final answer has no square root as claimed before.
Maxwell-Einstein Theory. We discuss the theory where a U (1) photon minimally couples to gravity. The coupling constant has the same dimension as in GR (see Tabel 2). But as a photon has less spin than a graviton, the large z behavior is worse. We focus on the amplitudes with only external photons given that any amplitude with a graviton can be recursed by BCFW shift [7]. Using a m-line Q 2 = 0 shift, we find M z n+2−2m at large z; thus, it's always possible to construct such an amplitude when m > (n + 2)/2. Together with BCFW shift on gravitons, the theory is fully constructible! Using Eqs.
We conclude [v + 1, 0 -and [v − 1, 1 -line shifts suffice to construct the amplitude with the given helicity configuration.
The case of F 3 operator has been studied extensively in Ref. [14]. Given the large z behavior above, the general MHV-like expression in Ref. [15] This agrees with the result in Ref. [15,16].
The case of φ tr(F F ) operator, which is popular for the study of Higgs phenomenology, is very similar to F 3 operator. The MHV-like formula and CSW expansion in Ref. [15] can also be proved analogously. [43] ξ3 2 43 ξ4 2 (53) where |ξ is a reference spinor in 3-line shift. The result in second line is manifest the leading soft factor of particle 4. After canceling the reference spinor, the result in the last line is expressed in is the corresponding amplitude in gauge theory with F 3 operator given in Eq. (50). It agrees with Ref. [14]. It obvious from the answer that any [m, 0 shift cannot construct the amplitude.
Outlook
In this paper we have determined the minimal set of recursion relations needed to construct renormalizable and non-renormalizable field theories of massless particles in four dimensions. We have shown that all renormalizable theories are constructible from a shift of five external momenta. Quite surprisingly, a shift of three external momenta suffices for a more restricted but still enormous class of theories: all renormalizable theories in which the scalars, if present, are charged equally under a U (1) symmetry. Hence, we can construct all scattering amplitudes in any gauge theory with fermion and complex scalar matter, any supersymmetric theory, and the standard model.
Our results suggest several avenues for future work. Because our analysis hinges solely on dimensional analysis, Lorentz invariance, and locality, it should be possible to generalize our approach to a broader class of theories. In particular, there is the question of theories residing outside of four dimensions and involving massive particles. Moreover, one might study an expanded covering space of recursion relations that include multiple complex deformation parameters or simultaneous shifts of holomorphic and anti-holomorphic spinors of the same leg.
The recursion relations presented here might also offer new tools for studying the underlying properties of amplitudes. For example, the enhanced large z behavior of amplitudes at large momenta implies so-called "bonus relations" whose nature remains unclear. In addition, the soft shift defined in Eq. (14) gives a nicely on-shell regulator for the soft limit of the amplitude. Precise knowledge of the soft limit can uniquely fix effective theories [17], and might actually be useful in the recursive construction of amplitudes, as we will discuss in [18]. Finally, given a more complete understanding of on-shell constructibility at tree-level, we are better equipped to attack a much more difficult problem, which is developing a recursive construction for the loop integrands of general quantum field theories. This was accomplished for amplitudes in planar N = 4 SYM [19], but with a procedure not obviously generalizable for less symmetric theories, where standard BCFW recursion induces ill-defined contributions in the forward limit.
In principle, this somewhat technical obstruction might be eliminated by considering alternative momentum shifts. | 7,796.6 | 2015-02-17T00:00:00.000 | [
"Physics"
] |
Silk-Cellulose Acetate Biocomposite Materials Regenerated from Ionic Liquid
The novel use of ionic liquid as a solvent for biodegradable and natural organic biomaterials has increasingly sparked interest in the biomedical field. As compared to more volatile traditional solvents that rapidly degrade the protein molecular weight, the capability of polysaccharides and proteins to dissolve seamlessly in ionic liquid and form fine and tunable biomaterials after regeneration is the key interest of this study. Here, a blended system consisting of Bombyx Mori silk fibroin protein and a cellulose derivative, cellulose acetate (CA), in the ionic liquid 1-ethyl-3-methylimidazolium acetate (EMIMAc) was regenerated and underwent characterization to understand the structure and physical properties of the films. The change in the morphology of the biocomposites (by scanning electron microscope, SEM) and their secondary structure analysis (by Fourier-transform infrared spectroscopy, FTIR) showed that the samples underwent a wavering conformational change on a microscopic level, resulting in strong interactions and changes in their crystalline structures such as the CA crystalline and silk beta-pleated sheets once the different ratios were applied. Differential scanning calorimetry (DSC) results demonstrated that strong molecular interactions were generated between CA and silk chains, providing the blended films lower glass transitions than those of the pure silk or cellulose acetate. All films that were blended had higher thermal stability than the pure cellulose acetate sample but presented gradual changes amongst the changing of ratios, as demonstrated by thermogravimetric analysis (TGA). This study provides the basis for the comprehension of the protein-polysaccharide composites for various biomedical applications.
Introduction
The use of biodegradable polymer or biopolymer materials has been of great interest in the past decades due to the growing environmental problems posed by nonbiodegradable and petroleum-based materials. The depletion of fossil resources such as coal and natural gas and the impact of the energy crisis is becoming more severe, as seen from the ever-fluctuating price of crude oil [1]. As a response, the heightened interest in the search for renewable resources has paved the way for research into biocomposites for their biodegradability and eco-friendliness. A biocomposite is usually made of a natural biopolymer matrix and additional reinforcement element(s) to produce a composite material with enhanced properties [1]. The allure of a more inexpensive and biocompatible option as compared to the manufacturing of cost-exhaustive synthetic polymers are also a welcomed reason for their replacement [2,3]. The various applications for natural biomaterials are superior for use in the medical field as these materials are more commercially attractive and provide enhanced compatibility within the human body [4].
As a naturally occurring biopolymer, silk is a fibrous protein produced from the larvae of Bombyx Mori silkworms. This insect is critical to producing much of the world's supply of silk. The primary protein fibers that silk consists of are fibroin and sericin, and its secondary structure consists almost entirely of beta-pleated sheets [5,6]. Raw silk is usually processed via a degumming procedure involving boiling the cocoons in an alkaline sodium carbonate solution to remove the water-soluble sericin layer. This degummed silk product is extremely pliable, as it can be conformed into various forms, such as gels, scaffolds, nanofibers, and composite materials. Silk is also a very tough natural fiber due to it consisting mainly of the amino acid glycine, which allows for fibers to be tightly packed together and endure less steric hindrance [4,5,7]. In the clinical setting, silk-based biomaterials are widely used due to their ease of processing, remarkable biocompatibility, adjustable degradation rates, and permeability in water and oxygen [3]. Some applications of silk include usage of anticoagulants, prosthetics, hygienic products, and arterial grafting [3,7,8]. Although silk has many applications as green or biological materials, its chemical and physical nature requires the addition of a second soft natural polymer, in this case cellulose acetate for a more structurally stable material [9].
Cellulose possesses strong inter and intra-molecular hydrogen bonding which provides the structural rigidity found in all trees and plants [10][11][12]. This biopolymer is very easy to extract, is sustainable, and has superior biocompatibility, which makes it suitable to be implemented in wound dressings. Its molecular structure contains repeating D-glucose units bonded via glycosidic linkages allowing them to fall into a tightly packed crystalline form [10,12]. Hydrophilic in nature, cellulose's highly crystalline structure renders it insoluble in water and in many commonly used organic solvents [13]. Of the cellulose derivatives, cellulose acetate (CA) is the most studied derivative due to its chemical resistance, stability, and solubility in many organic solvents [10,11]. CA exhibits an excellent biocompatibility as a promising material suitable for immobilization of biological compounds. It has been utilized in the production of eyeglass frames, cigarette filters, and post-burn skin protectants, as well as for cardiac tissue engineering, and as a semi-permeable osmotic pump in drug-delivery systems [7,14]. Due to its low tensile moduli and solubility, CA biomaterials are enhanced when combined with other natural biopolymers, specifically silk in this case. Combining these polymers will ensure the desired products exhibit strength and flexibility; however, the proper ratios of each biomaterial must be adjusted to ensure the desired physical properties are induced [4,5].
The addition of cellulose acetate to silk means that the next palpable step is to find a solvent that will preserve the chemical characteristics of the materials. Studies have shown that ionic liquids (ILs) are adequate compounds that could successfully carry out this task [12,13,15]. Ionic liquids are charged molecules that exist as liquids at room temperatures [16]. ILs are often defined as molten salts or even liquid electrolytes with melting points below 100 • C. ILs consist of cations and anions, which differ in size and possess conformational flexibility. In such salts, crystallization is impeded by a low Gibbs free energy of crystallization, which ultimately translates into low melting points. ILs are a much safer alternative solvent to organic ones since they are thermally stable, non-flammable, and have low volatility [12]. It is also known that ionic liquids can stabilize proteins and maintain their molecular weights. Specifically, ionic liquid mediated hydrogen bonds prevent breakdown of protein structure, even when surpassing extreme temperature thresholds [7,14]. These novel properties of ILs lead to its use as a solvent for several different biomaterial systems based on proteins, polysaccharides, and their composites [12,.
Among the considerable number of varying ILs explored today, only a minority can dissolve cellulose effectively [38]. Specifically, a small number of suitable anions are possible for the dissolution of both protein polymers and cellulose-based polymers. In this study, the ionic liquid 1-ethyl-3-methylimidazolium acetate (EMIMAc) was used to dissolve silk and cellulose-acetate, where the acetate anion is known to act as a catalyst in the ring opening reaction of cellulose [38]. Thin films consisting of silk and cellulose acetate ratios were made and characterized using the Fourier-transform infrared spectroscopy (FTIR). Thermal properties were analyzed using differential scanning calorimetry (DSC) and thermal gravimetric analysis (TGA). A high-resolution image of surface topography and composition of these films, on a microscopic level, was produced via the implementation of a scanning electron microscope (SEM). Results of these different experimental analyses will produce viable information on the ability to deploy silk cellulose-acetate biomaterials into the fields of biomedical and sustainable material engineering.
Raw Materials
Bombyx mori silk cocoons were purchased from Treenway Silks (Lakewood, CO, USA). The silk cocoons were boiled in 0.02 M NaHCO 3 obtained from Sigma Aldrich USA (CAS#: 144-55-8) for 15 min, then washed three times in deionized water baths in order to remove sericin proteins and extract the silk fibroin. Following this, the silk fibers were dried in a fume hood for 48 h. Cellulose acetate powder (CAS#: 9004-35-7) and 1-ethyl-3methylimidazolium acetate (EMIMAc) (CAS#: 143314-17-4) were purchased from Sigma Aldrich Co., LTD (St. Louis, MO, USA). Methanol was purchased from Sigma Aldrich USA (CAS#: 67-56-1). Prior to being used as a solvent, EMIMAc was placed in a vacuum oven at 60 • C for 24 h to fully remove any residual water moisture. All the substances that were used for the chemical analysis were analytical grade.
Film Preparation
For preparing composite film samples, a total of 3 g of the blended ionic liquid EMIMAc with solids (raw materials) was implemented, in which the ratios required 90% ionic liquid to 10% of solid materials. Once the ratio of ionic liquid to solids had been determined, the selected solid materials would be submerged and dissolved into the ionic liquid. A total of seven weight ratios were selected as the solid materials: 100% Cellulose Acetate (CA100), 90% Cellulose Acetate-10% Silk (CA90S10), 75% Cellulose Acetate-25% Silk (CA75S25), 50% Cellulose Acetate-50% Silk (CA50S50), 25% Cellulose Acetate-75% Silk (CA25S75), 10% Cellulose Acetate-90% Silk (CA10S90), and 100% Silk (Silk100). The ionic liquid was then fully submerged into a silicon oil bath, ranging from 70-80 degrees Celsius, prior to adding the solids. This was then followed-up with a 24-h mixing period [1,4]. This process is shown below in Figure 1.
The solids were added in the order of proteins first, followed by carbohydrates. Once the 24-h heating period was completed, all the ionic liquids were removed from the biocomposite films via a coagulation bath. The coagulation bath used in this study was methanol. For the water coagulation baths, the samples had dissolved partially during the process and were not able to be dried to form composite films. Thus, only samples washed in methanol were able to form solid biomaterials and investigated in this study. The samples were continually washed inside of the methanol coagulation baths for 48 h. After 48 h, the film samples were removed from the methanol coagulation baths and placed in a vacuum for 24 h to dry the films.
Fourier Transform Infrared Spectroscopy (FTIR)
FTIR analysis of the silk and cellulose acetate films was conducted by using a Bruker Tensor 27 Fourier Transform Infrared Spectrometer (Billerica, MA, USA). The FTIR spectrometer had an addition of a triglycine sulfate detector and a multiple reflection, horizontal MIRacle ATR attachment (using a Ge crystal, from Pike Tech. (Madison, WI, USA)). For each sample measurement, a total of 64 background scans and 64 sample scans were taken from the 4000 cm −1 to 400 cm −1 range at a resolution 2 cm −1 . To ensure a homogeneous distribution in the films, samples were taken from multiple spots and sides in triplicate at room temperature (~20 • C). The Ge crystal was cleaned with ethanol and dried between samples. To process the results, spectra from each sample were analyzed using the OPUS software [16,39]. The solids were added in the order of proteins first, followed by carbohydrates. Once the 24-h heating period was completed, all the ionic liquids were removed from the biocomposite films via a coagulation bath. The coagulation bath used in this study was methanol. For the water coagulation baths, the samples had dissolved partially during the process and were not able to be dried to form composite films. Thus, only samples washed in methanol were able to form solid biomaterials and investigated in this study. The samples were continually washed inside of the methanol coagulation baths for 48 h. After 48 h, the film samples were removed from the methanol coagulation baths and placed in a vacuum for 24 h to dry the films.
Fourier Transform Infrared Spectroscopy (FTIR)
FTIR analysis of the silk and cellulose acetate films was conducted by using a Bruker Tensor 27 Fourier Transform Infrared Spectrometer (Billerica, MA, USA). The FTIR spectrometer had an addition of a triglycine sulfate detector and a multiple reflection, horizontal MIRacle ATR attachment (using a Ge crystal, from Pike Tech. (Madison, WI, USA)). For each sample measurement, a total of 64 background scans and 64 sample scans were taken from the 4000 cm −1 to 400 cm −1 range at a resolution 2 cm −1 . To ensure a homogeneous distribution in the films, samples were taken from multiple spots and sides in triplicate at room temperature (~20 °C). The Ge crystal was cleaned with ethanol and dried between samples. To process the results, spectra from each sample were analyzed using the OPUS software [16,39].
Differential Scanning Calorimetry (DSC)
Roughly 6 mg of thin film samples were enclosed in an aluminum pan and pressed closed to prepare for DSC analysis. A Q100 DSC (TA Instruments, New Castle, DE, USA) equipped with refrigerated cooling system was used, with 50 mL/min of nitrogen purge gas flowed through the sample chamber. Prior to use, the instrument was calibrated with an indium crystal for heat flow and temperature, while aluminum and sapphire standards calibrated the heat capacity. Temperature-modulated differential scanning calorimetry (TMDSC) measurements were performed at a heating rate of at 2 °C/min with a modulation period of 60 s and temperature amplitude of 0.318 K, from −40 °C to 400 °C. The Lissajous figures of modulated heat flow vs. modulated temperature were also plotted to check the establishment of steady state. This gave data regarding the heat flow and re-
Differential Scanning Calorimetry (DSC)
Roughly 6 mg of thin film samples were enclosed in an aluminum pan and pressed closed to prepare for DSC analysis. A Q100 DSC (TA Instruments, New Castle, DE, USA) equipped with refrigerated cooling system was used, with 50 mL/min of nitrogen purge gas flowed through the sample chamber. Prior to use, the instrument was calibrated with an indium crystal for heat flow and temperature, while aluminum and sapphire standards calibrated the heat capacity. Temperature-modulated differential scanning calorimetry (TMDSC) measurements were performed at a heating rate of at 2 • C/min with a modulation period of 60 s and temperature amplitude of 0.318 K, from −40 • C to 400 • C. The Lissajous figures of modulated heat flow vs. modulated temperature were also plotted to check the establishment of steady state. This gave data regarding the heat flow and reversing heat capacity versus the temperature. This test was then run with the seven different samples that were produced [5,39]. In order to confirm the reliability of the experiment, the samples were tested three times for each condition.
Thermal Gravimetric Analysis (TGA)
Thermogravimetric analysis (TGA) of composite films was investigated with a TA Instruments Q600 SDT instrument (Wilmington, DE, USA). The TGA had a precision balance with a small ceramic pan inside of the furnace. The furnace temperature was controlled to increase the temperature from 25-800 • C at a rate of 10 • C/min. Nitrogen purge gas was used at a rate of 50 mL/min. The mass of the samples was measured over time with regards to changing temperatures in order to measure the thermal stability of the samples [39,40]. In order to confirm the reliability of the experiment, each type of sample was tested three times.
Scanning Electron Microscopy (SEM)
A FEI VolumeScope™ SEM (Hillsboro, OR, USA) was used in assessing the morphology of the bio-composites. The FEI SEM implements four different beam currents that are directed towards the sample of interest. This allows the SEM to show the morphology of the blended films with details on the microscopic level. The samples were placed on SEM holders and held into place with circular conducting tape. They were then coated by a thin layer of gold in the Denton Vacuum Desk sputtering machine for a spell of 10~90 s. The samples were then placed into the SEM and prepared for imaging at room temperature under high vacuum. Experiments were conducted with an accelerating voltage ranging between 10 and 20 kV.
Morphological Analysis
The physical appearance of the bulk silk-cellulose acetate bio-composite films displayed a common trend. The pure silk sample was smooth and unyielding while the pure cellulose acetate sample was thin and flexible, confirming cellulose acetates more pliable properties. The physical properties of composite materials can be adjusted depending on the amount of CA added to the silk film. The topography of the silk dominated composite films (CA10S90, CA25S75) displayed ridges and grooves while the cellulose acetate dominated films (CA90S10, CA75S25) displayed a gradual increase in ridges and grooves when mixed with silk. The silk dominated films were brittle yet also strong and with increasing cellulose acetate showed more flexibility.
To further investigate the surface morphology of composite films at microscale, the methanol-coagulated samples were analyzed by SEM, as shown in Figure 2. For the composite films, significant surface morphology change was observed as compared to the pure samples shown in Figure 2. The pure cellulose acetate sample (CA100) had a homogeneous surface with smooth and continuous ridging. The pure silk sample (Silk100) also displayed a homogenous and smoother surface. In Figure 2, all composite films become rougher generally on the microscopic scale. With just 10% of cellulose acetate added, the 90% silk sample (CA10S90) has a slightly porous and rough topography. The 75% silk (CA25S75) showed a similar appearance and the 50% sample (CA50S50) had intermittent smooth sections formed amongst the roughness. As cellulose acetate began to dominate in the sample (CA75S25), the topography showed a smoother surfaced cobble stone or closely forming bubbled appearance. The 90% CA dominated film (CA90S10) showed a similar topography to its 100% CA (CA100) counterpart with a more continuous and smoother appearance with slight cracking. The trend seems to indicate that with just small increments of CA added, the surface can undergo a drastic change in topography from smooth to rough and porous, leading to potential in applications regarding organic filters and cell culture growth studies.
Structural Analysis
Structural changes in silk-cellulose acetate films after methanol coagulation were confirmed by FTIR and shown in Figure 3. The IR spectral region within 1700-1600 cm −1 is assigned to the peptide backbone of amide I (1700-1600 cm −1 ) and amide II (1600-1500 cm −1 ) absorption ( Figure 3A), which have been commonly used for the analysis of different secondary structures of silk fibroin proteins. The peaks at 1630-1610 cm −1 (amide I)
Structural Analysis
Structural changes in silk-cellulose acetate films after methanol coagulation were confirmed by FTIR and shown in Figure 3. The IR spectral region within 1700-1600 cm −1 is assigned to the peptide backbone of amide I (1700-1600 cm −1 ) and amide II (1600-1500 cm −1 ) absorption ( Figure 3A), which have been commonly used for the analysis of different secondary structures of silk fibroin proteins. The peaks at 1630-1610 cm −1 (amide I) and 1520-1510 cm −1 (amide II) are characteristic of silk II structure (dominated by betasheets) [41,42]. Pure silk film (CA0S100) is dominated by beta-sheet crystalline structure (around 1620 cm −1 ). The addition of cellulose acetate increased the alpha-helical structure (around 1650 cm −1 ) from beta-sheet formation, probably due to the hydrogen bonding of acetate with the protein chains of silk as seen in Figure 3A, while pure cellulose acetate (CA100S0) did not show strong absorbance in this region.
Structural Analysis
Structural changes in silk-cellulose acetate films after methanol coagulation were confirmed by FTIR and shown in Figure 3. The IR spectral region within 1700-1600 cm −1 is assigned to the peptide backbone of amide I (1700-1600 cm −1 ) and amide II (1600-1500 cm −1 ) absorption ( Figure 3A), which have been commonly used for the analysis of different secondary structures of silk fibroin proteins. The peaks at 1630-1610 cm −1 (amide I) and 1520-1510 cm −1 (amide II) are characteristic of silk II structure (dominated by betasheets) [41,42]. Pure silk film (CA0S100) is dominated by beta-sheet crystalline structure (around 1620 cm −1 ). The addition of cellulose acetate increased the alpha-helical structure (around 1650 cm −1 ) from beta-sheet formation, probably due to the hydrogen bonding of acetate with the protein chains of silk as seen in Figure 3A, while pure cellulose acetate (CA100S0) did not show strong absorbance in this region. Further analysis on the structural composition of silk-cellulose acetate films was characterized within the region of 1050-1000 cm −1 , as shown in Figure 3B. This region is often chosen as the reference characteristic peak due to its stable position and intensity even during CA acetylation [43]. Pure cellulose acetate films show a dominant peak at 1031 cm −1 , which is attributed to the C-O-C stretching vibration in the backbone of anhydroglucose units [43]. The addition of silk proteins splits the peak into three peaks around 1054, 1032, and 1020 cm −1 . Meanwhile, with the increase of silk content, the intensity of the left peak at 1054 cm −1 gradually weakened, and the intensity of the right peak around 1020 cm −1 gradually increased. The position of the right peak also shifts to the right from 1020 to 1013 cm −1 as the cellulose acetate content increases in the films. These spectral changes may be another indicator of the silk-CA molecular interactions in the composite films. Although these hydrogen bond interactions mainly occur on the side chain groups, the C-O-C stretching vibration in the CA backbone is also affected.
Thermal Analysis
Temperature-modulated DSC (TMDSC) was performed to further understand thermal properties of the silk-cellulose acetate composite films, and its results are shown in Figure 4 and Table 1. When observing Figure 4A, the solvent release temperature, T s , refers to the Polymers 2021, 13, 2911 7 of 11 release temperature where most of the bound water/solvent molecules evaporated through the heating process. The second label, T d , refers to the degradation temperature, at which the sample begins to thermally degrade because of the heating. It is notable that at the point of degradation (T d ) of the 100% silk sample, there is only one defined peak, but when the cellulose acetate is added to the silk film, that single degradation peak turns into two peaks (T d1 , T d2 ). This indicates that each of the two individual samples still maintained their unique chemical properties of polymer backbones through the mixing in EMIMAc ionic liquid. However, the positions of the two degradation peaks (T d1 , T d2 ) are different from that of the individual pure silk or CA materials ( Table 1), suggestion that the strong molecular interactions (such as hydrogen bonds) exist between the silk and CA side chain groups. Figure 4B shows the reversing heat capacity scans of different silk-CA samples, which clearly demonstrated the glass transitions of the composites. Further analysis of these glass transition regions shows that with the increase of CA content, the glass transition temperature (T g ) of composites increases slightly, from 113.6 • C for CA10S90 film to 200.1 • C for pure cellulose acetate film (CA100). However, the T g of the pure silk films (Silk100) did not follow this trend, with a value of 178.5 • C, which is like the T g values found in silk films regenerated from water or organic solvents [41,42]. As previously mentioned in the FTIR analysis, the pure silk films contained the largest beta-sheet crystallinity, making them overall the most thermally stable of all composite films.
Although these hydrogen bond interactions mainly occur on the side chain groups, the C-O-C stretching vibration in the CA backbone is also affected.
Thermal Analysis
Temperature-modulated DSC (TMDSC) was performed to further understand thermal properties of the silk-cellulose acetate composite films, and its results are shown in Figure 4 and Table 1. When observing Figure 4A, the solvent release temperature, Ts, refers to the release temperature where most of the bound water/solvent molecules evaporated through the heating process. The second label, Td, refers to the degradation temperature, at which the sample begins to thermally degrade because of the heating. It is notable that at the point of degradation (Td) of the 100% silk sample, there is only one defined peak, but when the cellulose acetate is added to the silk film, that single degradation peak turns into two peaks (Td1, Td2). This indicates that each of the two individual samples still maintained their unique chemical properties of polymer backbones through the mixing in EMIMAc ionic liquid. However, the positions of the two degradation peaks (Td1, Td2) are different from that of the individual pure silk or CA materials (Table 1), suggestion that the strong molecular interactions (such as hydrogen bonds) exist between the silk and CA side chain groups. Figure 4B shows the reversing heat capacity scans of different silk-CA samples, which clearly demonstrated the glass transitions of the composites. Further analysis of these glass transition regions shows that with the increase of CA content, the glass transition temperature (Tg) of composites increases slightly, from 113.6 °C for CA10S90 film to 200.1 °C for pure cellulose acetate film (CA100). However, the Tg of the pure silk films (Silk100) did not follow this trend, with a value of 178.5 °C, which is like the Tg values found in silk films regenerated from water or organic solvents [41,42]. As previously mentioned in the FTIR analysis, the pure silk films contained the largest beta-sheet crystallinity, making them overall the most thermally stable of all composite films.
Thermal Stability
Additional analyses of the thermal properties of the silk-cellulose acetate films were conducted by TGA as shown in Figure 5. Table 1 also summarized several typical TGA parameters including onset temperature of decomposition, bound solvent content, degradation middle temperature (T dm ), and remaining mass at 400 • C. Each sample followed the same decomposition trend as the DSC results where the pure silk sample maintained the best thermal stability after undergoing higher temperatures as compared to the blended films. When silk is dominant, the blend samples have a greater onset decomposition temperature, and the degradation middle temperature (T dm ) is also higher. Once the concentration of cellulose acetate is equal to or greater than that of silk, the remaining mass at 400 • C is much lower, indicating that cellulose acetate component greatly reduces the thermal stability of the blended samples at high temperatures. peak temperature, and degradation peak temperature of different silk-CA films, respectively.
Thermal Stability
Additional analyses of the thermal properties of the silk-cellulose acetate films were conducted by TGA as shown in Figure 5. Table 1 also summarized several typical TGA parameters including onset temperature of decomposition, bound solvent content, degradation middle temperature (Tdm), and remaining mass at 400 °C. Each sample followed the same decomposition trend as the DSC results where the pure silk sample maintained the best thermal stability after undergoing higher temperatures as compared to the blended films. When silk is dominant, the blend samples have a greater onset decomposition temperature, and the degradation middle temperature (Tdm) is also higher. Once the concentration of cellulose acetate is equal to or greater than that of silk, the remaining mass at 400 °C is much lower, indicating that cellulose acetate component greatly reduces the thermal stability of the blended samples at high temperatures.
The pure silk and pure cellulose samples tended to be on the opposite ends of the spectrum where silk is most stable and pure cellulose acetate was weakest. This could be due to the hydrophobic nature of cellulose acetate. The combination of cellulose acetate with a small amount of silk leads to a significant shift in the degradation middle temperature (Tdm) peak to a higher degree (as seen in Figure 5B). This suggests more thermal stability can be easily achieved with just 10~25% of B. Mori silk added into the composites. The pure silk and pure cellulose samples tended to be on the opposite ends of the spectrum where silk is most stable and pure cellulose acetate was weakest. This could be due to the hydrophobic nature of cellulose acetate. The combination of cellulose acetate with a small amount of silk leads to a significant shift in the degradation middle temperature (T dm ) peak to a higher degree (as seen in Figure 5B). This suggests more thermal stability can be easily achieved with just 10~25% of B. Mori silk added into the composites.
Mechanism
Based on the results outlined above, the proposed mechanism for silk-cellulose acetate biocomposites can be seen in Figure 6. Immediately after dissolution in ionic liquid, cellulose acetate takes on a more disordered structure and the molecular chains are expanded, ready to interact with silk molecules. Meanwhile silk fibroin natural fibers are also dissolved in the selected ionic liquid (EMIMAc), and the beta sheet crystals are disassembled into soluble structures such as random coils, alpha helix, and beta turns. After washing by methanol, both pure cellulose acetate and silk materials can regain their ordered molecular structure, confirmed by FTIR (e.g., silk molecules will form dominant insoluble beta-sheet crystals in the structure). With the increase of the cellulose acetate content in the composite material, the possibility of the formation of hydrogen bonds between the CA chain and the silk chain increases, so more α-helical structures are formed, which inhibits the formation of β-sheet crystals in the biocomposites. According to the results of FTIR and thermal analysis, during this process, silk and CA molecules were successfully mixed without immiscible phase separation, which significantly improved the stability of the composite structure.
insoluble beta-sheet crystals in the structure). With the increase of the cellulose acetate content in the composite material, the possibility of the formation of hydrogen bonds between the CA chain and the silk chain increases, so more α-helical structures are formed, which inhibits the formation of β-sheet crystals in the biocomposites. According to the results of FTIR and thermal analysis, during this process, silk and CA molecules were successfully mixed without immiscible phase separation, which significantly improved the stability of the composite structure. Figure 6. Proposed structural mechanism for the mixing of silk and cellulose acetate using EMIMAc as the solvent.
Conclusions
Using EMIMAc as an ionic liquid solvent, Bombyx mori silk and cellulose acetate are both able to be dissolved into the same solvent to form composite biomaterials with the benefits of each individual biopolymer. Various ratios of silk to cellulose acetate were able to be fabricated into composites, with tunable structures confirmed by characteristic FTIR peaks of both cellulose acetate and silk in the blended samples. The effects of methanol as a coagulation agent were observed in the structural analysis of the composites. Both FTIR and SEM reveal how the crystallinity and morphology of the composite varies with ratio of silk to cellulose. These effects also affect the thermal properties of the composites, where Figure 6. Proposed structural mechanism for the mixing of silk and cellulose acetate using EMIMAc as the solvent.
Conclusions
Using EMIMAc as an ionic liquid solvent, Bombyx mori silk and cellulose acetate are both able to be dissolved into the same solvent to form composite biomaterials with the benefits of each individual biopolymer. Various ratios of silk to cellulose acetate were able to be fabricated into composites, with tunable structures confirmed by characteristic FTIR peaks of both cellulose acetate and silk in the blended samples. The effects of methanol as a coagulation agent were observed in the structural analysis of the composites. Both FTIR and SEM reveal how the crystallinity and morphology of the composite varies with ratio of silk to cellulose. These effects also affect the thermal properties of the composites, where silk-heavier samples have a higher degradation temperature due to the higher beta-sheet content of silk. Of special note related to the use of ionic liquids, the composite samples exhibit single glass transition temperatures and changing thermal degradation peaks as seen in the DSC analysis, indicating strong interactions between silk and cellulose acetate molecules. This study proves the potential of EMIMAc as a solvent for Bombyx mori silk and cellulose acetate, and how its regeneration in methanol results in practical biocompatible films for various applications.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. | 7,361.2 | 2021-08-29T00:00:00.000 | [
"Agricultural And Food Sciences",
"Materials Science"
] |
Quadratic variance models for adaptively preprocessing SELDI-TOF mass spectrometry data
Background Surface enhanced laser desorption/ionization time-of-flight mass spectrometry (SELDI) is a proteomics tool for biomarker discovery and other high throughput applications. Previous studies have identified various areas for improvement in preprocessing algorithms used for protein peak detection. Bottom-up approaches to preprocessing that emphasize modeling SELDI data acquisition are promising avenues of research to find the needed improvements in reproducibility. Results We studied the properties of the SELDI detector intensity response to matrix only runs. The intensity fluctuations and noise observed can be characterized by a natural exponential family with quadratic variance function (NEF-QVF) class of distributions. These include as special cases many common distributions arising in practice (e.g.- normal, Poisson). Taking this model into account, we present a modified Antoniadis-Sapatinas wavelet denoising algorithm as the core of our preprocessing program, implemented in MATLAB. The proposed preprocessing approach shows superior peak detection sensitivity compared to MassSpecWavelet for false discovery rate (FDR) values less than 25%. Conclusions The NEF-QVF detector model requires that certain parameters be measured from matrix only spectra, leaving implications for new experiment design at the trade-off of slightly increased cost. These additional measurements allow our preprocessing program to adapt to changing noise characteristics arising from intralaboratory and across-laboratory factors. With further development, this approach may lead to improved peak prediction reproducibility and nearly automated, high throughput preprocessing of SELDI data.
Background
Mass spectrometry is a promising technology for biomarker discovery [1]. There are a wide variety of mass spectrometers from which one could choose from during the design of a biomarker discovery experiment, reviewed in [2]. Matrix assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS, or just MALDI) can ionize whole proteins intact over a wide range of protein mass values, making it suitable for biomarker discovery in complex media such as blood serum, where both protein concentrations and masses vary greatly [3]. Surface-enhanced laser desorption/ionization time-of-flight mass spectrometry (SELDI-TOF MS, or just SELDI) [4] is a variant of MALDI that adds an on-chip chromatographic separation step at the front end of the analysis pipeline. This, combined with robot-automated sample preparation, enables SELDI to be high-throughput, an attractive feature for many laboratories. For a recent review of the application of SELDI in the context of biomarker discovery, see [5].
The typical SELDI work flow involves the collection of samples (e.g.-blood serum) from patients, application of the samples to SELDI ProteinChips® selected for desired physicochemical properties, and analysis in the SELDI mass spectrometer. The raw data must be preprocessed to detect relevant peaks which correspond to proteins in the sample. Typical signal preprocessing steps performed are spectral alignment, denoising/smoothing, peak detection, peak matching, normalization, and quantification (see Figure 1 of [6]). The preprocessing of the raw SELDI spectra is typically accomplished using one of several available software packages (reviewed in [6][7][8]). Artifacts due to insufficient preprocessing of the data have, in the worst case, led to erroneous biological conclusions in early SELDI studies [9][10][11]. This fact inspired several important comparison studies of SELDI preprocessing algorithms [6][7][8]12]. We now briefly summarize a few of the major contributions. For a more detailed overview, see the introduction of [6].
Coombes et al introduced the use of wavelets for denoising SELDI spectra [13], providing a more adaptive approach to denoise compared to moving average filters (e.g., as in [14]). Meanwhile, Morris et al introduced the notion of a mean spectrum, which represents average protein activity of a group of spectra. Under non-restrictive assumptions, the mean spectrum has less noise and allows one to circumvent complicated peak matching algorithms that consolidate peak predictions among individual spectra into a consensus prediction. Malyarenko et al introduced a novel baseline removal algorithm based on a proposed charge accumulation model of the saturation phenomenon of the detector [15]. This was one of the first algorithms that was designed from the "bottom-up", starting with physical considerations of SELDI. Later, deconvolution filters were shown to be a possible approach for improving mass resolution of SELDI [16][17][18].
Sköld et al analyzed single-shot spectra [19], the basic components of a final SELDI spectrum obtained by summing the results of many laser shots. They suggested that the observed counts in the single shot spectra may be proportional to a Poisson random variable, proposing a heteroscedastic model for the data. Meuleman et al also make use of single-shot spectra (sub-spectra) to derive a preprocessing algorithm based on analyzing these components separately [20].
In an attempt to improve on the bottom-up approach to preprocessing, we analyze the statistics of the SELDI signal over a wide range of intensity values. Based on data presented herein, we propose a natural exponential family model with quadratic variance function for the statistics of the detector response for SELDI experiments. We believe this model is a plausible explanation for acquisition of single-shot spectra, summing of single-shot spectra into a final spectrum, and extracting protein estimates from a mean spectrum under a unified framework. Under this framework, we introduce a new preprocessing approach, adaptive to changing noise characteristics per spectrum and per experiment, and show favorable peak prediction performance.
Buffer-only intensity measurements
Electronic measurements exhibit natural random fluctuations [21]. In many cases, these fluctuations are independent of the signal and are modeled as additive white Gaussian noise. In order to understand the nature of the noise fluctuations inherent to SELDI, we study the response of the detector under controlled experiments applying different buffers instead of protein samples under varying laser intensities (as in [22]). This eliminates the complexity introduced by adding serum to the chips while facilitating measurements of ion counts over a wide range of intensity values. In principle, this gives us a set of n repeated experiments from which we can study the statistics of the detector response compounded with noise and interference inherent to SELDI. In this fashion, we have generated two separate buffer + matrix datasets, denoted BUFFER1 and BUFFER2, which represent data generated on the same SELDI PBS IIc machine by different scientists and different machine parameters. BUFFER1/BUFFER2 contain 183/114 spectra, respectively.
We visualize all of the spectra in BUFFER1 and BUF-FER2 in Figure 1. In particular, we are interested in analyzing the region between 3 and 30 kDa, since this is the mass focusing region in our experiments. In this region, the observations across spectra for a fixed time (mass) point represent approximately independent, identicallydistributed measurements in BUFFER1 or BUFFER2, respectively. Figure 1 shows the median, 75% quantile, and 25% quantile of BUFFER1 and BUFFER2. The median spectrum shows the form of an ordinary measurement, with any measurement between the 75% and 25% spectrum lines considered typical as well. Figure 1 shows us the behavior of the typical buffer + baseline signal component seen in all SELDI raw spectra. Indeed, we see that changing different machine settings leads to different response properties. For BUFFER2, the median spectral response is large in the range shown, and the distribution of responses is symmetric about the median, whereas the distribution of detector response values for BUFFER1 are heavily skewed, and thus certainly not normally distributed.
We study the detector response (intensity output) for SELDI under varying input conditions, creating a detector response curve as follows. For each fixed time (mass) point across spectra from BUFFER1 in the mass focused region [3kDa; 30kDa], we estimate the mean intensity observed and the corresponding variance, with the same repeated for BUFFER2. These are displayed as a scatter plot in Figure 2 along with the best fit quadratic curve. Observing Figure 2 we see 1. Intensity fluctuation/variance increases monotonically with the mean. 2. The variance of the detector response is a quadratic function of the mean, to a very good approximation. 3. The detector response curves for BUFFER1 and BUFFER2 are quite different, and thus are dependent on the machine settings.
The detector response statistics thus exhibit a quadratic variance function. Briefly, a random variable X is said to have a quadratic variance function (QVF) if with μ being the mean of X, V(μ) the variance, and υ 0 , υ 1 , υ 2 constants, some of which may be zero.
From these observations, summarized in Figures 1 and 2, it seems unlikely that an algorithm optimized for BUFFER1 would work well on BUFFER2 and vice versa. Further, neither a homoscedastic approach (e.g. -standard wavelet shrinkage [23]) or a simple heteroscedastic approach (e.g. -Poisson regression formulation [24]) to preprocessing the data is likely to be sufficient.
Data for evaluating preprocessing algorithms
We have generated two new datasets for evaluating preprocessing algorithms in order to improve upon purely simulation-based datasets used in previous comparison studies [6,7]. A good comparison dataset should have the following properties (discussed previously in [6]): 1. Exact protein content is known (and thus expectation of where "true" peaks will appear) 2. Analyzed sample is complex containing many proteins/peaks 3. Noise and baseline characteristics should be as close to those of real SELDI data as possible.
If one uses simulated data [6,7,25], complete control can be attained over requirements 1) and 2) at the expense of having noise/baseline characteristics that are overly ideal. If one uses purely real data, the noise, baseline, and artifacts that arise in actual experiments are present. However, this usually accompanies the trade-off of either not knowing the exact protein content (e.g.-complex serum data) or an overly simplified scenario (e.g. -spike-in data). Figure 1 Quantile spectrum visualization of BUFFER1 and BUFFER2 datasets. Quantile spectrum visualizations for all 183/114 spectra from BUFFER1/BUFFER2 datasets respectively. The middle, upper, and lower spectra are the 50% (median), 75%, and 25% quantile spectra respectively, calculated pointwise for each mass point. The results show that different machine settings give rise to different statistical behavior of the intensity values registered at the detector. Preprocessing techniques should be able to adapt to this varying behavior.
We combine the advantages of purely simulated and real data by introducing the notion of a hybrid spectrum. To generate a hybrid spectrum, we use an implementation of the SimSpec 2.1 SELDI simulator [25,26] http://bioinformatics.mdanderson.org/Software/Cromwell/simspec.zip to generate a "clean" SELDI spectrum, shown at the top of Figure 3. This gives an accurate peak shape characteristic as would be seen in low resolution SELDI/MALDI for given mass and ion abundance values, without any electronic noise or baseline present. We then select one of our buffer + matrix spectra (from either BUFFER1 or BUFFER2) and add the two together to produce the hybrid spectrum shown at the bottom of Figure 3. Thus, in a hybrid spectrum we know the exact virtual protein content specified to the simulator a priori while maintaining exactly the same noise, baseline, and other artifacts one encounters with real SELDI data.
Further details on the hybrid spectra can be found in the Methods section and in Additional file 1. The collection of hybrid spectra under different operating conditions results in test sets, denoted HYBRID1 and HYBRID2, with each test set containing thirty datasets of fifty hybrid spectra each. The mean performance of a preprocessing algorithm on HYBRID1 and HYBRID2 can be interpreted as the expected performance of the preprocessing approach in each separate operating condition in a repeated experiment or sampling from a homogeneous population (e.g. -cancer group or control group).
New preprocessing algorithms for SELDI
We have developed a set of MATLAB® scripts for preprocessing SELDI spectra named LibSELDI. For information on how to obtain LibSELDI and the associated Figure 2 SELDI detector intensity response curves. For repeated experiments under homogeneous machine settings, the variance in intensities observed is shown to be quadratic in the mean intensity observed. Thus, peaks occurring in areas of the spectrum affected near the baseline will be more noisy and more difficult to detect. Most algorithms for preprocessing SELDI data assume constant variance, independent of signal intensity. The detector response curve is shown to be dependent on machine settings, as it is different for BUFFER1 and BUFFER2. scripts used to produce the figures in this paper, contact the authors. We compare our preprocessing package to the MassSpecWavelet package from the Bioconductor project [27]. MassSpecWavelet has been established as one of the best approaches in terms of peak finding in recent comparison studies [6,7], and has been downloaded > 6000 times in the past two years as of March 2010 http://bioconductor.org/packages/stats/bioc/Mass-SpecWavelet.html. Both packages have the advantage of having only one main user-adjusted parameter.
In order to compare the performance of each preprocessing program, we generate operating characteristic curves (OC curves) [6,20], one for each of the 30 datasets of HYBRID1 and HYBRID2, by varying the Peak Area threshold (LibSELDI) and signal-to-noise ratio threshold (Snr.Th in MassSpecWavelet) parameters in the programs. Code snippets showing how MassSpecWavelet was tested can be found in Additional file 1. This allows us to understand the trade-offs between false discovery rate (FDR) and sensitivity (TPR) achieved by each algorithm. The results for both the HYBRID1 and HYBRID2 collections are shown in Figure 4, where we have plotted the FDR-axis in log scale to emphasize the low FDR region which is usually of most interest in biomarker discovery applications. Note that, since both HYBRID1 and HYBRID2 are collections of datasets representing repeated trials (or equivalently a homogeneous population), the OC curves we show in Figure 4 are the mean OC curves across the 30 datasets for each.
The results show that LibSELDI tends to have a considerable advantage in the low FDR region, while MassSpec-Wavelet tends to have higher sensitivity for FDR > 25%. One way to summarize the performance of the algorithms is using the area under the OC curve for the FDR region Table 1, where we have normalized each score separately so that a perfect PAUC25 (likewise, PAUC50) score is 100.
In Figure 5, we show the specific operating characteristics for LibSELDI and MassSpecWavelet for Dataset 2 of HYBRID1. While both algorithms perform well, Lib-SELDI resolves more than 90 proteins correctly before making a mistake. Since operating characteristics show false discovery rate along the x-axis rather than false positive rate (as in the traditional ROC curves), they tend to penalize more when false predictions are made with very few true proteins found. Indeed, in this case MassSpecWavelet got its first protein prediction correct but its second prediction wrong, leading to the point at FDR = 50%, TPR = 7%. Thus, operating characteristics Figure 4 Trade off between sensitivity and false discovery rate for LibSELDI and MassSpecWavelet. Average loess-smoothed operating characteristics show the trade-offs between sensitivity (TPR) and false discovery rate (FDR) for HYBRID1 and HYBRID2. The mean loess-smoothed curve is indicated by the solid line, while the upper and lower dashed lines indicate the 75% and 25% quartile curves. The FDR axis is shown in log-scale to emphasize lower FDR values. LibSELDI demonstrates superior sensitivity compared to MassSpecWavelet on both datasets for FDR values less than about 25%. MassSpecWavelet has the advantage for FDR values greater than 25%. with false discovery rate along the x-axis enforce the principle of conservative decision making, rewarding approaches that are successful with their initial large threshold (conservative) predictions and penalizing those that make mistakes early.
At FDR values greater than 30%, MassSpecWavelet outperforms LibSELDI. However, this is at the expense of generally more promiscuous predictions, since Mass-SpecWavelet generates 586 potential protein predictions compared to 250 for LibSELDI.
Discussion
We posit that the detector response is a member of the Natural Exponential Family with Quadratic Variance Function (NEF-QVF), which is a proper subset of the exponential family of distributions [28]. Figures 1 and 2 show that assuming the detector response takes the form of a specific distribution is impractical, but that the detector response V(μ) has a QVF. The NEF-QVF family of distributions occur often in practice and have the following useful properties, characterized by Morris [28]: 1. If a random variable X NEF-QVF, it is completely specified by its variance function V(μ) 2. If X NEF-QVF, a, b constants then aX + b is also NEF-QVF 3. Additivity: If X 1 ; X 2 NEF-QVF, then X 1 + X 2 is NEF-QVF 4. Affine combinations of normal, Poisson, gamma, binomial, negative binomial, and generalized hyperbolic secant distributed random variables generate all possible distributions in the NEF-QVF family. There are some physical reasons as to why the NEF-QVF assumption could be reasonable as well. Some plausible justifications for the first two terms in Eq. (1) are: 1. Constant Term: This is possibly due to thermal noise (additive Gaussian noise) which is common to all electronic measurement devices [21]. 2. Linear Term: The ability to detect an ion in a multiple stage electron multiplier, a common type of detector in MALDI-like instruments, is described by compound Poisson statistics [29].
The existence of a plausible physical explanation for the quadratic variance term remains an open question. However its effect is measured in both BUFFER1 and BUFFER2 and cannot be neglected. While the QVF model explains the data well in the mass focused region between 3 and 30 kDa, it is likely to break down at lower masses around 2-2.5 kDA where the baseline reaches a maximum. In this region the detector often saturates, introducing a non-linearity into the data that we have not accounted for.
The success of our univariate model for SELDI may indicate that we have selected the most important feature to consider in the preprocessing of the data: namely, the fluctuations in the response of the ion detector subject to different inputs. The analysis of expression values of preprocessed data, on the other hand, requires multivariate methods as there are significant statistical dependencies between the peak heights corresponding to proteins that may be interacting. While these correlations are important in the analysis performed after the data is preprocessed, our results indicated it may be safe to ignore them during the preprocessing. While we have shown LibSELDI to be accurate for estimating peak m/z values, we have not assessed the usefulness approach for estimating peak intensities in this work. The utility of LibSELDI for accurately estimating peak intensities remains an open question and subject of future work.
It is entirely possible that the quadratic variance model could be applicable to other similar technologies such as MALDI and newer SELDI mass spectrometers. This, however, has not been confirmed.
Having buffer only spectra allows one to estimate the parameters of the detector response curve. Knowledge of the detector response curve enables us to apply the modified Antoniadis-Sapatinas denoising scheme described in the methods. Using this approach in our LibSELDI package yields excellent peak detection performance. We have proved this concept on HYBRID1 and HYBRID2 by estimating the QVF parameters of (1) using the buffer-only spectra that were randomly selected from BUFFER1 and BUFFER2 respectively. This implies that spots on SELDI chips should be reserved for buffer-only spectra. Thus, the trade-off for using our approach is increased cost in terms of the number of chips one must use. The modified Antoniadis-Sapatinas denoising is computationally intensive as well, taking approximately seven minutes per spectrum on a highend workstation.
We argue that some of the cost is recovered by the potential for adaptive and accurate preprocessing, but not all. It may be possible to use QC and/or calibration samples to estimate the QVF as well rather than bufferonly spots. However, this would add in some additional variation due to the nature of the medium (serum, plasma, etc).
While LibSELDI outperforms MassSpecWavelet on the HYBRID1 and HYBRID2 test sets, the applicability of this comparison and of these results to purely real data remains an open question. There is some basic biological variability modeled in our test sets (see description in supplement of [6]). However, data from complex biological samples such as serum or plasma likely contains more biological variation and artifacts than we have modeled in HYBRID1 and HYBRID2. The investigation of how biological variation affects the model in QC samples is a work in progress.
In addition to achieving a better mean OC curve at lower FDR values, LibSELDI consistently predicts fewer peaks than MassSpecWavelet, leading to protein predictions closer to the true number of proteins in the data, as shown in Figure 6. This is further evidence that the adaptive modified Antoniadis-Sapatinas denoising approach using the NEF-QVF model for the detector response is smoothing the spectra by close to the right amount.
Conclusions
We have shown that the variance of the intensity of a SELDI spectrum is quadratic in the mean signal strength. We further make the flexible assumption that the underlying distribution of the intensities is from a natural exponential family. From this point of view, we use a modified Antoniadis-Sapatinas wavelet shrinkage approach for denoising SELDI spectra. With this method at the core of our LibSELDI program for preprocessing SELDI data, we demonstrate excellent sensitivity at low false discovery rates. For applications that can tolerate higher false discovery rates, the MassSpec-Wavelet algorithm performs better in this region.
Our work has implications in the design of SELDI experiments. Namely, the modified Antoniadis-Sapatinas denoising technique performs well but requires an estimate of the quadratic variance function (QVF) describing the SELDI detector. This, in turn, is affected by machine settings. We have used buffer-only spectra to estimate the QVF. Thus, buffer-only spots could be interlaced on chips. We are investigating less expensive ways to estimate the QVF in future work.
Protocol for generating buffer-only spectra
Buffer-only spectra were generated by interspersing buffer only samples with protein samples from subjects (e.g. serum samples) and with pooled subject samples (for quality control) on the same chip. The buffer-only samples were spotted with wash buffer that was either PBS (phosphate buffered saline with various concentrations of phosphate and NaCl) based or acetonitrile + TFA (triflouroacetic acid) based, as manufacturer recommended per chip type. These buffer only samples were processed with the same washing steps as the subject samples, as described in [22], and then SPA matrix was applied to all spots.
The samples were analyzed with the Protein Biological System IIc™ SELDI mass spectrometer (Ciphergen Biosystems, Freemont, CA). The machine settings (e.g. laser intensity, detector sensitivity) and precise washing steps varied from buffer only spot to buffer only spot, and were generally different between BUFFER1 and BUF-FER2. Note especially that laser intensities were generally higher for BUFFER2 than for BUFFER1. A detailed list of machine settings is given in the Additional file 1.
Hybrid data
Calculating performance statistics for comparison of MassSpecWavelet and LibSELDI requires a large number of spectra emulating an experiment that was repeated many times. To generate the HYBRID1 dataset, we combine each clean spectrum with one buffer +matrix spectrum from BUFFER1, and similarly we form HYBRID2 from BUFFER2 by combining those spectra with the same clean spectra.
A basic model of repetitive experiments for SELDI is available with SimSpec 2.1 that takes into account fluctuations in protein concentrations, m/z values, and prevalence in the data. Using the SimSpec 2.1 model developed at the MD Anderson Cancer Center [25,26], we generate 30 datasets containing 50 clean (noise and Figure 6 Efficiency of peak/protein predictions. We show boxplots summarizing the number of peaks predicted for each program in the mean spectrum of each dataset from HYBRID1 and HYBRID2 before thresholding. LibSELDI consistently predicts around 250 peaks, while MassSpecWavelet predicts more than 600 peaks consistently. MassSpecWavelet's more promiscuous predictions lead to high sensitivity at the expensive of higher false discovery rate performance. LibSELDI's peak predictions are reproducibly closer to the true number of virtual proteins, 150 of them, present in each dataset. matrix-free) spectra each. Each dataset consists of 150 virtual proteins and each spectrum within the given dataset contains a proper subset of these proteins with fluctuating parameters according to the model described in [25] and its supplement. The goal for the preprocessing programs in our performance evaluation is to reconstruct the master list of 150 virtual proteins characterizing the dataset. Repeated across all 30 datasets, we can calculate useful performance statistics. The properties of the 150 virtual proteins themselves are drawn from a prior distribution that was estimated from real data. See [25], or alternatively, the description in the supplement of [6].
We use sampling to overcome the limitation of having much fewer spectra in BUFFER1 and BUFFER2 than we have clean spectra in preparation for testing the algorithms. In principle the best way to construct the hybrid test sets would be to have one unique spectrum in BUF-FER1 (likewise BUFFER2) for each spectrum in our clean protein-only set. However, this would require 1500 buffer+matrix runs to be performed for both BUF-FER1 and BUFFER2, an impractical amount of blank chips to run. Sampling from BUFFER1 (BUFFER2) provides a cost effective way to introduce variation in the noise/matrix characteristics between the datasets in HYBRID1 (HYBRID2).
Preprocessing the spectra
First we consider a model for a single SELDI spectrum, X(t). We observe X(t), a random process, on a discrete time grid t 1 ,..., t m , where X(t) represents the intensity of the raw SELDI spectrum observed at time (equivalently mass) point t. For all t, we assume that X(t) is distributed according to a natural exponential family (NEF) with quadratic variance function (QVF) equal to V(μ(t)) as in Eq. (1). The variance function V(μ) completely characterizes the NEF-QVF family. The goal of preprocessing in SELDI is to estimate μ(t), the expectation of X(t), which is the signal corresponding to ions that hit the detector. With a good estimate of μ(t), extracting peaks and estimating protein m/z values in a dataset is relatively straightforward.
As a side note we point out that a SELDI spectrum is actually a sum of single shot spectra. However, the additivity property of the NEF-QVF family guarantees the sum is NEF-QVF provided that the single-shot spectra are NEF-QVF, agreeing with our detector response model and experimental observations.
Multiple spectra considerations
Rather than observe a single spectrum, the typical biomarker discovery approach is to generate at least one spectrum for each of n samples from an approximately homogeneous population. For example, one homogeneous population may be a group of early stage prostate cancer patients matched for age, race, etc. Assuming the samples are run on the same SELDI machine with the same operating conditions, we have Our assumption that all n patients have the same underlying μ(t) is equivalent to assuming that the underlying biological condition being observed in each patient is approximately the same. Thus, we wish to estimate the underlying commonality μ(t) related to the biology of their condition expressed through the SELDI signal. We can mitigate some of the effects of the QVF by forming the mean spectrum (first introduced by [25]).
It is straightforward to show that Thus, the mean spectrum concept is valuable under the assumptions of the NEF-QVF model as well.
Modified Antoniadis-Sapatinas denoising
We now discuss estimation of μ(t) from the mean spectrum (3). Since the X k (t) are sampled on a discrete time grid (and thus X • ), we introduce vector notation Antoniadis and Sapatinas proposed a wavelet shrinkage scheme to solve for in (6) in the context of NEF-QVF regression [30]. We summarize their main results. For our denoising, we use the orthogonal discrete wavelet transform with respect to the Symmlet 8 basis [31]. The transform can be represented by an m × m orthogonal matrix W, Let h be a length m vector with entries taking values between 0 and 1. Let H = diag(h) be the m × m matrix defined by placing the entries of h along the main diagonal, all other entries 0. The class of estimators for x • ( ) considered by [30] take the form This is the typical wavelet denoising scenario where each wavelet coefficient is left alone or shrunk towards zero according to some criterion, and is completely defined by the vector h. Antoniadis and Sapatinas showed that a good estimator for data from the NEF-QVF family is given by choosing
The term 2 is estimated aŝ Where V(x • ) is the vector constructed by applying the QVF from (1) to each term of x • . (W · W ) is the matrix whose i, j element is the square of the i, jelement of W. The parameters υ 0 , υ 1 , υ 2 in (1) are measured from the buffer-only spectra, as described in the Results and Discussion section.
We make an intuitive modification to (9) , .
x x x Thus our modified Antoniadis and Sapatinas estimator h uses 2 in (8) rather than 2 . The modification was introduced to account for cases when (9) may underestimate the noise when low amounts of observed signal are detected. Define Then, our modified Antoniadis-Sapatinas estimate of μ is defined as
Peak detection/baseline removal
We consolidate the two preprocessing steps of baseline removal and peak detection typically performed separately into a single step as follows. We assume that the underlying μ(t) shown in (4) is the superposition of protein ions, s(t), and energy-absorbing matrix ions, b (t) striking the detector. It is well known that the distribution of the isotopes in our analyte of interest gives rise to a roughly Gaussian peak shape. Thus, we propose where, ( , ) t j j denotes a Gaussian kernel function centered at t j with standard deviation s j and zero outside the interval [t j -a, t j + a].
Typically, s(t) is very sparse in the sense that it is mostly zero over the domain of the observed signal. Therefore, the local minima of our estimated baseline + noise signal are points we may assume touch the baseline. From this point of view, once we have detected all the local minima in , the baseline curve estimation problem reduces to an interpolation problem amongst these points. We have found through experimentation that piecewise cubic Hermite interpolating polynomials [32] are excellent interpolation functions. The minima and maxima in are found in one pass using the extrema function downloadable from MATLAB® central file exchange. The maxima are the peaks in the mean spectrum potentially indicating proteins represented in our sample population while the minima correspond to samples from the baseline signal. Each detected peak is quantified using peak area and a threshold is chosen based on the peak area measurement to generate the final prediction set.
Operating characteristics
The peaks we detect in represent the initial set from which we choose our final estimates of proteins that are active in the population of interest. The choice of final estimate is accomplished using a peak area threshold (LibSELDI) or signal-to-noise ratio measurement (Snr. Th in MassSpecWavelet). From each prediction, we calculate the observed false discovery rate (FDR) and true positive rate (TPR, also called sensitivity) Where TP (the number of true positives) is the number of the 150 virtual protein m/z values having at least one predicted m/z value within 0.3% relative error. The FP is defined as the number of predicted m/z values not within 0.3% of any of the 150 virtual protein m/z values for this dataset. Similarly, FN is the number of the 150 virtual protein values without any predicted m/z value within 0.3% relative error.
For each dataset, a curve is fit to the operating points. Each operating curve is averaged to produce a mean operating characteristic, as shown in Figure 4. From this curve, the calculation of the area-under-the curve is straightforward. For more details, see sections 2.2 and 2.2.1 of [6].
Additional material
Additional file 1: Experiment and simulation settings. This file contains additional details about how simulations and experiments were carried out. | 7,366.8 | 2010-10-13T00:00:00.000 | [
"Computer Science"
] |
Mineralogy of The Beach Sands Along the Mediterranean Coast from Benghazi to Bin-Jawwad, NE Libya
The present work aims to characterize the mineralogy of the beach sands along the Mediterranean Coast from Benghazi to Bin Jawwad, NE Libya. The microscopic and SEM examinations indicate an abundance of carbonates, quartz, feldspars, and evaporites. The detected heavy minerals are zircon, tourmaline, pistachite, hornblende, garnet, monazite, rutile, titanite, augite, biotite, kyanite, chromian spinel, magnetite, ilmenite
Introduction
The present work describes the mineralogy of the beach sands along the Mediterranean Coast from Benghazi to Bin Jawwad, NE Libya.The study area is the coastal area of a part of the Sirte Basin (Figure 1).The Sirte Basin ranks the 13 th among the world's petroleum provinces, having proven oil reserves estimated at 43.1 billion barrels of oil equivalent, an amount that constitutes 1.7% of the world's known oil reserves.The basin consists of one dominant total petroleum system, known as the Sirte-Zelten.According to Mresah (1993), the Sirte Basin of Libya is a Mesozoic-Tertiary rift basin comprising a series of horsts and grabens, which were formed as a result of the collapse of an N-S trending arch in the Upper Cretaceous time.The major structural elements of Libya suggest that the Caledonian uplift is trending N-S to NW-SE while the Hercynian structural elements are trending E-W to NE-SW (Figure 2).The collapse of the Hercynian Tibesti-Sirte uplift during the Triassic-Jurassic led to the development of Sirte basin during the Early Cretaceous.The exposed stratigraphic section in the study area consists of sedimentary rocks, ranging in age from Tertiary to Quaternary (Figure 3).The most abundant sedimentary facies are carbonates with lesser amounts of mudrocks, sandstones, and evaporites.Quaternary deposits disseminate around study area along the beach line.They occur as sand dunes, red soil (terra-rossa), conglomerates, calcarenites and sabkha sediments.All sampled stations in the study area are open beaches.There are two beach types; rocky and sandy beaches (e.g., Bin Jawwad and Aqaylah, respectively, Figure 4).Figure 3. Geological map of the area (scale 1:1,000,000) between Benghazi and Bin Jawwad, NE Libya (modified after Francis and Issawi, 1977;Innocent and Pertusati, 1984;and Mresah, 1998)
Methodology
Samples were collected from the beach sands along the Mediterranean Coast from Benghazi to Bin Jawwad, NE Libya, from 25 stations (four samples of each station) at sampling interval between 10 and 20 km, depending on accessibility to the studied beach.The traverse is parallel to the studied coast.Samples were essentially taken from the surface sands to represent the uppermost 30 cm of the beach sands.
The present study is a detailed mineralogical investigation on the very fine and fine sand size fractions (125-63 μm, and 250-125 μm) of the beach sands under consideration.The two size fractions subjected to gravitational heavy mineral separation using bromoform (specific gravity 2.87).The fractions of both the light-and heavy-minerals mounted in Canada balsam for transmitted light microscopy.
Scanning electron microscope with an energy dispersive X-ray attachment (SEM-EDX) used to shed light on the geochemical characteristics of the mineral composition.In spite of the fact that EDX microanalysis extensively used in the present work, its output was not quoted herein because of their standard-less and semi-quantitative nature.The weight or molecular ratios of major oxides used to express the possible changes in composition.The X-ray powder diffraction of whole samples used to estimate the relative abundance of calcite and aragonite as main partners of the carbonate fraction.
Light Minerals
The distribution and morphological properties of light minerals have persuaded many geoscientists to study them with respect to depositional environment and provenance (Trevena and Nash, 1979;Carranza-Edwards et al., 1998;and Margineanu et al., 2014).The studied beach sands are a mixture of carbonate and non-carbonate materials.The microscopic examination and the X-ray powder diffraction of whole-sediment samples indicate an abundance of carbonates, quartz, feldspars and evaporites with other minor minerals.Figures ( 5 and 6) demonstrate the relative frequency of the light minerals in both the fine and very fine sand fractions.
Carbonates
According to Schwartz (2005), carbonate beaches have a significant proportion of the sediment fabric of biogenic origin, and carbonate in composition.Carbonate beaches are, therefore, wave deposited accumulations of sediment (sand to boulder in size) deposited on shores where a near shore supply of biogenic debris is available.Carbonate beaches exist in tropical and temperate locations, including some in relatively high latitudes.The main prerequisite for a carbonate beach is a source of carbonate-producing detritus, and a mechanism to erode and/or transport it to the shore.In the studied samples, the recorded carbonate grains include biogenic grains made of aragonite and/or calcite with sporadic dolomite.Calcite is usually colorless and exhibits gleaming interference colors of very high orders (Figure 7).It occurs in form of rounded to subrounded monocrystalline grains (Figure 8) and rounded polycrystalline grains (micrite lumps).Disseminated rhombohedral calcite grains are rarely encountered in the studied samples.The micrite calcite grains are thought to be the product of the mechanical breakdown of the micrite envelopes of recent shell fragments, while rhombohedral calcite grains are thought to be of detrital origin (e.g., Anirudhan and Thiruvikramji, 1991;and Luzar-Oberiter et al., 2008).Biogenic grains are composed of whole shell fragments of macrofauna (mollusks) and microfaunal shells, mostly of foraminifera and algae and other, not clearly classifiable biogenic fragments.The EDX microanalysis of a large number of aragonitic and calcitic specimens suggests that the latter is more capable of hosting more inclusions than the former.These inclusions have different composition but those of barium sulfate and phosphates are abundant (Table 1).The difference in the composition of the inclusions may suggest derivation from various provenances.According to Tucker (2001), the requisite Mg/Ca ratios are < 1.2 for low-Mg calcite, 1.2 -5.5 for high-Mg calcite, and > 2 for aragonite.Generally, high-Mg calcite and aragonite are the predominant mineralogies of organisms in modern seas (Mg/Ca ratio=5.2, as quoted by Zankl, 1993).The EDX microanalysis shows that all analyzed grains are low-Mg calcite (see Table 1).Pyokari (1997) and El-Werfalli (2016) stated that low-Mg calcite is the most common mineral in the carbonate beach sands.That speciation is rather due to diagenetic transformations of the carbonate material (Preda and Cox, 2005).In the western part of the study area, sands display green color most probably due to algal activity (Figure 9).The XRD analysis (Table 2) shows that in the central and eastern sides of the studied beach, aragonite dominates over calcite.This feature is possibly related to a number of regional characteristics such as biological production.In the western section calcite dominants over aragonite.That speciation is rather due to diagenetic transformations of the carbonate material.In agreement with Preda and Cox (2005), carbonate speciation suggests a difference in sediment age; the central and eastern sides consist of younger sediment, while older, reworked, and diagenetically transformed materials dominate in the western section.
Quartz
In the studied beach sands, quartz is commonly colorless and contains inclusions, namely, tourmaline (see Figure 7), rutile, and apatite.It is mostly monocrystalline with uniform and undulatory extinction.However, some samples contain polycrystalline quartz grains.It is important to notice that abrasion during transportation may affect the relative abundance of the polycrystalline and undulatory quartz because they destroy preferentially relative to nonundulatory monocrystals (Lewis, 1984).According to Cherian et al., (2004), the reduced
E-10
ISSN: 2413-5267 percentage of the polycrystalline quartz is probably due to dilution by a fresh supply of monocrystalline quartz.Moreover, the polycrystalline quartz grains may disintegrate during the course of transportation from source.The detected quartz grains vary from very angular to well-rounded with relative preponderance of the former type.According to Shine (2006), the rounded and well-rounded quartz grains owe their shape largely to a longer distance of transportation and/or the multi-cycle origin of clastic sediments.On the other hand, moderate to high degree of sphericity of the quartz grains are an indication of derivation from crystalline and older sedimentary rocks exposed in regions far from the basin of deposition (Rahman et al., 2004).The presence of dust rims in most quartz grains as an indicator for sediment recycling has long been recognized (e.g., Dickinson and Milliken, 1995).Some quartz grains show cracks, which could be either due to inheritance from source material or due to long transportation distance.
Feldspars
In the present study, microcline and members of plagioclase series are common feldspars.Microcline is mainly colorless rectangular, in few cases, the microcline grains are altered and turbid (see Figure 7).Plagioclase feldspars occur mainly as fresh grains.
Evaporites
In the studied sands, the detected evaporites are gypsum and halite (Figure 10).Gypsum occurs as prismatic, nodular, tabular and lenticular grains.The fabric and texture of gypsum grains suggest their authigenic crystals.Halite crystallizes together with gypsum.It is frequently recorded some zircon and Ti-minerals as tiny inclusions.
Heavy minerals
The microscopic investigation documents that some minerals such as kyanite and titanite occur exclusively in the western part of the study area.Hornblende, pistachite and garnet
E-11
ISSN: 2413-5267 disappear in the eastern part, whereas zircon, tourmaline, monazite, augite, biotite and rutile exist in the whole area under consideration.
Zircon
Zircon represents the most abundant heavy mineral among the non-opaques.Microscopic observations of zircon enabled the recognition of different shapes of zircon, such as; the oval with zoned structure, elongated, elliptical, outgrown, overgrown, rounded and broken (Figure 11).It generally contains different amounts of inclusions as well as vacuoles or bubbles.
Well-developed fractures crisscrossed the zircon grains might probably be due to their sustained transportation by waves and currents (e.g., Angusamy and Rajamanickam, 2000).It is important to notice that the outgrown and broken zircon grains are copiously present in the very fine sand fraction.According to Angusamy et al., (2004), the prolific presence of broken zircon in the beach sand of southern coast of Tamil Nadu, east coast of India, indicate rigorous energy conditions by waves, due to repeated swash and backwash and by littoral currents.The same authors added that outgrown zircon might be due to the longer stay of sediments in the depositional environment.Some of the examined zircon displays evident metamictized nature (Figure 12).Based on the semi-quantitative EDX microanalysis data (Table 3), there are two types of zircon in the studied samples; the first type is uranium and thorium enriched whereas the second is yttrium and heavy rare earth elements enriched.This suggests the possibility of zircon from different provenances.
Tourmaline
It is the second common non-opaque mineral.Zircon is present in higher frequencies in very fine-grained sediments, whilst tourmaline dominates in fine-grained sands.This observation coincides with that quoted by Sallun and Suguio (2008) in their study on Quaternary deposits from Sao Paulo State, Brazil.This can be attributed to differences in the hardiness of these minerals.Most of the tourmaline grains are surrounded to well-rounded but sometimes prismatic in shape.The authors believe that the occurrence of a rounded to well-rounded tourmaline variety suggests recycling from the older sedimentary precursor and the high frequency of tourmaline in the studied sediments may indicate derivation from metasomatized-rich tourmaline source rock.The majority of the encountered grains are zoneless.Zoning, if present, is visible as a faint change in color intensity.Tourmaline grains display a wide range of colors and pleochroism represented by pale yellowish brown, pale green and pink.
Hornblende
The frequency of hornblende is lower in the size fraction 250-125µm than the finer size fraction 125-63µm.It is commonly brown, light green and bluish green to dark green (see Figure 11).It displays diagnostic perfect cleavage that appears imperfect in some cases, depending on the optical orientation of the mounted grains.They are mostly prismatic in shape but occasionally sub-rounded.
Pistachite
It is only detected in the grain size; 125-63µm.This result is similar to the study of the beach sands along the eastern side of the Gulf of Suez, Egypt as quoted by El-Kammar et al., (2007).According to Anfuso et al., (1999), in the coastal sand between Sanlucar de
E-14
ISSN: 2413-5267 Barrameda and Rota, Cadiz, southwest Iberian Peninsula, pistachite accumulates in the finer fraction.In the present study, pistachite is commonly pale yellowish green in color; sometimes it is turbid due to alteration with weak pleochroism and high birefringence as a diagnostic character.
Garnet
According to Anfuso et al., (1999) garnet is more abundant in the coarsest fractions.In the studied samples, it is slightly abundant in the size fraction 250-125µm than in the size fraction 125-63µm.The encountered garnet grains are angular to subangular but occasionally subrounded with characteristic conchoidal fracture (Figure 13).The most common grains are colorless (see Figure 11) but few pinkish grains occur also.In some cases, garnet contains opaque and quartz inclusions.The EDX microanalysis shows that detected garnet is mostly of almandite variety (Table 4).
Monazite
It is pale yellow in color with high relief and very weak birefringence (see Fig. 11).It occurs as rounded grains of spherical and ellipsoid shapes, sometimes with strongly pitted surface.
Rutile
It either occurs as subangular grains or rounded grains (Fig. 14) with a relative preponderance of the former.The yellowish-brown and deep reddish brown colors characterize the encountered rutile (see Fig. 12).It is a well-known fact that rutile can contain highly charged elements (V, Cr, Fe, Al, Nb, Sn, Sb, Ta, W) up to the %-level (Smith and Perseil, 1997;Zack, et al., 2004).The EDX microanalysis shows that the studied rutile is a good accumulator of chromium (Table 5).The rutile composition is not the same in the studied sediments, suggesting derivation from various provenances.
Titanite
The reddish brown titanite is the only detected type (see Figure 11).It is commonly prismatic to sub-rounded grains in shape, occasionally displaying imperfect cleavage.
Augite
Its frequency is slightly lower in the size fraction 250-125µm than in the finer size fraction 125-63µm.It has yellowish brown color (see Figure 11) and exhibits perfect cleavage.
Biotite
It dominates in the 250-125µm size fraction.It belongs to the green variety of biotite (see Figure 11) and seems to be fresh with diagnostic bright yellowish peripheries.Fine inclusions of opaque minerals are sometimes frequent.
Kyanite
It is only detected in the grain size 125-63µm.It occurs as colorless short prismatic subrounded grains with marked right-angled cleavage (see Figure 11).
Opaque minerals
The SEM examination indicates that the detected opaque minerals are mainly magnetite, ilmenite, and goethite (Figure 16).According to Mohapatra et al., (2016), ilmenite is commonly present in the very fine sand fraction.
Sediments Type
Mixes of carbonate and clastic sediments, which are commonly encountered in the Recent coastal sediments, require careful analysis if they are to be correctly interpreted (Carter, 1982).Various methods can identify and quantify the mixing of carbonate and clastic sediments.Mount (1985) pointed out that carbonate sediments incorporate more than 10% terrigenous constituents considered to be of a mixed carbonate-clastic character.On the other hand, varied levels of CaCO3 content propose transition region between the terrigenous and carbonate provinces, commonly ranging from 25 to 75% (Hernandez Arana et al., 2005).In agreement with Gomez-Pujol et al., (2013), the authors believe that cluster analysis can classify the sampled stations (Figure 17) as follows:
Province One (from Bin Jawwad to Ras Al Alas)
The sediments of this province contain mixtures of clastics related to different sedimentary rocks of limestones, evaporites, sandstones and mudrocks, along with green sediments.Mineralogically, in this province quartz and carbonates are present in roughly equal abundances, with feldspars and evaporites contents in the range of 7 to 17% and 6 to 13%, respectively.Medium to high concentration of heavy minerals characterizes also this province.The detected heavy minerals are zircon, tourmaline, pistachite, hornblende, garnet, monazite, rutile, titanite, augite, biotite, and kyanite.
Province Two (from South Ras Al Alas to Shatt Qabis)
The sediments of this province consist of a mixture of clastics derived mostly from sandstones and limestones.Mineralogically, in this province carbonates dominate over quartz, with minor feldspars and evaporites in the range of 0.24 -3%, and 1 -5%, respectively, with a complete absence of titanite and kyanite.
Province Three (from North Shatt Qabis to Benghazi)
Mineralogically, the clastics of this province are carbonate-dominated, with subordinate quartz but very poor in feldspars and evaporites.The heavy minerals in this province are extremely rare.Zircon, tourmaline, monazite, augite, biotite and rutile are the only detected heavy minerals.
Based on the difference in minerals concentration among the three provinces, and considering the geology of the surrounding areas, the authors believe that the studied beach sands in provinces one and two were derived from the sea accumulation (carbonates and evaporates), surrounding carbonate rocks (carbonates), terra-rossa soil and sandstones (quartz, feldspars and heavy minerals).In province three, the sands are derived from the sea accumulation (carbonates and evaporates) and surrounding carbonate rocks (carbonates).The heavy minerals and some light minerals (quartz and feldspars) originated from the aeolian Faculty of Marine Resources, Alasmarya Islamic University, Libya.
E-19
ISSN: 2413-5267 sands related to the widely distributed igneous and metamorphic rocks in many parts of Libya such as Jabal Al Tibisti, Jabal Al Haruj Al Swad, etc.
Conclusions
The present study provides genuine data on the "very fine" and "fine" sand size fractions (125-63μm and 250-125μm) of the beach sands along the Mediterranean Coast from Benghazi to Bin Jawwad, Northeast Libya.These sediments represent a mixture of carbonate and noncarbonate materials at different quotients.The detected light minerals are calcite, aragonite,
E-20
ISSN: 2413-5267 quartz, feldspars, and evaporites, whereas the recorded heavy minerals are zircon, tourmaline, pistachite, hornblende, garnet, monazite, rutile, titanite, augite, biotite, kyanite, chromian spinel, magnetite, ilmenite, and goethite.The heavy minerals in the eastern side are extremely rare, while medium to high concentration of heavy minerals characterizes the sands of the western and central parts.The XRD analysis shows that aragonite dominates over calcite, in the central and eastern sides of the studied beach, whereas calcite dominants over aragonite in the western side, suggesting a difference in sediment age.The central and eastern sides consist of younger sediment, while older, reworked and diagenetically transformed material dominates in the western section.The cluster analysis extracted three distinct groups of sediments.The ZTR index suggests that the studied beach sands are mineralogically submature sediments.
Figure 1 .Figure 2 .
Figure 1.Landsat images showing the location of the study area and the location of the sampled stations
Figure 5 .Figure 6 .
Figure 5. Frequency distribution of the light minerals in the study area (size range: 125-63µm)
Figure 10 .
Figure 10.BSE images showing.a) prismatic crystals of gypsum on Mg-silicate matrix where halite crystals and tiny zircon inclusions are recorded, and b) well cubic halite grain (size: 125-63µm)
Figure 17 .
Figure 17.Dendrogram from cluster analysis (Ward method) of the sampling stations Shaltami et al, 2016Faculty of Marine Resources, Alasmarya Islamic University, Libya.
Table 2 .
Calcite/aragonite ratio on basis of XRD analyses | 4,240.6 | 2016-12-31T00:00:00.000 | [
"Geology",
"Environmental Science"
] |
Chemical Composition and Antimicrobial Activity of New Honey Varietals
Due to a widespread occurrence of multidrug-resistant pathogenic strains of bacteria, there is an urgent need to look for antimicrobial substances, and honey with its antimicrobial properties is a very promising substance. In this study, we examined for the first time antimicrobial properties of novel varietal honeys, i.e., plum, rapeseed, Lime, Phacelia, honeydew, sunflower, willow, and multifloral-P (Prunus spinosa L.), multifloral-AP (Acer negundo L., Prunus spinosa L.), multifloral-Sa (Salix sp.), multifloral-Br (Brassica napus L.). Their antimicrobial activity was tested against bacteria (such as Escherichia coli, Bacillus circulans, Staphylococcus aureus, Pseudomonas aeruginosa), yeasts (such as Saccharomyces cerevisiae and Candida albicans) and mold fungi (such as Aspergillus niger). In tested honeys, phenolic acids constituted one of the most important groups of compounds with antimicrobial properties. Our study found phenolic acids to occur in greatest amount in honeydew honey (808.05 µg GAE/g), with the highest antifungal activity aiming at A. niger. It was caffeic acid that was discovered in the greatest amount (in comparison with all phenolic acids tested). It was found in the highest amount in such honeys as phacelia—356.72 µg/g, multifloral (MSa) and multifloral (MBr)—318.9 µg/g. The highest bactericidal activity against S. aureus was found in multifloral honeys MSa and MBr. Additionally, the highest amount of syringic acid and cinnamic acid was identified in rapeseed honey. Multifloral honey (MAP) showed the highest bactericidal activity against E. coli, and multifloral honey (MSa) against S. aureus. Additionally, multifloral honey (MBr) was effective against E. coli and S. aureus. Compounds in honeys, such as lysozyme-like and phenolic acids, i.e., coumaric, caffeic, cinnamic and syringic acids, played key roles in the health-benefit properties of honeys tested in our study.
Introduction
Recently, due to the growing resistance of microorganisms to many antibiotics, attention has been paid to agents of natural origin with antimicrobial effects. Honey can Solidago type (46.48%) dominant pollen; and sunflower honey (He) with the Helianthus type (73.35%) dominant pollen (Supplementary Materials Tables S1 and S2).
Physicochemical Properties of Honey
The phenolic compounds present in honey come from honeydew or pollen. It was observed that the content of phenolic compounds in dark honeys, e.g., honeydew (So) 808.05 ± 7.20 µg GAEs/g, was higher than the content of these compounds in light honeys, e.g., multifloral (MBr) 404.74 ± 9.12 µg GAEs/g and sunflower (He) 431.27 ± 5.45 µg GAEs/g. Rapeseed honey (Br) 378.27 ± 7.3 µg GAEs/g was characterized by the lowest content of phenolic compounds.
Principal component analysis (PCA) of the physicochemical properties of honeys tested synthetically showed their diversity depending on type. The eigenvalues of the first two axes were 2.47 and 1.13. The first axis explained over 49% and second axis over 22% of the variability of the analyzed data/physicochemical properties of studied honeys, and all four axes over 98%. This proves the major role axes 1 and 2 play in ordering the variables and determining the factors responsible for the distribution of honey types in the ordination diagram ( Figure 1). All the variables analyzed, except for protein content for axis 2, were statistically significant at the level of p < 0.05. The ordination diagram showed two main trends of the variation in the physicochemical properties of the tested honey ( Figure 1). The first one was related to the first axis and positively correlated with all variables tested, except the water content. The strongest correlation with this axis was shown by total phenolics, pH and protein content. This axis determined the gradient of the content of the analyzed properties in the honey types. Group I of the studied honeys (right side of the ordination diagram) represents an increasing content of total phenolics, pH, and protein content, starting from honeydew (So), multifloral (MAP), multifloral (MP), plum (P) and willow (Sa) honey. Group II (left side of the PCA diagram) was negatively correlated with the first axis and characterized by a high water content and lower content of the phenolics, proteins and pH values. This group of honeys includes rapeseed (Br), sunflower (He), lime (Tc), multifloral (MBr) and phacelia (Ph). The second axis of the PCA ordination diagram was strongly and positively correlated with electrical conductivity and water content, and the second axis determined the gradient of replaced variables in the studied honeys (Group III). The water content increased from the multifloral (MP) (14.67%) and willow (Sa), multifloral (MSa) and sunflower (He) (located under the second axis and negatively correlated with it) to multifloral (MAP), multifloral (MBr), and lime (Tc), positively correlated with the discussed axis, where the highest content of water was recorded. Honeys exhibiting highest conductivity are: honeydew (So), multifloral (MSa), lime (Tc) and phacelia (Ph) (Figure 1). Sugars are main components of honey (Table 2). In order not to miss any of the sugars, we used 16 sugar standards, including mono-, di-and trisaccharides. In Table 2 we presented only the sugars that were determined by the HPLC analysis. A representative HPLC profile of honey number 11 is shown in Supplementary materials ( Figure S10). Simple sugars, i.e., glucose and fructose, were identified in the highest amount in all the tested Sugars are main components of honey (Table 2). In order not to miss any of the sugars, we used 16 sugar standards, including mono-, di-and trisaccharides. In Table 2 we presented only the sugars that were determined by the HPLC analysis. A representative HPLC profile of honey number 11 is shown in Supplementary Materials ( Figure S10). Simple sugars, i.e., glucose and fructose, were identified in the highest amount in all the tested honey samples. The PCA ordination analysis shows relationships between the honey type and diversity of sugars and their content ( Figure 2). All analyzed sugar types were statistically significant at the level of p < 0.05 for the two first axes of the ordination PCA diagram, except glucose for axis 2. The first axis explains ca. 53% (eigenvalue 3.18) and the second axis ca. 32% (eigenvalue 1.9) of the data variability. All four axes explain over 97% of the data variability. Axis 1 is positively correlated with all the variables tested, except the glucose content. The first axis determines the falling share of glucose in the honey types form the right side of the PCA diagram for sunflower (He), willow (Sa), multifloral (MP) and plum (P) to multifloral (MAP) in the middle, and lime (Tc), phacelia (Ph) on the left side. In relation to the content of other sugars, axis 1 is positively correlated with them. The strongest correlation with sucrose and rhamnose is observed and depicted by the axis showing a rising gradient of these sugars from rapeseed (Br), willow (Sa) and multifloral (MSa) to lime (Tc) and phacelia (Ph).
Axis 2 of the diagram is strongly, positively correlated with the erlose content and strongly negatively correlated with the fructose and fucose content. The erlose content decreases form rapeseed Br (8.26 g/100 g), plum P (3.02 g/100 g) and willow Sa (2.92 g/100 g) through honeydew So, multifloral MSa, multifloral MBr and multifloral MAP honey, where the erlose content ranges from 1.77 g/100 g to 0.66 g/100 g respectively to lime (Tc) and phacelia (Ph) honey in which no erlose was found. The high fucose content in sunflower (He) and multifloral (MAP) honey is positively correlated with fructose. The ratio of fructose to glucose was typical for honey. The more glucose a honey has, the faster it tends to crystallize. In honey, the ratio of fructose to glucose should ideally range from 0.9 to 1.35. A fructose to glucose ratio below 1.0 leads to faster honey crystallization, whereas crystallization becomes slower when this ratio is more than 1.0 [18][19][20]. Axis 1 is positively correlated with all the variables tested, except the glucose content. The first axis determines the falling share of glucose in the honey types form the right side of the PCA diagram for sunflower (He), willow (Sa), multifloral (MP) and plum (P) to multifloral (MAP) in the middle, and lime (Tc), phacelia (Ph) on the left side. In relation to the content of other sugars, axis 1 is positively correlated with them. The strongest correlation with sucrose and rhamnose is observed and depicted by the axis showing a rising gradient of these sugars from rapeseed (Br), willow (Sa) and multifloral (MSa) to lime (Tc) and phacelia (Ph).
Axis 2 of the diagram is strongly, positively correlated with the erlose content and strongly negatively correlated with the fructose and fucose content. The erlose content decreases form rapeseed Br (8.26 g/100 g), plum P (3.02 g/100 g) and willow Sa (2.92 g/100 g) through honeydew So, multifloral MSa, multifloral MBr and multifloral MAP honey, where the erlose content ranges from 1.77 g/100 g to 0.66 g/100 g respectively to lime (Tc) and phacelia (Ph) honey in which no erlose was found. The high fucose content in sunflower (He) and multifloral (MAP) honey is positively correlated with fructose.
The ratio of fructose to glucose was typical for honey. The more glucose a honey has, the faster it tends to crystallize. In honey, the ratio of fructose to glucose should ideally range from 0.9 to 1.35. A fructose to glucose ratio below 1.0 leads to faster honey crystallization, whereas crystallization becomes slower when this ratio is more than 1.0 [18][19][20]. In the present study, the average ratio of fructose to glucose was around 1. However, two tested honeys (Tc and Ph) had a ratio well below 1.0 (0.85 and 0.71, respectively), which indicates greater chances of honey crystallization ( Table 2).
Antimicrobial Activity of Honey
The antimicrobial activity of the honey samples was expressed by the inhibition of the growth of the tested bacteria around the wells on the agar medium, and it varied (Supplementary Materials Figure S1). The Gram-positive bacteria B. circulans proved to be the most sensitive to the activity of the honeys. The inhibition zones of bacterial growth were observed in all concentrations (62.5-500 mg/mL) in seven honey samples: plum (P), rapeseed (Br), lime (Tc) and multifloral (MBr, MAP, MP, MSa). In the case of three honeys-willow (Sa), phacelia (Ph), and sunflower (He)-no activity against B. circulans was found at concentrations of 125 and 62.5 mg/mL. On the other hand, honeydew honey (So) did not inhibit bacterial growth only in the concentration of 62.5 mg/mL. Taking into account the antibacterial activity observed after the use of the lowest concentration of honeys (62.5 mg/mL), it should be stated that the following honeys proved most effective against B. circulans: multifloral (MSa), plum (P) and rapeseed (Br) (growth inhibition zones of 15.22, 14.17, 13.55 mm respectively).
The results of analysis of the significance of difference test showed that the type of honey and its concentration are the factors influencing the antibacterial activity of honeys against B. circulans. The highest activity, expressed by the size of the inhibition zone, was observed for rapeseed honey (Br). This result is significantly different from plum (P) and multifloral (MAP) honey, which show similar activity, and willow (Sa), phacelia (Ph) and sunflower (He) (Figure 3). At a lower concentration (Figure 3), rapeseed (Br), lime (Tc), multifloral (MSa), plum (P) and multifloral (MBr) are less efficient, but retain their antibacterial properties, which significantly differs from willow honeys (Sa), phacelia (Ph) and sunflower (He), which show no such activity. At the lowest concentration, rapeseed (Br), multifloral (MSa), plum (P) and smaller multifloral (MBr) honeys show high activity, which significantly differs from the others, which have lost their properties ( Figure 3).
For all tested honey types, inhibition of bacterial growth in all tested microorganisms at the highest concentration of 500 mg/mL was visible (Figures 4-6).
It should be noted that the tested honey varieties showed significantly lower activity against other Gram-positive bacteria used in the experiments, i.e., S. aureus. In this case, the inhibition zones of bacterial growth were observed only after the application of 50% honey concentrations. The growth of S. aureus was most strongly inhibited by these honeys: multifloral MBr (8.42 mm) and multifloral MSa (9.97 mm) ( Figure 4). multifloral (MAP) honey, which show similar activity, and willow (Sa), phacelia (P sunflower (He) (Figure 3). At a lower concentration (Figure 3), rapeseed (Br), lim multifloral (MSa), plum (P) and multifloral (MBr) are less efficient, but retain thei bacterial properties, which significantly differs from willow honeys (Sa), phacelia (P sunflower (He), which show no such activity. At the lowest concentration, rapesee multifloral (MSa), plum (P) and smaller multifloral (MBr) honeys show high ac which significantly differs from the others, which have lost their properties ( Figure For all tested honey types, inhibition of bacterial growth in all tested microorga at the highest concentration of 500 mg/mL was visible (Figures 4, 5, 6). It should be noted that the tested honey varieties showed significantly lower activity against other Gram-positive bacteria used in the experiments, i.e., S. aureus. In this case, the inhibition zones of bacterial growth were observed only after the application of 50% honey concentrations. The growth of S. aureus was most strongly inhibited by these honeys: multifloral MBr (8.42 mm) and multifloral MSa (9.97 mm) ( Figure 4).
Similarly, only at the concentration of 500 mg/mL did the tested honeys inhibit the growth of Gram-negative bacteria E. coli. The largest zones of growth inhibition (9.6-11.9 mm), and thus the highest activity, were recorded for the following honeys: multifloral MBR and MAP. The exception was the sunflower honey (He), which showed no activity against this bacteria ( Figure 5).
At a lower concentration (250, 125, 62.5 mg/mL), no tested honeys caused a decrease in E. coli and Staphylococcus aureus growth. A broader effect was evident when testing was done with honey at lower concentrations in relation to B. circulans and A. niger (Figures 3and 7). In addition, based on the results obtained by the diffusion method, it was found that P. aeruginosa bacteria, both standard and clinical strains, was the microorganism completely insensitive to the honey varieties being tested. The analyzed honeys show inhibitory activity against E. coli and S. aureus only in the highest concentration (Figures 5 and 6). Multifloral MAP, MP, and MBr honeys have the highest activity against E. coli and statistically significantly differ in this respect from multifloral (MSa) and plum (P) (Figure 6) honeys. However, the activity of honeys multifloral (MSa) and (MBr) against S. aureus is statistically significantly different from the properties of multifloral honeys MAP and MP and lime (Tc) (Figure 6). Similarly, only at the concentration of 500 mg/mL did the tested honeys inhibit the growth of Gram-negative bacteria E. coli. The largest zones of growth inhibition (9.6-11.9 mm), and thus the highest activity, were recorded for the following honeys: multifloral MBR and MAP. The exception was the sunflower honey (He), which showed no activity against this bacteria (Figure 5).
At a lower concentration (250, 125, 62.5 mg/mL), no tested honeys caused a decrease in E. coli and Staphylococcus aureus growth. A broader effect was evident when testing was done with honey at lower concentrations in relation to B. circulans and A. niger (Figures 3 and 7). In addition, based on the results obtained by the diffusion method, it was found that P. aeruginosa bacteria, both standard and clinical strains, was the microorganism completely insensitive to the honey varieties being tested.
Antifungal Activity of Honey
The antifungal activity of the honey samples used at concentrations ranging from 62.5 to 500 mg/mL was tested against A. niger, C. albicans and S. cerevisiae using the radial diffusion method. On the basis of the obtained results, it was found that C. albicans and S. cerevisiae showed resistance to the tested honey samples at all concentrations.
On the other hand, the tested honey varieties effectively inhibited the growth of A. niger (Supplementary Materials Figure S2). The maximum antifungal activity was found in all honey samples at the concentration of 500 mg/mL in the range from 62 to 99.25 µg/mL based on the properties of amphotericin B (µg/mL). At this concentration, the most active honeys were multifloral (MSa), plum (P) and honeydew (So), which differed significantly from multifloral (MBr) and phacelia (Ph) honeys (Figure 7). At a lower concentration (Figure 7), the properties of multifloral (MP), multifloral (MSa) and willow (Sa) honeys are comparable and significantly different from phacelia (Ph). At the next concentration, i.e., 125 mg/mL (Figure 7), multifloral (Msa) and honeydew (So) honeys retained antifungal properties, being significantly different from rapeseed (Br) and phacelia (Ph). At the lowest concentration ( Figure 7) willow (Sa) and multifloral (MBr) honeys were most active, differing from plum (P) and rapeseed (Br), which showed lowest antifungal activity. The analyzed honeys show inhibitory activity against E. coli and S. aureus only in the highest concentration (Figures 5 and 6). Multifloral MAP, MP, and MBr honeys have the highest activity against E. coli and statistically significantly differ in this respect from multifloral (MSa) and plum (P) (Figure 6) honeys. However, the activity of honeys multifloral (MSa) and (MBr) against S. aureus is statistically significantly different from the properties of multifloral honeys MAP and MP and lime (Tc) (Figure 6).
Antifungal Activity of Honey
The antifungal activity of the honey samples used at concentrations ranging from 62.5 to 500 mg/mL was tested against A. niger, C. albicans and S. cerevisiae using the radial diffusion method. On the basis of the obtained results, it was found that C. albicans and S. cerevisiae showed resistance to the tested honey samples at all concentrations.
On the other hand, the tested honey varieties effectively inhibited the growth of A. niger (Supplementary Materials Figure S2). The maximum antifungal activity was found in all honey samples at the concentration of 500 mg/mL in the range from 62 to 99.25 µg/mL based on the properties of amphotericin B (µg/mL). At this concentration, the most active honeys were multifloral (MSa), plum (P) and honeydew (So), which differed significantly from multifloral (MBr) and phacelia (Ph) honeys (Figure 7). At a lower concentration (Figure 7), the properties of multifloral (MP), multifloral (MSa) and willow (Sa) honeys are comparable and significantly different from phacelia (Ph). At the next concentration, i.e., 125 mg/mL (Figure 7), multifloral (Msa) and honeydew (So) honeys retained antifungal properties, being significantly different from rapeseed (Br) and phacelia (Ph). At the lowest concentration ( Figure 7) willow (Sa) and multifloral (MBr) honeys were most active, differing from plum (P) and rapeseed (Br), which showed lowest antifungal activity.
Catalase
All the honey samples with catalase addition had the same or similar growth inhibition zones compared to the control, i.e., honey without catalase. The tested honeys remained active against B. circulans, E. coli and S. aureus, which proves that the activity was related to other factors and that hydrogen peroxide did not affect the antimicrobial activity of these honeys (Supplementary Materials Figures S3-S5). Positive control data with different dilutions of hydrogen peroxide are presented in Supplementary Materials Figure S6 and Table S5.
Lysozyme-like Activity of Honey
In subsequent experiments, lysozyme-like activity was checked by applying the tested honey samples to plates containing M. lysodeikticus according to the procedure of Mohrig and Messner [10]. Lysozyme-like activity was found in all the tested honeys. The highest lysozyme-like activity corresponding to the activity of 447.26 ug/mL and 159.74 ug/mL EWL was measured in multifloral honeys MAP and MP. The other varietal honeys have low lysozyme-like activity. Comparable values were obtained for the following honeys multifloral (MSa), willow (Sa), multifloral (MBr), sunflower (He) and plum (P), rapeseed (Br), lime (Tc), phacelia (Ph), honeydew (So), which were statistically significantly different from other samples (Figure 8).
Lysozyme-like activity level was tested in all samples taken at various steps of honey preparation, i.e., after centrifugation, dialysis and lyophilization ( Table 3). The peptidoglycan digestion zone is shown in Supplementary Materials Figure S7. The highest lysozyme-like activity was observed in honey after centrifugation (2.3 ± 0.47 µg/ml EWL). Our results showed that there was activity against M. lysodeikticus at each step in the honey samples, which is defined as lysozyme-like activity. In order to find out whether there is any lysozyme protein in honey, it is necessary to perform other long-term experiments.
Mohrig and Messner [10]. Lysozyme-like activity was found in all the tested honeys. The highest lysozyme-like activity corresponding to the activity of 447.26 ug/mL and 159.74 ug/mL EWL was measured in multifloral honeys MAP and MP. The other varietal honeys have low lysozyme-like activity. Comparable values were obtained for the following honeys multifloral (MSa), willow (Sa), multifloral (MBr), sunflower (He) and plum (P), rapeseed (Br), lime (Tc), phacelia (Ph), honeydew (So), which were statistically significantly different from other samples (Figure 8).
Figure 8.
Lysozyme-like activity was determined by the radial diffusion assay and presented as an equivalent of EWL activity (µg/mL). Statistical differences are marked with different letters and their significance at p ≤ 0.001 with capital letters, p ≤ 0.05 with lowercase letters.
Lysozyme-like activity level was tested in all samples taken at various steps of honey preparation, i.e., after centrifugation, dialysis and lyophilization ( Table 3). The peptidoglycan digestion zone is shown in Supplementary Materials Figure S7. The highest lysozyme-like activity was observed in honey after centrifugation (2.3 ± 0.47 µg/ml EWL). Table 3. Lysozyme-like activity in honey samples.
HPLC Analysis of Phenolic Compounds in Honey Samples
The findings are presented in Table 4 and Figure 9. The presence of caffeic and syringic acid in various amounts was found in all tested honeys. Some honeys identified coumaric acid (in 45% of samples) and cinnamic acid (in 73% of samples). The highest content of caffeic acid was observed in the following honeys: phacelia (Ph)-356.72 µg/g, multifloral Sa (MSa) and multifloral Br (MBr)-318.9 µg/g, and cinnamic acid in willow honey (Sa)-11.9 µg/g. The content of coumaric and syringic acid in the honey samples did not exceed 10 µg/g. 9. Content of selected phenolic acids (µg/g) in the tested honeys. Figure 9. Content of selected phenolic acids (µg/g) in the tested honeys.
Discussion
Due to a widespread occurrence of multidrug-resistant (MDR) bacterial and fungal strains, there is an urgent need to look for antimicrobial substances. Nosocomial infections make up a very high percentage of postoperative complications and are very difficult to treat. Therefore, honey with its antimicrobial properties is a very promising substance with many valuable properties [21]. In the honeys tested in our study, similarly to earlier publications [22][23][24][25][26], several substances with antimicrobial properties were identified. Although honey has some limitations and cannot be used as a drug, it can still enhance drug treatment against MDR bacterial and fungal strains.
In honey, phenolic acids are one of the most important groups of compounds with antimicrobial activity. Phenolic acids and flavonoids were recognized in the 1990s as important antibacterial substances. In studies of various honeys from Burkina Faso, it was found that honeydew honeys had the highest content of phenolic compounds 113.05 ± 1.10-114.75 ± 1.30 mg GAE/100 g [27]. Moreover, the level of phenolic compound profile can be the marker for authentication of the botanical and geographic origin of the honey [25,28]. The composition of phenolic compounds depends on plant source. Therefore, in this study, the total phenol content was measured and the phenolic compound profile outlined using some of the most important typical compounds was illustrated. The syringic acid and vanillic acid as an example of compounds belonging to general group of benzoic acids and caffeic acid, p-coumaric acid as an example of compounds belonging to general group of hydroxycinnamic acids were selected. In our study, the highest amount of phenolic acids was found in the honeydew honey (808.05 µg GAE/g, Tables 1 and 4, Figure 9) with the highest antifungal properties aiming at A. niger (Figure 7). Among the tested phenolic acids, caffeic acid was the most abundant, which was found in the highest amounts in the following honeys: phacelia (Ph)-356.72 µg/g, multifloral (MSa) and multifloral (MBr)-318.9 µg/g ( Table 1). The highest bactericidal activity against S. aures was found in multifloral honeys MSa and MBr. Moreover, multifloral MSa honey at all concentrations showed high antifungal activity (A. niger). Additionally, the highest amounts of syringic acid and cinnamic acid were identified in rapeseed honey (Br) ( Table 4). In a study by Chong et al. [29], it was shown that caffeic and syringic acid had antibacterial and antifungal activity. In addition, caffeic acid was bactericidal against S. aureus [30]. On the other hand, cinnamic acid shows antifungal properties against A. niger, C. albicans and antibacterial, among others, against Mycobacterium tuberculosis and E. coli [14,31]. The abovementioned compounds are connected with the antimicrobial effect of the most effective honeys tested in our study. At the highest concentration (500 mg/mL), multifloral honey (MAP) showed the highest bactericidal activity against E. coli (inhibition zone: 11.9 mm), and multifloral honey (MSa) against S. aureus (inhibition zone: 9.9 mm). Additionally, multifloral honey (MBr) is effective against both bacteria: E. coli (inhibition zone: 9.6 mm) and S. aureus (inhibition zone: 8.4 mm) (Figures 4-6). The antimicrobial properties against bacteria and fungi are agreed to exist if the inhibition of zone is greater than 6 mm [32,33]. The honeys tested in our study also showed antifungal activity, e.g., on A. niger. However, there was no fungicidal activity against C. albicans and S. cerevisiae. The highest activity against A. niger was observed in multiflower (MSa) and honeydew (So) honeys (Figures 7 and S2). Most likely, due to the high content of phenolic compounds, multiflorous honeys had this high antifungal activity [34]. Furthermore, polyphenolic compounds can interact with other active molecules present in honey and their synergistic effect may be responsible for antibacterial activity of different honeys [35]. The activity against various bacteria, including Bacillus cereus, S. aureus and E. coli, was tested in a 75% and 50% solution of multifloral honey from Turkey [36]. The results indicated that at a higher concentration, multifloral honey showed bactericidal activity against S. aureus (inhibition zone: 0-7 mm) and B. cereus (inhibition zone: 0-6 mm). No activity was demonstrated in either concentration against E. coli [37]. In multifloral honeys from Spain, the activity against S. aureus was checked by the method of agar well diffusion in a 75% honey solution. Osés et al. [36] found that the tested honeys showed an inhibitory effect on S. aureus in the form of inhibition zones of bacterial growth 14.05 ± 2.31 mm. An experiment by Alvarez-Suarez et al. [38] tested the activity against S. aureus of multiflower honey from Cuba produced by two species of bees: Melipona beecheii and Apis mellifera. The authors found that honey produced by M. beeicheii showed about sevenfold the activity against this bacteria of honey produced by A. mellifera [38,39]. Honey in various concentrations (10%, 20%, 30%, and 100%) from Pakistan showed different degrees of activity against A. niger and Penicillum chrysogenum [34]. Moussa et al. [40] showed no activity against C. albicans honey from Algeria. By contrast, Irish et al. [41] found that different honeys inhibit clinical isolates of C. albicans, C. glabrata and C. dubliniensis. Hence, honey is important in combating fungal infections that arise in immunocompromised patients, which may lead to the development of opportunistic infections [5].
Osmosis is an important physical phenomenon connected with antimicrobial properties of honey. High sugar content exerts osmotic pressure on bacterial cells, which results in water loss in bacterial cells. Dehydrated cells are unable to grow and develop in hypertonic sugar solution [21,25,42]. Furthermore, osmotic pressure can affect the ability of bacteria to form biofilms [43]. The presence of sugars in honey can also interfere with bacterial quorum sensing [21]. Wahdan et al. [28] showed that fungi are more tolerant to osmosis compared to bacteria and the sugar solution did not inhibit the growth of C. albicans [39]. Low water content inhibits yeast fermentation and bacterial growth [26]. The composition of the honeys tested in our study consists mainly of sugars and water, and also in smaller amounts phenolic compounds and proteins. Water content in the tested honeys was within the normal ranges accepted for honeys according to the International Honey Commission [44], i.e., from 14.6 to 18.0% (Table 1, Figure 1). The highest content of glucose was recorded in phacelia honey (Ph)-53 ± 0.46 g/100g, while the highest content of fructose was found in multifloral honey (MAP)-43.57 ± 0.28 g/100g (Table 2). Moreover, rhamnose sugar was detected in the highest amount in multifloral honey (MAP), which showed the greatest activity against E. coli. Erlose was a characteristic sugar found in these honeys, which is formed by the action of invertase on sucrose. The presence of erlose in honey was first confirmed by White and Maher in 1953 [45]. Erlose is an intermediate trisaccharide in the metabolism of nectar sugars by honeybees [46]. The highest erlose content (8.26 g/100 g, Table 2) was recorded in rapeseed honey (Br), which showed the highest activity against B. circulans. Additionally, this honey had the highest sucrose content (4.96 g/100 g, Table 2) among the tested honeys. Also, in rapeseed honey from various regions of Poland, the presence of sucrose was identified: 0.5-2.4 g/100 g [47]. On the other hand, in rapeseed honey from Germany, no sucrose or erlose was detected [48].
Another important physical factor that affects the antimicrobial activity of honey is pH. Low pH ranging from 4.08 (lime honey-Tc) to 4.96 (willow honey-Sa) was observed in our study ( Table 1). The low pH in honey is due to the presence of organic acids in honey, which include gluconic acid with antimicrobial activity formed by the oxidation reaction of glucose by glucose oxidase [22,49].
In our study, we showed multifloral honey to work best against E. coli bacteria (MAP). It is also characterized by the highest content of proteins (116.80 mg/mL, Table 1) and lysozyme-like activity (447.26 µg/mL EWL, Figure 8) among all tested honeys. Lysozyme is active against Gram-positive bacteria by acting on peptidoglycan. Gram-negative bacteria, e.g., E. coli are not susceptible to the action of lysozyme due to the presence of the outer membrane. Based on the morphological and immunocytochemical studies by Wild et al. [50], it has been illustrated that lysozyme does not act on membranes but on E. coli cytoplasm, leading to its degradation. In order to clarify the action of lysozyme on E. coli, Wild et al. [50] additionally used cryotechnics. They found that lysozyme can bind to the outer membrane and penetrate the periplasmic space, possibly reaching the inner cell membrane. Moreover, Wild et al. [50] conducted antimicrobial tests which showed that lysozyme is bactericidal against E. coli, but does not completely break down the bacteria. Two years later, Pellegrini et al. [51] showed that lysozyme inhibits DNA and RNA synthesis. In addition, it has been found that lysozyme causes damage to the outer cell membrane and permeabilization of the inner membrane, which results in the death of E. coli bacteria. In contrast, ultrastructural studies showed no effect of lysozyme on bacterial morphology [51]. The mechanism of the bactericidal activity of lysozyme on Gram-negative bacteria requires further research.
There is an urgent need for new substances with antimicrobial capabilities against which pathogenic bacteria and fungi do not develop resistance [1,2,52,53]. Novel varietal honeys tested in our study show a broad spectrum of antibacterial and antifungal activities. This may suggest that the studied honeys may act as natural products that could reduce the effects of fungal and bacterial infections. Compounds in honeys, such as lysozyme-like and phenolic acids, i.e., coumaric, caffeic, cinnamic and syringic acids, played a key role in the health-benefit properties of honeys tested in our study. Furthermore, as in other studies, polyphenolic compounds can interact with other active molecules present in honey, and their synergistic effect may be responsible for antibacterial activity of different honeys [35].
Honey Sample Collection and Classification
The experiments were carried out with 11 honey samples originating in Poland, collected in 2018 and grouped in Table 1. The honeys were classified according to the standard methods recommended by the European Union [54]. Then, the honeys were grouped in terms of the dominant pollen or most common pollen in the honey sample (Supplementary Materials Table S1). The flowering periods of plants from which the pollens originated, were given after the Biolflor Database (Trait Database of the German Flora: http://www.ufz.de/biolflor accessed on 1 July 2022) and the possibility to collect given variety of honey [55,56].
Honey Sample Classification Using Pollen Analysis
Ten rams were weighed from each honey sample, 20 mL of distilled water was poured into them and then heated on a water bath until the honey samples completely dissolved. The obtained solution was subjected to centrifugation in an MPW 341 centrifuge with a horizontal rotor at a speed of 3000 rpm (MPW Med. Instruments, Warsaw, Poland). Next, the liquid was decanted, but about 5 mL of suspension was left. The solution was poured into smaller test tubes and centrifuged again, maintaining the previous parameters. The liquid was then decanted again, leaving 2 mL of suspension above the sediment of pollen grains. Fifty microliters of the suspension was taken and applied to microscope slides. Two preparations were made of each honey sample. The microscopic analysis was carried out with the Olympus CX21 microscope (600×) (Olympus, Shinjuku, Tokyo, Japan). An average of 300 pollen grains of nectariferous plants were counted and classified to the lowest possible taxon.
Pollen grains were classified into dominant pollen ≥ 45%, accompanying pollen between 16% and 45%, single pollen between 3% and 16%, and occasional pollen ≤ 3%. If the share of the leading taxa were more than or equal to 45%, such honey was classified as nectar-varietal honey.
Honey Sample Preparation
Two grams of each honey sample was weighed in sterile beakers and dissolved in 2 mL of sterile water. Samples prepared in such a manner were incubated at 37 • C for about 3 h in the incubator, stirring several times until the honey was dissolved completely. Immediately before use, honey samples were twice diluted with sterile water to obtain the following dilutions: 1:2 (500 mg/mL), 1:4 (250 mg/mL) 1:8 (125 mg/mL) 1:16 (62.5 mg/mL) which were used in further analyses.
Water Content
The water content of honey was checked with the PAL-22S refractometer (Conbest, Cracov, Poland). Each honey sample was thoroughly mixed and a drop of liquid honey was transferred to the prism of a refractometer according to the manufacturer instruction. Each honey sample was checked in triplicate.
Electrical Conductivity
The electrical conductivity in honey was measured with the CC-105 electrical conductivity meter (Elmetron, Zabrze, Poland) at 20 • C. Twenty grams of honey was dissolved in 100 mL of distilled water and in such solution the electrical conductivity of honey sample was measured. Each honey sample was checked in triplicate [44].
pH
The pH of honey sample was measured in a 10% honey solution using an analogue pH meter (HANNA Instruments, Olsztyn, Poland). Each honey sample was checked in triplicate.
Color Intensity
Honey color was determined using the Pfund scale according to the USDA (United States Department of Agriculture, United States Standards for Grades of Extracted Honey) classification [55]. Pure honey samples were heated at 60 • C in a water bath until their complete dissolution. Next, samples were placed in 10 mm cuvettes and the absorbance (λ = 560 nm) was measured, using deionized water as a blank. The absorbance results were multiplied by a 3.15 factor. The obtained results were compared to the values presented in Supplementary Materials Table S3 after [56] and the color of tested honeys was determined and presented in Supplementary Materials Table S4.
Total Phenolic Content
The content of phenolic compounds was determined with a spectrophotometric method using the Folin-Ciocâlteu reagent (Sigma-Aldrich, Saint Louis, MO, USA) [57,58]. One gram of honey sample was dissolved in 20 mL of distilled water. Five mL of 0.2N Folin-Ciocalteu reagent was added to 1 mL of honey solution. Then, after a 5 min incubation, 4 mL of 75% w/v aqueous sodium carbonate solution was added to the solution and incubated for 2 h at room temperature. After this time, the absorbance was measured (λ = 765 nm), using a distilled water as a blank. The total phenolic content was calculated on the basis of a standard curve prepared for known concentrations of gallic acid (5-100 µg/mL) (Sigma, EC 3.2.1.17) and was expressed in µg of gallic acid equivalent (GAE) per g of honey. All analyses were made in triplicate.
Sugar Analysis in Honey Samples
Sugar profiles of 11 honey samples were analyzed by HPLC using the Shimadzu chromatographic system (Kyoto, Japan) with the RID-10A refractive index detector. The mobile phase (Milli-Q water obtained using the Elix ® Essential 3 Water Purification System with Synergy ® UV Water Purification System, Merck Millipore, Darmstadt, Germany) was run at a flow rate of 0.6 mL/min at 75 • C through the REZEX RPM-Monosaccharide Pb 2+ column (300 × 7.8 mm, Phenomenex, Torrence, USA). The column was calibrated using sixteen carbohydrate standards, including mono-, di-and trisaccharides. Standard solutions of mono-, di-and trisaccharides: glucose, fructose, galactose, rhamnose, xylose, mannose, sucrose, turanose, maltose, celobiose, fucose, trehalose, melibiose, erlose, melezitose and raffinose (Sigma-Aldrich, Saint Louis, MO, USA) were used for interpretation and quantification of sugars in the honey samples. Sugar concentrations were expressed in g/100 g honey.
Protein Content
The protein content was determined by the Bradford method [59] in 50% (w/v) honey samples solutions. Twenty microliters of such a solution was added to 1 mL of Bradford's reagent (Bio-Rad, Hercules, CA, USA) (Coomassie Brilliant Blue G-250), using deionized water as a control. After 5 min of incubation, absorbance was measured at 595 nm using bovine serum albumin in deionized water as a standard (0.1-0.9 mg/1 mL).
Microorganisms Used in the Antimicrobial Assays
The antimicrobial activity of honey samples was tested against the following bacteria:
Antifungal Activity Assay
Antifungal activity was detected by a diffusion well assay against A. niger using PDA plates (8 mL) containing about 1.6 × 10 6 spores/mL of the medium. Each well on the petri plates was filled with 5 µL dilutions of honey (1:2-1:16). Agar plates with PDA medium were incubated for 24 h at 28 • C, next the diameters of A. niger growth inhibition zones were measured with a digital caliper (Pro, Bielsko-Biała, Poland). The obtained results in millimeters were calculated on equivalent of amphotericin B (µL/mL).
In the case of C. albicans and S. cerevisiae, 24 h fungi culture was standardized to 0.5 McFarland. After incubation, the reaction mixture was sown in the amount of 100 µL on agar plates (1.6%) with Sabouraud medium (10 mL). The appropriate dilutions of honey (1:2-1:16) were added to the wells in the medium. The plates were incubated at 37 • C for 24 h. After incubation the diameters of growth inhibition zones in millimeters were measured with a digital caliper (Pro, Bielsko-Biała, Poland).
Lysozyme-like Activity of Honey Samples
Lysozyme-like activity of honey samples was checked using agarose plates containing freeze-dried Micrococcus lysodeikticus (Sigma-Aldrich, Saint Louis, MO, USA) [10].
The activity was tested in the following steps of preparation of the honey samples: in the mixture after overnight shaking and centrifuged (sample 1); in the supernatant after dialysis (sample 2); in the lyophilized samples (sample 3) (Supplementary Materials Figure S7).
Each well on the petri plates was filled with 5 µL samples, and next plates were incubated at 28 • C for 24 h. After this time, peptidoglycan digestion zones were measured. The lysozyme-like activity was defined as an equivalent of EWL activity (µg/mL) (Sigma, EC 3.2.1.17). Similarly, for control plates, wells were filled with egg-white lysozyme (EWL) (Supplementary Materials Figure S8). The level of lysozyme-like activity was calculated on the basis of a standard curve prepared for known concentrations of lysozyme EWL (µg/mL) (Sigma-Aldrich, Saint Louis, MO, USA).
Additionally, lysozyme-like activity level was tested in all samples taken at various steps of honey preparation, i.e., after centrifugation, dialysis and lyophilization (Supplementary Materials Table S5). Peptidoglycan digestion zone is shown in Supplementary Materials Figure S6. The level of lysozyme-like activity was calculated on the basis of standard curve prepared for known concentrations of lysozyme EWL (µg/mL) (Sigma-Aldrich, Saint Louis, MO, USA) (Supplementary Materials Figures S7 and S8).
Antimicrobial Activity Connected with Hydrogen Peroxide in Honey Samples
Agar plates (0.7%) with the LB medium (10 mL) (LB; Biocorp, Warszawa, Poland) containing appropriate bacterium (150 µL) in the amount of 1.5-4.2 × 10 6 were used to detect the antimicrobial activity connected with hydrogen peroxide in honey samples. Each well on the petri plates was filled with 5 µL samples containing appropriate honey dilutions as a control and 5 µL samples containing appropriate honey dilutions with catalase (the enzyme degrading hydrogen peroxide) (Sigma-Aldrich, Saint Louis, MO, USA), as a test samples (Supplementary Materials Table S5). Next, plates were incubated at 37 • C for 24 h and the diameters of bacterial growth inhibition zones were measured with a digital caliper (Pro, Bielsko-Biała, Poland).
Positive control samples: Each well on the petri plates was filled with 5 µL of freshly diluted 10%, 5%, 3%, and 1.5% hydrogen peroxide (Chempur, H 2 O 2 -34.01 g/mol 30% pure p.a. CAS: 7722-84-1) diluted in sterile water or honey (number 11), next the agar plates were incubated for 24 h at 37 • C. The diameters of bacteria growth inhibition zones were measured with digital caliper (Pro, Bielsko-Biała, Poland) and expressed in millimeters. The experiment was repeated three times.
Solid Phase Extraction of Honey Samples
Honey samples (5 g) were mixed with 20 mL of deionized water adjusted to pH 2 with HCl and stirred in a magnetic stirrer for 15 min. The samples were then filtered to remove the solid particles. Extraction of phenolic compounds was performed with the Visiprep™ SPE Vacuum Manifold (Sigma-Aldrich, Saint Louis, MO, USA). The SPE cartridges used were Strata-X (500 mg) obtained from Phenomenex (Warsaw, Poland). They were conditioned by washing with 15 mL of methanol, and 20 mL acidified water. Afterwards the filtrated honey sample was passed through a cartridge, which was then washed with 20 mL of deionized water to remove all sugars and other polar constituents of honey. The adsorbed compounds were eluted with 5 mL methanol [60].
HPLC Analysis of Phenolic Compounds in Honey Samples
The concentration of phenolic compounds was quantified by high performance liquid chromatography (HPLC, Agilent Infinity 1260 equipped with DAD detector) (Agilent Technologies, Santa Clara, CA, USA). The HPLC system fitted with Zorbax Eclipse Plus C18 column (100 mm × 4.6 mm × 3.5 µm, Agilent Technologies, Santa Clara, USA) was operated at 40 • C and the flow rate of 1 mL/min. Each 1 µL sample was injected using an autosampler. The mobile phase consisted of 50 mM formate buffer adjusted to pH 4.1 using 1 M NaOH (eluent A) and methanol (eluent B). The elution included an isocratic step with 20% v/v of eluent B for 1 min after injection of the sample; afterwards, a gradient step of elution (10 min) was applied in the range of 20-90% of eluent B. The separation was ended within 3 min of isocratic elution with 90% of eluent B. The total run time of each analysis was 14 min. After each analysis, a 4 min post run was conducted with 20% of eluent B to restore the start conditions of the analysis.
The components peaks were identified by comparison of retention times of the commercially available standards of the following phenolic acids: p-coumaric, caffeic, syringic, vanillic and cinnamic acids. Detection was performed at 280 nm. Agilent OpenLAB CDS ChemStation LC and Ce Drivers (A.02.10 (026) version) software were used for data processing and reporting.
Statistical Analysis
Normal distribution of variables was tested with Shapiro-Wilk tests, and given that not all continuous variables were normally distributed, Kruskal-Wallis H tests (one-way ANOVA on ranks) were performed to compare the mean and standard deviations according to the inhibition abilities of various honey types in four concentrations against selected bacteria and fungi. The results were considered significant at p < 0.05. Statistical differences are marked with different letters and their significance at p ≤ 0.001 with capital letters, p ≤ 0.05 with lower case letters. Statistical analyses were performed using the Statistica 13.2 PL package. To analyze the relationships between species richness inhibition honey activity and physicochemical parameters of honey samples and sugar content, we used multivariate ordination methods in CANOCO version 5.0 package [61,62]. According to the length of the gradient from a preliminary detrended canonical analysis (DCA), a linear model, the principal component analysis (PCA) was used. In the PCA, honey samples were entered as cases and physicochemical parameters and sugar content as dependent variables.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ijerph20032458/s1, Table S1: Honey abbreviations; Table S2: Melissopalynological analysis of different types of honey; Table S3: Color designations of honey; Table S4: Color of tested honey; Table S5: Positive control: diameters of bacteria growth inhibition zones [mm]; Figure S1: Antimicrobial activity of honey; Figure S2: Antimicrobial activity of honey; Figure S3: Catalase. Inhibitory activity of tested honey to B. circulans; Figure S4: Catalase. Inhibitory activity of tested honey to E. coli; Figure S5: Catalase. Inhibitory activity of tested honey to S. aureus; Figure S6: Antimicrobial activity connected with hydrogen peroxide in honey samples; Figure S7: Lysozyme-like activity. Figure S8. Lysozyme-like activity; Figure S9: SDS-PAGE analysis of honey proteins/peptides (sample 3); Figure S10: Sugar analysis in honey samples. References [63,64] are cited in the Supplementary Materials. | 10,560.2 | 2023-01-30T00:00:00.000 | [
"Chemistry",
"Environmental Science"
] |
Multiagent Routing Simulation with Partial Smart Vehicles Penetration
/e invention and implementation of smart connected cars will change the way how the transportation networks in cities around the world operate. /is technological shift will not happen instantaneously—for many years, both human-driven and smart connected vehicles will coexist. In this paper, using a multiagent simulation framework, wemodel a complex urban transportation system that involves heterogeneous participants. Vehicles are assigned into two groups: the first one consists of smart cars and the second one involves regular ones. Vehicles in the former group are capable of rerouting in response to changes in the observed traffic while regular ones rely on historical information only. /e goal of the paper is to analyze the effect of changing smart cars penetration on system characteristics, in particular, the total travelling time. /e smart car routing algorithm proposed in this paper reduced travelling time up to 30%. Analysis has shown that the behaviour of the system and optimal configuration of underlying algorithms change dynamically with smart vehicles penetration level.
Introduction
Due to an increasing number of traffic participants, especially in urban areas, the current transportation infrastructure becomes insufficient. Big cities suffer great economic losses and resident dissatisfaction from congestion and reliability issues [1]. As a result, Traffic Management Systems (TMS) effectively increasing road network efficiency are in high demand. With the advancing technology, the available solutions become more comprehensive and provide a faster, dynamic feedback. Although the vehicular technologies are in a period of an intense development, innovations require many years to become common in the automotive market. According to Lazard and Roland Berger, the penetration of highly automated vehicles in 2035 will reach between 5% and 26% accordingly in pessimistic and optimistic scenarios [2]. Since the transition from manual to automated vehicles will span many years, computer simulations provide valuable tool for testing the performance and the behaviour of novel systems in various conditions [3,4]. With the increasing smart cars penetration, beneficial effects of the underlying management system are expected to increase as well. However, this important issue is rarely addressed by researchers who tend to assume a full system coverage [5,6]. Smart vehicles penetration rate has already been applied to study particular transportation characteristics, e.g., traffic flow stability for mixture of connected and autonomous vehicles [7] or reaching travelling time optimum in artificial Pigou's and Braess' networks while using social network data [8]. Nevertheless, implementation of penetration rate in real-world urban setting simulation is still not covered in detail in the literature. e contribution of this paper is to examine impact of changing smart cars penetration on various transportation system characteristics, in particular the total travelling time. Experiments are conducted using a multiagent congestion detection and routing simulation SmartTransitionSim.jl developed for this research project. e code is available with Open-Source license on GitHub (https://github.com/KrainskiL/ SmartTransitionSim.jl). e transportation process in traffic models is commonly programmed either on individual participant level, e.g., vehicles, pedestrians (microscopic), as groups of similar individuals (mesoscopic), or on system-wide level (macroscopic). Advanced frameworks combine concepts of multiple layers and model interactions between them [9,10]. e agent-based modelling is well suited for microscopic simulations of traffic patterns since virtual agents naturally and intuitively represent traffic participants [11,12]. Moreover, the multiagent models are becoming more popular also due to increasing availability of dedicated simulation software, e.g., SUMO, MATSim, SMARTS frameworks. However, accurate and detailed simulation of individual's behaviour often requires high computing power and may render big scale simulations not feasible [13]. Researchers address performance problems by developing efficient algorithms using high-performance languages and incorporating modern computing techniques (e.g., distributed and cloud computing) in their implementations [14]. In the introduced framework, we adopted purely microscopic approach and focused heavily on performance optimization (more details can be found in Section 2.3). e concept of agent-based modelling connects well with the current TMS research trend based on two-way communication between an external infrastructure and vehicles (V2I) and between multiple vehicles (V2V). A theoretical design assumes implementation of the system within Vehicular Ad Hoc Network (VANET) consisted of three main components: in-vehicle On-Board Units (OBU) embedded with sensors, processing units and wireless interfaces, Road Side Units (RSU) creating communication infrastructure, and Traffic Management Center providing centralized processing power and storage [15]. Based on components used in a system design, solutions can be classified into infrastructurefree and infrastructure-based. e infrastructure-free systems are decentralized and rely on V2V communication to share information about traffic in close vicinity to vehicle. In contrast, centralized infrastructure-based systems focus on utilizing RSUs and an optional Traffic Management Center (TMC) to provide vehicles with wide area traffic data through V2I communication. De Souza also provides second breakdown level based on delivered service [16]: (1) Infrastructure-free: (a) cooperative congestion detection, (b) congestion avoidance, and (c) accident detection and warning (2) Infrastructure-based: (a) traffic light management, (b) route suggestion, (c) congestion detection, and (d) rerouting and speed adjustment e conceptual VANET design may be adopted in realworld applications using modern communication and computing technologies. Researchers and engineers are preparing technical background for vehicular networks by developing dedicated standards (e.g., IEEE 802.11, IEEE 1609.2) and testing various communication technologies like LTE or DSRC in a transportation environment [17]. e most recent research focuses on 5G compliant technologies which have become a competitive alternative due to the high capacity, ubiquitous coverage, and high reliability. e Next Generation Mobile Alliance propose strict requirements for the 5G-based technology, in particular 100% coverage, 99.99% network availability, and up to 1 ms round-trip delay, which are sufficient for wide range of VANET-based applications [18,19]. In recent report, Crainic et al. pinpoint other key technologies (e.g., cloud computing, smart grids) required for successful development of intelligent transportation infrastructure and smart cities projects in general [20].
Despite solid technical foundations, the majority of TMS projects are on proof-of-concept or experimental level [21,22]. However, initial tests have shown promising results. For example, both dynamic truck platooning system and vehicles routing Eco-Signal Operations provide fuel consumption decrease up to 10% [23,24]. Additionally, already running in a dynamic fashion, traffic light management system called Midtown in Motion reduces overall travel time during rush hours by 10% [25].
Psychological and social aspects of transportation are usually neglected in Traffic Management Systems design which focuses on simple and quantitative measures of performance. From the perspective of the society, TMS should take into consideration overall welfare and happiness. As highlighted at the beginning of this section, spending too much time travelling in congested and uncomfortable conditions reduces well-being of the participating commuters [26]. Importance of personal transportation drives researchers to study determinants of traffic participant decisions and travel satisfaction. Often quantitative notion of the value (the cost) of time is used in research [22,27]. Standard factors breakdown assumes that the travel time unit cost varies according to type of trips, traveler preferences, and travel conditions [28]. e majority of reports classify trips as work/business or nonwork/personal, but depending on the methodology used more detailed structures may be used [29]. For inferring quantitative conclusions about traveler preferences, often questionnairebased approach is used [30,31]. Research has shown that the cost of time rises significantly if the total travelling time surpasses 90 minutes per day [32]. at conclusion aligns with Marchetti's Constant rule, which states that people aim to travel one hour each day and switch attention to other transportation characteristics (e.g., trip conditions) when travelling less than that [33][34][35]. Unfavorable traffic conditions, especially unexpectedly congested roads, further increase the cost of the travelling time. For highly congested traffic, the value of time may grow up to 50% for automobile users and 100% for bus passengers, pedestrians, and cyclists [36]. Analyses of transportation reliability impact by government agencies show that the uncertainty of trip length and occurrence of unexpected delays result in an additional time cost increase [37,38].
Taking the above described factors into account, in this paper we test how the increasing adoption of smart cars, which can adaptively update their routing decisions using information obtained from a TMS, influences the expected congestion and ultimately total travelling time of commuters. In Section 2 we discuss the design of the agent-based simulation we have developed. Next, in Section 3, we present the details of the experiment we have conducted using this simulation and in Section 4 we discuss the obtained results. Finally, Section 5 concludes and presents outlooks for further research.
Simulation Details
In this section we describe our approach to modelling of the traffic system and what our assumptions are about agent behaviour rules and communication capabilities. Finally, we describe the design of the simulation framework we have developed that implements these assumptions.
Traffic System.
We assume that the road network is represented by a directed graph G � (V, E) that consists of the set of n vertices V � v 1 , v 2 , . . . , v n representing junctions and the set of k edges E � e 1 , e 2 , . . . , e k representing roads. Every edge (directed arc) e i ∈ E is defined by two vertices e i � (v s , v e ), v s , v e ∈ V, v s ≠ v e corresponding to junctions between given road segments. Additionally, every edge e i is described with the following parameters: max -maximum vehicles density on the i-th road, that is, the maximum number of vehicles allowed on the i-th road segment, calculated as follows: where s i is the number of lanes available on the road segment and c denotes the average space reserved for one vehicle (in meters) e road network is populated with agents representing vehicles moving between selected vertices with respect to edge's direction. In any given time t, agents are assigned to one edge and move towards ending vertex with current edge velocity value calculated as follows: where ρ (i) t is the current density on the i-th edge at time t and V min is the fixed, minimum speed. e minimum speed is introduced to prevent edge lock-down if maximum density is reached. Let us note that, due to step-wise character of the simulation, density may temporarily exceed maximal density which would result in negative speed value. To address the problem, speed multiplier in equation (2) is bounded from below by zero. e equation we use is a slightly modified version of the classical Lighthill-Whitham-Richards traffic flow model [39,40]. Vehicles density is common congestion predictor present also in modern traffic flow research [41]. e proposed framework is based on discrete-events simulation (DES); thus system state and simulation clock are updated when particular events occur rather than in arbitrary chosen time interval. Discrete-event based approach was reported in earlier research to produce more accurate results compared to standard discrete-time simulations due to character of numerical calculations [42].
Agent Behaviour and Communication
Design. An agent represents an individual vehicle travelling in the road network G from a starting vertex v S to a destination vertex v D . Agents aim to select the optimal route that minimizes the travelling time between the two assigned nodes. e route from nodes v S to v D is defined as a sequence of n consecutively adjacent nodes (or, alternatively, n − 1 edges): e time required to traverse the i-th edge in simulation time s is equal to the ratio of the length of the edge and the current speed; that is, Hence, the total time, defined as in equation (3), required to travel the route is given by the following formula: With known travel time on each edge, fastest route is determined using A-star graph traversal algorithm, commonly used in routing simulations [43]. However, velocities in the system change dynamically with agents movement, see equation (2), deprecating initially chosen paths. Depending on how routes overlap, the capacity of a particular road may be utilized more heavily than others, thus creating traffic congestion and slowing down all vehicles present on the edge. is effect is more apparent when multiple agents start from nearby vertices and travel to similar destinations, which resembles rush hours scenario when people commute from office or industry districts to residential areas. Bottlenecks may also naturally appear on big arteries selected by many agents due to high speed limit and convenient location.
Each agent is generated with a fixed type: smart or regular. e type determines the individual's behaviour, available traffic information, and route optimization mechanisms. All agents possess full knowledge of static road network characteristics-segments lengths ℓ i and maximum speeds v (i) max . We assume that regular agents calculate travelling time with the average speeds obtained from the "previous day" information about the traffic (short-term memory). Specifically, the average from the "previous day" is calculated based on speeds recorded every 30 seconds in scenario where all agents used speed limits to pick fastest routes with given starting and ending vertices. For smart cars, we assume that vehicles additionally receive full information about the current velocities on roads with Journal of Advanced Transportation fixed time interval and may reroute based on local, on-board calculations, immediately after receiving data.
In our framework, we may take that all routing decisions (for both regular and smart cars) are based only on on-board computer calculations; therefore all agents can be considered as autonomous vehicles, differing only with the amount of available information. However, we could equivalently assume that regular cars are human-driven, where a driver makes a routing decision based on historical traffic data. e crucial distinction between regular and smart cars lies in the amount of information they have at the moment of making routing decisions. e utilization of historical data supports more even traffic distribution and, as a result, shorter travelling time in the system. However, the deterministic routing approach may lead to undesired outcomes [44]. For example, agents heading towards a similar direction tend to choose overlapping routes, switching congestion to new area instead of alleviating it. In order to reduce this undesired effect, probabilistic approaches such as the random k-shortest path or the entropy balanced k-shortest path may be implemented [45]. More advanced and computationally intensive algorithms such as metaheuristics can also be applied to the routing problem [46]. In our system, we assume that regular agents apply the k-shortest path algorithm with probabilities assigned using the Boltzmann distribution; see equation (6). Regular vehicles follow initially selected routes until they reach their destinations as they obtain no additional information during the simulation. We designed regular agents to provide a simple representation of currently used vehicular navigation systems.
Let (R 1 , R 2 , . . . , R k ) be a series of k-shortest routes from the next node on agent's current route to the destination node. Routes are calculated using Yen's algorithm [47] with an assigned travel time t i based on equation (4). Time values are ordered such that t 1 ≤ t 2 ≤ · · · ≤ t k− 1 ≤ t k and normalized in order to remove influence of absolute length differences on probability calculations. Normalized time values t N i express fractions of the longest time t k ; that is, values are positive and t k � 1: e probability p i for selecting the i-th route is calculated as in equation (6). e corresponding probability is higher for routes with a shorter travelling time, but the behaviour of the distribution may be controlled by parameter T. If T is close to 0, the probability assigned to the fastest route approaches one, while large values of T yield distributions that are close to uniform. Example of parameter T influence on the routes probabilities is provided in Table 1. Please note that the probabilities are also affected by dynamically changing travelling time on a given set of routes (see equation (4)): e smart agents inherit all route optimization mechanisms from agents of regular type but additionally utilize "smart" rerouting service. We assume that smart vehicles receive full information about the current velocities with fixed time interval and may reroute based on local, on-board calculations, immediately after receiving data; thus frequency of rerouting is controlled with update period value and no other trigger for rerouting is considered. Moreover, the smart individuals predict position where they expect to receive next update and change the route only on short fragment between the next junction and the following junction after predicted location. Such mechanism ensures that rerouting will have a meaningful impact on the time reduction-the smart agents scale decision boundaries based on a point of receiving updated speed values. Routes in the kshortest path algorithm between two given vertices are calculated on demand but, due to high computational complexity of the algorithm, received set of routes is stored for possible reuse by other agents (see Section 2.3). With accurate weights (velocities), smart agents may divert from congested roads and effectively choose a faster path to their destination instead of relying on a biased estimation used by regular agents. e effectiveness of rerouting is expected to rise with decreasing update interval as agents reroute more frequently.
In order to reduce overlap of paths, the k-shortest path algorithm is applied every time rerouting is triggered but routes with time twice as big as shortest route are removed from the set. While rerouting on short distances, time differences between calculated routes are much higher than with distant endpoints. In that case, the k-shortest path algorithm may lead to increased travelling time compared to regular agents. Additionally, in case of one-segment rerouting, path is forcefully extended to two-segment path to introduce viable alternatives for the k-shortest path algorithm.
Additional information provided for smart agents comes from a 5G VANET-based Traffic Management System focused on congestion detection and a rerouting service. e proposed solution's infrastructure (see Figure 1) consists of the following components: (1) In-vehicle On-Board Units (OBUs) capable of calculating new route to destination point and sending vehicle's velocity to external units (2) Road Side Units (RSUs) acting as brokers providing partial velocity data to Traffic Management Center (TMC) and aggregated to OBUs (3) Centralized TMC aggregating and sending back data obtained from RSUs All infrastructure components are considered to be 5Ggrade; thus 100% area coverage, no data loss, and insignificant
Simulation Framework.
We have implemented the simulation framework called SmartTransitionSim.jl and make the code available with Open Source license on GitHub (https://github.com/KrainskiL/SmartTransitionSim.jl). e simulation software is implemented in the Julia programming language. OpenStreetMapX.jl (https://github. com/pszufe/OpenStreetMapX.jl) package is used for parsing OpenStreetMap map files into directed graph with possibility of caching them for faster execution. e package also provides utility functions to operate on loaded graph, e.g., coordinates conversion and edge characteristic extraction. e Julia language provides a simple but comprehensive syntax, so SmartTransitionSim.jl may be easily modified for personal use. Documentation for the current version is also available in the GitHub repository. e framework was optimized in terms of performance using. Major performance tweaks include the following: (i) Yen's algorithm is based on custom, fast A-star implementation-5 times performance improvement over a standard implementation (ii) Routes calculated by the k-shortest paths algorithm are saved for future reuse (memoization technique)leading to up to 15 times faster simulation execution in comparison to no-memoization (iii) Simulations use common, separately generated agents pools-halved overall running time We have designed the simulation tool in such a way that the simulations can be executed in a distributed fashion.
Additionally, the simulation model has been adjusted to work with KissCluster software (KissCluster is available at https://github.com/pszufe/KissCluster.) that can be used to manage the distributed simulation execution and the data collection process in the Amazon Web Service cloud. e agents are generated with both starting and ending nodes chosen randomly from a given rectangle area, designated by a set of geographic coordinates within provided map bounds. All agents are generated at once and no further vehicles are added during the lifespan of a simulation. With that assumption, the simulation emulates morning or evening rush hours when congestion is usually dense and effective traffic management can provide highest time reduction benefits. User controls "wave" direction with appropriate starting and ending areas. Other input parameters for simulation run are listed in Table 2.
e population of agents consists of N individuals with an α fraction of smart agents (smart agents penetration). Smart vehicles receive speeds update every U seconds and reroute by picking one of k fastest routes based on the Boltzmann distribution with a T parameter. e simulation can work in two modes: base and smart (see Figure 2). In the base mode, only regular agents occur and VANET functionalities are disabled. e scenario serves as the baseline for comparison with smart scenario where both regular and smart agents occur (Figure 2(b), Algorithm 1). Effectiveness of implemented TMS is measured as the percentage of the total time reduction (difference in the sum of the travelling times of all agents) between the smart and the base scenario with fixed input parameters.
Experiment Setup
e evaluation of the proposed model was conducted on a map of San Francisco in California, USA. We assumed scenario of evening commuting from financial (blue area) to residential district (red area) (Figure 3). e parameter grid was created as the Cartesian product of the parameters values described in Table 3. On every computing node, common pool of 10,000 agents was generated and sampled by consecutive simulation processes. Due to the probabilistic nature of the model, simulations were repeated 3 times for each parameter combination-in total, 54,000 simulation runs were conducted (the final simulation run took a total of 1, 500 AWS EC2 vCPU computational hours. An additional 10,000 vCPU-hours have been used to calibrate and validate the model). Repetitions value was deemed sufficient, considering moderate average coefficient of variation for time Data: Transportation network graph, starting/ending area coordinates, parameters (Table 2) Result: Array of vehicles travel time (1) generate VEHICLES population with source and destination location; (2) calculate initial routes (speed limits) and run "previous day" simulation; (3) calculate initial routes (average speeds from line 2); (4) SIMULATION_CLOCK :� 0; (5) repeat (6) EVENT_TIME, EVENT :� findmin(NextEdge(),NextUpdate()); (7) SIMULATION_CLOCK :� SIMULATION_CLOCK + EVENT_TIME; (8) if EVENT �next_edge then (9) UpdateAgentsAndVelocities(); (10) if NODE � DESTINATION then (11) RemoveVehicle(); (12) end (13) end (14) if EVENT �next_update then (15) foreach vehicle ∈ VEHICLES do (16) if vehicle is smart then (17) KShortestPathRerouting(); (18) end (19) end (20) end (21) Until active(Vehicle) � 0; ALGORITHM 1: Simulation pseudocode. 6 Journal of Advanced Transportation (1) How does the total travelling time within the city change with increasing smart cars penetration and the number of agents? Congestion avoidance mechanisms provide a significant advantage for smart vehicles but, since rerouting decisions are made locally, individuals in smart populations may diminish gains of other smart agents. (2) Does the route overlap issue occur? at is, is the kshortest algorithm necessary? If yes, is the algorithm effective in distributing the traffic? Which value of T provides a near-optimal trade-off between an ability to avoid traffic congestion and rerouting to longer paths? (3) How does the time reduction change with decreasing update period? With more frequent rerouting, vehicles may react more effectively to changes of traffic conditions, although competition mechanism between smart agents may intensify as well.
Experiment Results
e obtained results prove a significant impact of routes overlap and the smart agents competition problem in scenarios with high smart cars penetration. All time reduction values show the difference between overall travelling time in scenario with a given α and the base scenario without smart agents (α � 0). In smart populations with the deterministic rerouting (k � 1), unmitigated overlap issue tremendously affects TMS efficiency. Travelling time reduction effect is near 0% and exhibits high variance (Figure 4(c)). With a mixed agent structure, system quality is affected by the overlap but routing is effective even with the deterministic approach (Figure 4(b)). In scenarios dominated by regular agents, the competition mechanism has no significant impact on travelling time but results are in general less stable compared to scenarios with higher smart agents ratio (Figure 4(a)).
Based on the gathered data, the k-shortest paths algorithm alleviates detrimental effects of overlapping routes. It is worth noting that with smaller smart agents penetration only two routes are required to reach a peak performance while, with high α, similar results were obtained with pool of two, three, or four paths. e behaviour of the algorithm may be also controlled with regularization parameter T. Time reduction for T � 1.0 and T � 10.0 is similar, while for T � 0.1 results are clearly inferior. Since the small value of T favors fast routes, the k-paths algorithm distributes traffic less efficiently.
In line with expectations, time reduction effectiveness increases with the number of agents. With 3,000 vehicles, the total travelling time was reduced up to 15% while with 7,500 agents maximum value reached 30%. Independently of the population size, the highest system performance was recorded for the penetration of smart cars around 85% ( Figure 5). It is worth mentioning that the largest time reduction effect is reached for simulations with k � 2 ( Figure 5(a)) and performance is declining with increasing k (that is, heatmaps become more yellow). e observation confirms that additional routes in the k-shortest path algorithm do not contribute much to alleviating congestion but create possibility of choosing significantly slower path.
e results also confirmed that the updating interval plays a key role in increasing the performance of the Traffic Management System. e difference, measured in the percentage of time reduction, between 50 and 300 seconds update period reached up to 10% percent and was higher for smaller population of agents (Figure 6(a)). For all population Journal of Advanced Transportation size and smart cars penetration, the biggest leap in system performance occurred between 250-and 200-second update interval. By contrast, for smaller interval values, the difference becomes insignificant (Figures 6(b)-6(d))-update period reaches optimal state near 5 s value. However, since the routing mechanism is closely linked with traversed edges length and map characteristic, the optimal value may change based on set scenario. For 7,500 agents, peak percentage travelling time difference reached 32% for both 50 s and 100 s update intervals, indicating that the optimal update period value is between 50 and 100 seconds. is factor should be taken into account when designing the communication protocol and polling mechanism for communicating vehicles.
In order to finish the analysis, let us mention that the simulation framework was validated against multiple scenarios and design settings. Numerous routing rules and agent characteristics (in particular, initial route generation process) were tested and refined. Investigation took into consideration also criteria for starting agents movement, multiple rerouting triggers, and additional VANET infrastructure mechanisms. roughout the development process, the framework was tested in different map settings (e.g., Warsaw and Winnipeg urban areas) to verify that the simulation design provides consistent, interpretable, and meaningful conclusions. Validation scenarios included analysis of the parameters as in Table 3 and various starting and ending areas per each map. We estimate that 50,000 simulation runs were conducted to refine and validate the design. Numerical results obtained from the experiments mentioned above are qualitatively consistent with the presented outcomes; thus in the paper we focused on detailed exposition of San Francisco setup.
Conclusions
As smart cars adoption will likely take many years, it is important to validate underlying intelligent systems under different penetration rates. In order to fill the gap in the related work, we have developed novel microscopic simulation framework for assessing Traffic Management Systems under varying smart vehicles penetration level. We proposed centralized TMS based on the k-shortest path algorithm and conducted experiments using the framework. e experiments have shown that the proposed service can significantly reduce the travelling time in urban environment. By controlling the simulation parameters, the system performance may be finetuned. Moreover, the analysis revealed that an increasing smart cars penetration activates mechanisms connected with the underlying algorithms and the system characteristics may differ depending on the fraction of smart units. In particular, the optimality of the proposed parametrization of path selection probabilities (introduced to avoid a situation where too many agents take the same overlapping routes) depends on the level of smart car penetration. In practice, it means that the smart vehicle movement algorithm should be tuned when the transportation ecosystem changes. Hence, considering that the transition to fully automated vehicles will span a number of years, assessing intermediate effects should be an important stage of designing modern transportation systems. Additionally, simulation results clearly show that, with the increasing volume of traffic, the role of smart vehicles in reducing the congestion is increasing. Moreover, the marginal value of better communication system in reducing the congestion is also increasing with the bigger smart car penetration-again, this value is greater with cities of smart congestion.
Future model extensions may include a more sophisticated rerouting algorithm with individuals' decision based on the regional (vehicle clusters) or global communication between agents and VANET infrastructure. A more advanced design is required to tackle the problem of uneven traffic distribution and the competition between smart agents. Next possible framework modifications include an implementation of the value of time characteristics as a performance measure instead of a simple time reduction. We assume that the value of time of all agents is equal but, from an economic or social point of view, introducing an additional heterogeneity may be beneficial, e.g., higher time cost for public service or delivery vehicles. Another consideration is the generality of the conclusions. Presented outcomes qualitatively held for particular scenario of San Francisco and other experiments in various setups, as described in Section 4. Of course, as in any simulation study, the generality and stability of the conclusions is a topic that may be studied deeper in specific contexts of user interest. For this reason, we provide all source codes of the framework we have used as an open-source project to allow interested parties to implement or modify our proposed model in their own research.
Data Availability
e OpenStreetMap map data and Julia script used to support the findings of this study are available from the corresponding author upon request. Simulation framework used in the script is available in GitHub repository (https:// github.com/KrainskiL/SmartTransitionSim.jl).
Conflicts of Interest
e authors declare that they have no conflicts of interest. | 7,061.6 | 2020-03-03T00:00:00.000 | [
"Engineering",
"Computer Science",
"Environmental Science"
] |
Fabrication and Characterization of Near Infrared Molybdenum Disulfide/Silicon Heterojunction Photodetector by Drop Casting Method
In this work, a highly efficient, molybdenum disulfide (MoS2) based near infrared (NIR) heterojunction photodetector is fabricated on a Si substrate using a cost-effective and simple drop casting method. A non-stoichiometric and inhomogeneous MoS2 layer with a S/Mo ratio of 2.02 is detected using energy dispersive X-ray spectroscopy and field emission scanning electron microscope analysis. Raman shifts are noticed at 382.42 cm-1 and 407.97 cm-1, validating MoS2 thin film growth with a direct bandgap of 2.01 eV. The fabricated n-MoS2/p-Si photodetector is illuminated with a 785 nm laser at different intensities, and demonstrate the ability of the photodetector to work in both regions, the forward biased and reverse biased from above 1.5 V and less than -1.0 V. The highest responsivity, R is calculated to be 0.52 A/W while the detectivity D* is 4.08 x 1010 Jones for an incident light intensity of 9.57 mW/cm2. The minimum rise and fall times are calculated as 1.77 ms and 1.31 ms for an incident laser power of 9.57 mW/cm2 and 6.99 mW/cm2 respectively at a direct current bias voltage of 10 V. The demonstrated results are promising for the low-cost fabrication of a thin MoS2 film for photonics and optoelectronic device applications.
Introduction
Transition metal di chalcogenides (TMDC) have recently drawn a lot of interest owing to their exceptional optical and electronic properties, which give them value for optoelectronic applications. TMDC have a typical structure of MX2, where M being a transition metal, commonly Mo and W and X being a chalcogen such as S and Se. Amongst these TMDC, MoS2 has the layered structure of S-Mo-S with one Mo atom covalently bonded to two S atoms. Furthermore, MoS2 has a particular advantage over graphene in that graphene has a zero bandgap, while monolayer MoS2 has an indirect bandgap of 1.29 eV and direct transition starts from 1.8 eV. Also, it has stable crystalline structure, size dependent bandgap and found in either semiconductor or metallic nature. It is widely reported in various applications ranging from sensors, transistors, and solar cells to optical fiber lasers and photodetectors [1][2][3][4][5][6][7][8][9][10] .
MoS2 can be synthesized using a variety of techniques, depending on the properties that the application requires. In its bulk form, these 2D materials have weak Van der Waals forces between the layers, and as such mechanical exfoliation has been an easy and common method of obtaining 2D material films that are a few layers in thickness. In fact, the use of scotch tape has been reported extensively as a simple method of obtaining a MoS2 film that are only a few layers thick, but not appropriate for sizable production due to its inability to control the shape of the flakes obtained, nor their thickness and size 11 . In anticipation of a single atomic layer, or two to three layers thick MoS2 films, liquid exfoliation is instead used. Also, ultra-thin layers are obtained by the sonication of an exfoliated solvent. The drawback of this approach however is that the process can cause defects in the fabricated layers and reduce the number of layers obtained, thus limiting its applications 12,13 . Another widely used method to obtain thin MoS2 layers is chemical vapor deposition (CVD) where atomically thin layers can be synthesized by thermal evaporation or sulfurization on a precursor reagent such as Mo 14 , MoO2 and MoO3 15,16 . However by using this approach, the obtained films are polycrystalline in nature and incorporate small crystallite deposits, making it difficult to control the layers 17,18 . In this regard, the drop casting method has become a popular technique for depositing MoS2 layers onto a photonic surface due to simplicity, high stability, and reproducibility for larger scale production. Significant reports have already demonstrated the potential of this fabrication method, capable of generating highly stable, reproducible, and efficient thin films [19][20][21][22] .
In the development of photodetectors, 2D materials play a vital role in increasing the performance and reliability of the photodetector. In this regard, the number of MoS2 layers has a great impact on the performance of the photodetector. Mechanically exfoliated single, double, and triple layers of MoS2, labelled as 1L, 2L and 3L respectively are used to fabricate a phototransistor with the reported optical bandgap for the 1L being 1.82 eV, 1.65 eV for the 2L and 1.35 eV for the 3L. The fabricated devices based on the 1L and 2L have high performance detection under green light illumination while the 3L based photodetector is only sensitive when illuminated by under the red wavelength region 23 . A reduced bandgap allows for the detection of light across a wider range of wavelength, and closer to the near infrared (NIR) region for MoS2 multiple layers thick. Similarly, ultraviolet (UV) to infrared (IR) photo-detection has been reported previously using thin films obtained by mechanical exfoliation 24 . Moreover, NIR photodetector has also been realized by multi-layer MoS2 flakes obtained via chemical exfoliation 25 . Recently, a broadband UVvisible-NIR (UV-Vis-NIR) photodetector using the aforementioned 2D material is reported, with a detectivity of 10 10 Jones and a responsivity of 0.0084 A/W that demonstrates good performance 26 .
In this work, a low-cost and highly efficient heterojunction photodetector device using MoS2 thin film deposited on the surface of a Si substrate by drop casting technique is proposed and demonstrated. The n-MoS2/p-Si photodetector device is characterized for its structural, optical, morphological, and compositional properties by Raman, photoluminescence (PL), field emission scanning electron microscope (FESEM) and energy dispersive X-ray (EDX) spectroscopy, respectively. The device is further characterized for its optoelectronic properties by current-voltage (IV) measurement system under illumination and dark conditions using a 785 nm near infrared (NIR) light source.
Device Fabrication
The MoS2 based heterojunction photodetector device is fabricated employing the drop casting technique. The boron (B) doped crystalline silicon (c-Si) wafer is used as the p-type substrate in this device configuration. A thin layer of MoS2 serves as the n-type layer to establish a heterojunction with p-Si, as revealed in Figure 1. mg/L with a lateral size of 100 nm -400 nm. The MoS2 solution is first sonicated for 30 min at 80 ℃ and at the same time the p-Si wafer is cut into 3 cm x 2 cm rectangular shape. The substrate on the other hand is cleansed ultrasonically using isopropyl alcohol (IPA) and deionized (DI) water for 20 and 30 min respectively to eliminate any contaminants from the surface before being dried using pure nitrogen (N2) gas. Subsequently, a hotplate is heated to 60℃ and the substrate is placed on top of it for pre-deposition heating for 5 min. Approximately 5 µl of the MoS2 solution using a micropipette is drop casted onto the surface of the p-Si substrate. After about 5 min, the sample is removed from the hotplate and kept in a desiccator for 24 hrs. to dry naturally. Finally, the electrodes are formed by silver (Ag) paste deposited on both the p-Si and n-MoS2 surfaces to form the conductive contacts. The schematic diagram of the fabricated heterojunction device is presented in Figure 2 (a).
Characterization & Device Measurement
The surface morphology of the fabricated device is obtained using a JEOL JSM7600F FESEM, while compositional analysis and mapping are performed using an Oxford Instruments EDX. An inVia confocal Raman microscope with 532 nm illumination is used to obtain the elemental composition of the heterojunction device. The optoelectronic characteristics of the heterojunction n-MoS2/p-Si photodetector is measured under 785 nm illumination at the NIR region. A Keithley 2410 -1100 V SourceMeter® is used to obtain the IV curves between -10 V to 10 V. The distance from the surface of photodetector to the laser source is kept constant at 2 cm and the effective area (A) is calculated to be 0.0706 cm 2 . The power densities of the illumination source are varied (6.06 mW/cm 2 , 6.99 mW/cm 2 , 8.05 mW/cm 2 and 9.57 mW/cm 2 ). The photodetector's time-based responses are collected using a Yokogawa DLM2054 mixed signal oscilloscope. The bias voltages (VB) are varied from 1.0 V to 10.0 V with an interval of 1.0 V. The Stanford Research Systems' DS345 -30 MHz synthesized function generator (SFG) is used to modulate frequency signals from 1 Hz to 20 kHz for testing the photodetector. All the measurements are obtained at the ambient conditions. Figure 3 (a) provides the Raman spectra of the fabricated heterojunction n-MoS2/p-Si photodetector from 200 cm -1 to 900 cm -1 . From the figure, three dominant peaks are observed, with the most intense peak witnessed at 520.40 cm -1 . This peak is attributed to c-Si that is present in the substrate. Another two peaks are noticed at 382.42 cm -1 and 407.97 cm -1 which is the in-plane E 1 2g phonon mode of MoS2 whilst the second peak is the A1g out-plane mode. These two peaks confirm the successful development of MoS2 thin film 27 . The distance (Δ) between the modes is calculated to be nearly 25.55 cm -1 with the weak van der Waals interlayer forces between the sulfur (S) atoms in particular resulting in the lattice vibrations. These findings are consistent with previous research 28 . The photoluminescence (PL) spectra of the MoS2 is given in Figure 3 (b) and from this the energy bandgap is calculated. A direct bandgap of 2.01 eV is obtained, indicating the successful deposition of a n-type MoS2 thin film from the drop casting technique 29,30 .
FESEM & EDX
The surface morphology of the MoS2 thin film grown on top of the p-Si substrate is presented in Figure 4 (a). From the figure, it can be observed that the nano-flakes formed have a general length of 2.0 microns and width of 0.7 micron, together with several nanoparticles of 100 nm to 1 µm diameter. The surface of the film appears rough and inhomogeneous which is attributed to the coffee ring effect that normally occurs during the drop casting of most materials. This effect is unavoidable and is faced almost constantly as reported by many other research groups 31 . Elemental compositional analysis obtained by EDX is depicted in Figure 4
IV Measurement
IV measurements are performed to analyze the opto-electronic properties of the heterojunction n-MoS2/p-Si photodetector at 785 nm with NIR illumination source in the light and dark conditions. The photodetector is illuminated at various power densities e.g., 6.06 mW/cm 2 , 6.99 mW/cm 2 , 8.05 mW/cm 2 and 9.57 mW/cm 2 to attain the IV curves. Figure 6 (a) shows the logarithmic IV curves obtained under dark and illuminated conditions within the bias voltage range of -10 V to 10 V. In Figure 6 (b), the linear IV curves can be observed from -10 V to 10 V for both dark and illuminated conditions and confirms the successful establishment of a p-n junction between the p-Si and n-MoS2 layers. The threshold voltage is observed to be around 1.5 V in the forward biased region and around -1.0 V in the reverse biased region. This indicates that the fabricated device can only operate at 1.5 V and above in the forward biased region and -1.0 V and below in the reverse biased region under illumination and dark conditions. Figure 6 (d) represents the IV curves of the fabricated device in the reverse biased region from -5 V to 0 V while Figure 6 (e) shows the IV curves in the forward biased region from 0 V to 5 V. The current is also found to be linear with respect to the bias voltages in the reverse and forward biased regions, thus confirming the ability of the fabricated heterojunction photodetector to operate at two different regions. Figure 2 (b) shows a schematic band diagram to help in understanding the operational mechanism of the fabricated heterojunction n-MoS2/p-Si photodetector device. A built-in potential at the interface enables the separation of photocarriers generated as the device is exposed to a light source at the wavelength of 785 nm in the light spectrum, resulting in the formation of a photo-response. The barrier height is shortened, and the separation of holes and electrons is stimulated as the voltage drop is applied between the electrodes. The valence band electrons are influenced to the conduction band. The p-type layer (p-Si) serves as the hole collector, while the n-type layer (n-MoS2) passes the electrons from the higher energy band to the lower energy band. The responsivity (R) of the photodetector is measured from Equation 1, where Iillumination is the current under various illumination conditions, Idark is the dark current, Plaser is the power of the laser source and A is the effective area of the 785 nm incident light 19,21,33 . Figure 6 (f) shows the relationship between the maximum value of R as calculated for various incident laser power densities at a 10 V bias voltage.
The detectivity (D*) of the fabricated device can be calculated from Equation 2, where R is the function of D* 21,33 . Figure 6 (f) shows the power densities with their dependency on D*. A linear function is evident for both R and D* with respect to power densities. The maximum value of R is computed to be 0.52 A/W and D* as 4.08 x 10 10 Jones for an incident power density of 9.57 mW/cm 2 . * = √ 2 (2) The correlation of D* in regard to DC bias voltage in the span of 0 V to 10 V can be pragmatic from the Figure 6 (c) where it is calculated for various illuminated power densities in the NIR region. It is very interesting to observe the detectivity trend of fabricated device, the value of D* increases with the increase in bias voltages until it approached 4.8 V to 5 V where the maximum value of D* is computed. After 5 V, the value of D* decreases gradually while a slight rise is observed after 9 V. Figure 7 signifies the time dependent current responses of various power densities illuminated under 785 nm at DC bias voltages (3 V, 5 V and 10 V). Figure 7 (a) indicates the response times for a power density of 6.06 mW/cm 2 , Figure 7 (b) for a power density of 6.99 mW/cm 2 , Figure 7 (c) for a power density of 8.05 mW/cm 2 and Figure 7 (d) for a power density of 9.57 mW/cm 2 . The modulation frequency is set to 1 Hz throughout the measurement. Table 1 shows the rise and fall times of the fabricated heterojunction photodetector when illuminated at 785 nm for various intensities at bias voltages (DC) of 3 V, 5 V and 10 V. A decreasing trend is observed when comparing the rise time of the various power densities with the increase in the bias voltages (DC) from 3 V to 10 V except for the intensity 9.57 mW/cm 2 where no specific trend is observed. In the case of fall time, a non-linear trend is observed with the rise in bias voltages and the same movement is found when compared among various power densities. The rise time among various power densities has an increasing pattern within the same bias voltage except for 5 V. The time dependent current responses for a variety of modulation frequencies i.e. 1 Hz, 10 Hz, 50 Hz, 100 Hz, 500 Hz, 1 kHz, 5 kHz, 10 kHz and 20 kHz are given in Figure 8 The performance of the proposed photodetector in this work against other MoS2 heterojunction structure-based devices is tabulated in Table 4. From the results, it can be seen that the proposed device is highly responsive in the NIR region with D* calculated as 4.08 x 10 10 Jones. Therefore, the proposed photodetector structure which is fabricated using drop casting the nanoparticle solution onto the surface of the substrate can realize the development of MoS2 applications in sensors, detectors, photovoltaic as well as photodetectors at large scales with lower costs.
Conclusion
In this work, a highly efficient heterojunction n-MoS2/p-Si photodetector is fabricated, and its performance demonstrated. Characterization analysis of the fabricated device gives Raman shifts at 382.42 cm -1 and 407.97 cm -1 , validating the presence of MoS2 thin film that is deposited using a cost-effective and simple drop casting method. The normalized S/Mo ratio is found to be 2.02 with a direct bandgap of 2.01 eV for an inhomogeneous and non-stoichiometric MoS2 layer. The photodetector is revealed to various light intensities at 785 nm, with the threshold voltages found to be at 1.5 V in forward bias region and -1.0 V in the reverse bias region. The maximum value of R is calculated to be 0.52 A/W and D* as 4.08 x 10 10 Jones for an incident power intensity of 9.57 mW/cm 2 . The minimum rise time is given as 1.77 ms for an incident laser power of 9.57 mW/cm 2 and minimum fall time as 1.31 ms for an incident power density of 6.99 mW/cm 2 at 10 V DC bias voltage. The minimum rise time is calculated to be 15.52 ms at 3.0 V for 20 kHz frequency and the minimum fall time is noted to be 14.16 ms at 5.0 V for 20 kHz modulation frequency. The proposed results would have significant applications in optical devices such as sensors, detectors, photovoltaic as well as for the large-scale manufacturing of low-cost photodetectors. | 4,064.2 | 2021-04-05T00:00:00.000 | [
"Physics"
] |
Comparison of the Effect of Nd:YAG and Diode Lasers and Photodynamic Therapy on Microleakage of Class V Composite Resin Restorations
Background and aims Considering the importance of disinfecting dentin after cavity preparation and the possible effect of disinfection methods on induction of various reactions between the tooth structure and the adhesive restorative material, the aim of the present study was to evaluate microleakage of composite resin restorations after disinfecting the prepared dentin surface with Nd:YAG and Diode lasers and photodynamic therapy. Materials and methods Standard Class V cavities were prepared on buccal surfaces of 96 sound bovine teeth. The samples were randomly divided into 4 groups based on the disinfection method: Group 1: Nd:YAG laser; Group 2: Diode laser; Group 3: photodynamic therapy; and Group 4: the control. Self-etch bonding agent (Clearfil SE Bond) was applied and all the cavities were restored with composite resin (Z100). After thermocycling and immersing in 0.5% basic fuchsin, the samples were prepared for microleakage evaluation under a stereomicroscope. Data was analyzed with Kruskal-Wallis and Wilcoxon signed-rank tests at P<0.05. Results There were no significant differences in the microleakage of occlusal and gingival margins between the study groups (P>0.05). There were no significant differences in microleakage between the occlusal and gingival margins in the Nd:YAG laser group (P>0.05). In the other groups, microleakage at gingival margins was significantly higher than that at the occlusal margins (P<0.05). Conclusion Nd:YAG and Diode lasers and photodynamic therapy can be used to disinfect cavity preparations before composite resin restorations.
Introduction ecurrent caries is one of the most common problems after tooth restorative procedures. 1 Many authors have attributed recurrent caries, pulp inflammation and necrosis to microleakage. 2 Failure to remove infected tooth structures during cavity preparation aggravate problems associated with cavity margin microleakage. Bacteria remaining on the dentinal cavity floor can preserve their viability for a long time. 3 Leung et al reported that the number of bacteria remaining in the cavity can double during a one-month period after the restorative procedure. 4 Therefore, removal of the infected dentin is important to prevent recurrent caries, and disinfection of dentin is recommended after cavity preparation. 5 Several techniques have been introduced to this end. Different kinds of chemical agents, including sodium hypochlorite, chlorhexidine, EDTA (Ethylenediaminetetracetic Acid), hydrogen peroxide, povidoneiodine, citric acid, triclosan, glutaraldehyde, calcium hydroxide, silver nitrate, halogens, some lasers, ozone therapy equipment and photodynamic therapy have been evaluated in relation to their antimicrobial effects. [6][7][8] A possible problem with chemical agents is their tissue toxicity at the concentrations used. 9 In addition, an in vitro study has shown that a large number of bacteria are still viable even after application of povidone-iodine or sodium hypochlorite for 15 minutes. 10 Application of lasers has been extensively studied in operative dentistry as an alternative for burs for cavity preparation, treatment of dentin hypersensitivity and preparation of dentin before application of adhesive systems. The efficacy of lasers has been shown in occluding and opening the dentinal tubules (depending on the energy level used), producing microscopic surface irregularities without demineralization, and sterilization of the dentin surface. 11,12 Nd:YAG laser (Neodymium: Yttrium-Aluminum-Garnet) is a pulsed infrared laser with a wavelength of 1064 nm; it is highly absorbable in pigmented tissues. 13 This laser could be used in tooth hard structures for increasing resistance to acid attacks, remineralization of incipient caries, debridement and alteration of enamel pits and fissures to prevent carious lesions, disinfection of cavity preparations, 13 treatment of dentin hypersensitivity, 14 apical seal of endodontic obturations, decreasing root canal bacterial counts, 15 sterilization of laser-irradiated surfaces and increasing penetration of fluoride into the enamel. 16 It may result in liquefaction and recrystallization of laser-irradiated enamel and dentin surfaces, producing a glass-like morphologic appearance, which is a surface devoid of any microorganisms. 13 Diode laser is a laser produced by stimulation of Gallium and Arsenide, with or without Aluminum or Indium. It has a wavelength of 800-1064 nm. Hemoglobin and pigmented tissues and materials are most affected by the 810-830-nm wavelength of this laser. 17 Gutknecht et al reported that the Diode laser can eliminate bacteria up to a depth of 500 µm in dentin at a wavelength of 980 nm, compared to chemical agents which penetrate only to a depth of 100 µm. 18 Photodynamic therapy is a treatment modality in which the chemicals used become activated and release reactive cytotoxic oxygen species at a certain wavelength. These chemicals have a penetrating capacity and can become active within the tissue, which is the basis for photodynamic therapy. 9 At first, the target cells are selectively subjected to the sensitizer and then irradiated by a complementary wavelength. 19 This technique has exhibited high efficacy in the treatment of neoplasms. 9 Photo-activated disinfection or photodynamic antimicrobial therapy (PACT) is a term used for the disinfecting protocol in which bacterial cells are targeted instead of malignant cells. 9 This technique is effective against a large number of gram-positive and gram-negative bacteria of the oral cavity with the use of different sensitizers and various wavelengths. Recent studies have shown that this technique can eliminate bacteria in planktonic culture media and samples taken from the plaque and biofilm. 19 The success of this technique depends on factors such as bacterial sensitivity, the type of the photosensitizer used, the time needed for the delivery of photosensitizer and irradiation duration. 19 All the lasers mentioned above would have different effects on tooth hard structures and interactions with the dentin; a study has shown that Nd:YAG laser is more effective in decreasing the diameter of dentinal tubules compared to the diode laser. 20 Although previous studies have shown the negative effect of oxygen and other oxidating agents (bleaching agents) on the bond strength of adhesives, the effect of photodynamic therapy on the bonding process is still unknown. The aim of the present study was to evaluate microleakage of composite resin restorations at occlusal and gingival margins after the application of Nd:YAG and Diode lasers and photodynamic therapy.
Materials and Methods
Ninety-six sound bovine incisors were used in the present in vitro study. The teeth were stored in 0.5% chloramine T solution before the study. All the teeth were sealed and cleaned by pumice and a rubber cup. Standard Class V cavities were prepared on the buccal surfaces, with the dimensions of 2 mm in depth, 2 mm in the mesiodistal and 3 mm in the occlusogingival dimensions; 22 the occlusal and gingival margins were both placed 1.5 mm occlusal and apical to the CEJ, respectively. A sharp diamond fissure bur in a high-speed handpiece along with air and water spray was used for cavity preparation. 23 A new bur was used for every 5 cavity preparations. 1 The samples were randomly divided into 4 groups of 24 based on the preparation procedure as follows: Group 1: Nd:YAG laser (Nd:YAG Dental Laser, Lambda Scientifica Srl, Vicenza, Italy); based on manufacturer's instructions the parameters of the laser beam used were as follows: a pulsed wavelength of 1.064 µm; non-contact with a distance of 1 mm from the surface; an output power of 1.5 W; an energy level of 50 mJ; and a frequency of 15 Hz for 10 seconds. The fiber optic diameter was 400 µm.
Group 2: Diode laser (Chesse TM 4W Mini Dental Diode Laser, Wuhan Gigaa Optronics Technology CO, Ltd, China); the parameters of the laser beam used were as follows: a wavelength of 810 nm; an output power of 1 W; and continuous mode. The fiber optic diameter was 200 µm.
Group 3: Photodynamic therapy: based on manufacturer's instructions the tolunium chloride solution (Pharmaceutical grade of the vital stain, Toluidine blue O) was placed over samples with a concentration of 12.7 mg/L. Then the samples were irradiated with a Diode laser beam (RJ-LASER, Fabrikstr 2279183, Walkirch, Germany) at a wavelength of 655 nm and a power of 10 J/cm 2 using the continuous mode; a dental bar head measuring 70 mm in length and 8 mm in diameter was used for 120 seconds.
Group 4: The control; the samples did not undergo any antimicrobial procedures.
Subsequent to the above-mentioned antimicrobial procedures, self-etch bonding agent (Clearfil SE Bond, Kuraray Co, Ltd, Osaka, Japan) was applied based on manufacturer's instructions. The primer was applied for 20 seconds and was gently dried. The bonding agent was applied and light-cured for 10 seconds. The cavities in all the groups were restored using bulk technique with composite resin (Z100, 3M ESPE, Dental Products, St Paul, MN, USA) and light-cured using a halogen light-curing unit (Astralis 7, Ivoclar Vivadent, Schaan, Liechtenstein) at a light intensity of 400 mW/cm 2 using a probe with a diameter of 8 mm, perpendicular and barely touching the surface for 40 seconds. The output of the light-curing unit was tested with a radiometer. Finally, the restorations were finished with diamond burs (Diamant Gmbh, D&Z, Berlin, Germany) and polished with polishing disks (Sof-Lex TM, 3M ESPE, Dental Products, St. Paul, USA).
The samples were stored in distilled water at 37°C for 48 hours and then underwent a thermocycling procedure consisting of 500 cycles at 5-55°C with a dwell time of 30 seconds and a transfer time of 10 seconds. 22 Subsequently, the samples were covered with a layer of wax and all the tooth surfaces were covered with nail varnish expect for restoration surfaces and 1 mm around the margin of restorations. The samples were then placed in 0.5% fuchsin basic solution for 24 hours at room temperature. 1 Finally, a diamond disk (Diamant Gmbh, D&Z, Berlin, Germany) was used in a low-speed handpiece under water spray to divide them into two halves at the center of the restoration. Microleakage was evaluated under a stereomicroscope (Nikon, SMZ 1000, Tokyo, Japan) at ×25. 21 Dye penetration was evaluated at tooth-restoration interface based on the following criteria: 0: No dye penetration 1: Dye penetration at tooth-restoration interface up to the half of the cavity depth 2: Dye penetration to the whole cavity depth without the involvement of the axial wall 3: Dye penetration along the axial wall ( Figure 1).
Non-parametric Kruskal-Wallis and Wilcoxon signed-rank tests were used to analyze data in relation to the amount of microleakage at occlusal and gingival margins at a significance level of P=0.05.
Results
The descriptive results of microleakage at occlusal and gingival margins in the study groups are presented in Table 1. Kruskal-Wallis test did not reveal any significant differences in the occlusal and gingival margin microleakage between the study groups (P>0.05).
Intra-group comparison of microleakage at occlusal and gingival margins by Wilcoxon signed-rank test did not reveal significant differences in group 1 (P=0.96).
In the other groups, microleakage at gingival margins was significantly higher than that at occlusal margins: P=0.004 in group 2; P=0.011 in group 3; and P=0.001 in group 4.
Discussion
Microleakage is defined as the accumulation of bacterial fluids, molecules and ions between the cavity walls and the restorative materials, which is not clinically detectable 24 and is one of the most important reasons for recurrent caries and pulpitis. 2 There is evidence that it is not necessary to remove all the infected dentin adjacent to the pulp in order to control caries, and only elimination of soft and moist dentin and adequate obturation of the cavity with the restorative material is sufficient. 25 Some studies have shown that dentinal bacteria can remain viable for a long time and can double in numbers one month after the restorative procedure. 4 In addition, the majority of restorative materials available now do not have the potential to obturate the cavity for a long time; 26 therefore, it would be reasonable to disinfect dentin before restoration of the cavity in order to prevent recurrent caries. 5 Some of these techniques include the use of various chemical agents and application of some laser types and photodynamic therapy. The most important factor for the efficacy of disinfection techniques is their potential for penetration. Bacteria can penetrate up to 1100 µm into the peri-luminar dentin, 27 but disinfecting chemical agents can only penetrate up to 130 µm into the dentin. 28 Lasers with a wavelength in the range of infrared waves, such as Nd:YAG and Diode lasers, have been used for various purposes, including removal of carious tooth structures, obturation of dentinal tubules and antibacterial activity. 29,30 In this context, the bactericidal activity of Nd:YAG laser, which can penetrate more than 1000 µm into the dentin, has been used to eliminate bacteria from dentin. 31 Penetration of lasers is due to the function of enamel rods and dentinal tubules as optic fibers. 32 Gutknecht et al reported a 99.91% decrease in bacterial counts by Nd:YAG laser in the roots of extracted teeth. 33 Widespread use of Diode laser has been reported in root canal treatment to overcome the problem of inadequate penetration of disinfecting agents, elimination of the smear layer produced due to instrumentation and its antimicrobial activity. 29 Lee et al showed that laser irradiation through dentin disks measuring 500 µm eliminates 97.7% of Streptococcus mutans species, compared to a decrease of 54% in bacterial counts with the use of chlorhexidine, demonstrating a higher efficacy for the Diode laser. 34 Wilson et al reported the use of photodynamic therapy as a technique to eliminate cariogenic bacteria and plaque-forming organisms in the presence of a photosensitizer and application of a low-power laser beam. 35 Various other studies have used photosensitizer and different light sources other than that used by Wilson et al for their antimicrobial activity against various microorganisms. [36][37][38][39] The technique has yielded good results in vivo and in vitro. [40][41][42][43] Zanin et al used a light-emitting diode and toluidine blue on biofilms and reported a 95% decrease in Streptococcus mutans, S. sanguis and S. sorbinus counts. 39 Interaction of laser with tooth hard structures is determined by radiation parameters, including wavelength, pulse energy, exposure duration, repetition rate and the optical properties of the tissue involved. 34 The aim of the present study was to evaluate microleakage of composite resin restorations after the use of three disinfection techniques in Class V cavities. Based on the results the degree of microleakage in the study groups was not significantly different. Studies have yielded differing results in relation to the effect of Nd:YAG laser. Obeidi et al attributed a decrease in microleakage with the application of Nd:YAG laser to the energy of the laser beam. 2 Kwaguchi et al 44 reported that Nd:YAG laser had no effect on the marginal microleakage of composite resin restorations; however, Navarro et al 45 reported a decrease in microleakage of composite resin restorations with the application of Nd:YAG laser, which is consistent with the results of a study carried out by White et al. 46 In addition, Wen et al 47 reported an increase in the tensile bond strength and a decrease in microleakage with the application of the laser. Dentin surface irradiation with Nd:YAG laser results in chemical and morphological changes. Chemical changes include an increase in calcium and phosphorus content and a decrease in oxygen concentration. 30 Nd:YAG laserirradiated dentin exhibits a glass-like surface due to heat liquefaction and re-solidification along with the liquefaction of the smear layer and the partial obturation of dentinal tubules and a decrease in microorganism counts. 13 A decrease in dentin permeability results in a decrease in postoperative pain and an increase in resistance against solubility in acids during caries process. Elimination of the smear layer and microorganisms is in favor of the bonding process; however, liquefaction and obturation of dentinal tubules is not in favor of this process. 13 The majority of studies on Diode laser have been carried out on root canals. Faria et al reported an increase in the apical microleakage with the application of Diode laser; they observed primary liquefaction and changes in the smear layer under an electron microscope. 48 Costa Ribeiro et al 15 and Esteves-Oliveira et al 49 reported liquefaction and partial obturation of dentinal tubules after Diode laser irradiation.
Studies on photodynamic therapy have been predominantly carried out on microbial culture media and suspensions and there have been limited studies on dentin samples. 42,43,50 In the present study, changes in the amount of microleakage were expected in the Nd:YAG and Diode laser groups due to the probability of dentinal surface changes and in the photodynamic therapy group due to the presence of photosensitizer agent (oxygen) and perhaps its interference with resin polymerization; however, it appears the lasers used with the parameters previously mentioned did not have the capacity to exert the expected dentinal surface changes and the amount of the oxygen in the photosensitizer agent was not sufficient to interfere with resin polymerization and increase microleakage. In the intragroup comparison of occlusal and gingival margins in the Diode laser, photodynamic therapy and control groups, microleakage at gingival margins was higher than that at occlusal margins, which might be attributed to polymerization shrinkage of composite resin, the forces of which have exceeded the bond strength to dentin, resulting in gaps at gingival margins. The effect of the factors above on obturation of dentinal tubules might be another reason involved. The nature of gingival margins, too, can be effective in this respect because dentin at gingival margins contains a significant amount of water, organic materials and a moist surface, which compromise the bonding mechanism and increase microleakage. 1 However, in the Nd:YAG laser group there were no significant differences in microleakage between occlusal and gingival margins, which might be attributed to possible minor changes in gingival dentin by this type of laser and the decrease in microleakage at gingival margins.
Finally, it should be pointed out that although based on previous studies the above-mentioned techniques were used as cavity disinfecting agents, evaluation of dentin surface by electron microscopy, microbial culture and evaluation of thermal effects of the lasers on affected dentin were not carried out in the present study. In order to extend the results, it is suggested that future studies be carried out by im-plementing the conditions mentioned above.
Conclusion
Under the limitations of the present study it can be concluded that Nd:YAG and Diode lasers and photodynamic therapy can be used for disinfection of cavities without any detrimental effect on microleakage. | 4,244.8 | 2013-05-30T00:00:00.000 | [
"Medicine",
"Materials Science"
] |
Performance analysis of multi user massive MIMO hybrid beamforming systems at millimeter wave frequency bands
Millimeter-wave (mmWave) and massive multi-input–multi-output (mMIMO) communications are the most key enabling technologies for next generation wireless networks to have large available spectrum and throughput. mMIMO is a promising technique for increasing the spectral efficiency of wireless networks, by deploying large antenna arrays at the base station (BS) and perform coherent transceiver processing. Implementation of mMIMO systems at mmWave frequencies resolve the issue of high path-loss by providing higher antenna gains. The motivation for this research work is that mmWave and mMIMO operations will be much more popular in 5G NR, considering the wide deployment of mMIMO in major frequency bands as per 3rd generation partnership project. In this paper, a downlink multi-user mMIMO (MU-mMIMO) hybrid beamforming communication system is designed with multiple independent data streams per user and accurate channel state information. It emphasizes the hybrid precoding at transmitter and combining at receiver of a mmWave MU-mMIMO hybrid beamforming system. Results of this research work give the tradeoff between multiple data streams per user and required number of BS antennas. It strongly recommends for higher number of parallel data streams per user in a mmWave MU-mMIMO systems to achieve higher order throughputs.
Introduction
The next generation wireless communication systems including 5G NR use mMIMO beamforming techniques to achieve higher SNRs and spatial multiplexing to enhance the data throughput at mmWave frequencies. The major challenge in using mmWave frequencies is propagation loss, therefore 5G NR networks are deployed with mMIMO (large scale antenna arrays) to overcome the path loss. The advantage at mmWave frequency bands is that we can accommodate more antenna elements in the given physical dimensions as wavelengths are very small. To reduce the system cost, antenna elements can be grouped into subarrays and each T-R module is dedicated to an antenna subarray using an emerging technique for mmWave communications called ''Hybrid Beamforming''. Hybrid beamforming designs are capable of transmitting data to multiple users using MU-MIMO with multiplexing gains and these MU-MIMO systems have high potential in mmWave communication networks. Hybrid transceivers consist of lesser RF chains compared to number of transmit antenna elements as they use analog beamformers in the RF domain, and digital beamformers in baseband domain. Hybrid beamforming design balances beamforming gains (to overcome path losses) and power consumption, hardware cost in mmWave mMIMO communication systems.
A common rule-of-thumb in mMIMO systems is that M/K [ 10 (where 'M' is number of antennas and 'K' is number of users) so that user channels are likely to be orthogonal and provides maximum efficiency. It is possible to reduce the computational complexity in hybrid beamforming for mmWave communication systems with lesser number of analog RF chains (compared to number of users) and their performance is close to that of optimal (or fully) digital beamformers [1]. As per 3GPP standard, 5G wireless technologies use frequencies in FR2 band as shown in Table 1 to have short range communication and high data rates [2]. MU-mMIMO wireless communication networks use SDMA to have multiplexing gains by serving multiple UEs with same T-R resources and gives substantial improvements in system throughput. MU-mMIMO system improves spectral efficiency as it allows BS Tx to communicate with many UE Rxs simultaneously using same T-R resources. However, the challenge in MU-mMIMO systems is designing transmit vectors by considering co-channel interference of other users. In UE dense scenarios, MU-MIMO dimensions are increased to fully exploit the spatial multiplexing capabilities. In these scenarios, it is challenging to distinguish UEs in spatial domain as number of pairing users are more. mMIMO increases spatial resolution with more number of narrow beams and gives high degree of freedom for MU paring. It allows number of BS antenna elements of order tens to hundreds, thereby also increasing the number of data streams in a cell to a large value.
MU-mMIMO hybrid beamforming structure shown in Fig. 1 divides beamforming among RF analog domain and baseband digital domain with cost, complexity and flexibility tradeoffs [4]. In RF analog domain, beamforming is achieved by applying phase shift to each antenna element in the antenna subarray. In baseband digital domain, beamforming is achieved using channel matrix to derive precoding and combining weights that helps to transmit and recover multiple data streams independently using a single channel.
Literature review
Hybrid beamforming designs are introduced to reduce the training overhead and hardware cost in mMIMO systems. Hybrid beamforming can be classified based on CSI (average or instantaneous), carrier frequency (mmWave) and complexity (reduced, full or switched complexity). Selection of algorithm to get the best tradeoff between these parameters depends on channel characteristics and application [5]. Acquiring precise and accurate CSI for MU-mMIMO systems is a challenge at mmWave frequencies as the number of BS antennas are high. Accurate channel estimation in MU-mMIMO can be achieved using jointiterative scheme based on step-length optimization [6]. Beamforming neural network (based on deep learning) for mmWave mMIMO systems can optimize the beamforming design and it is robust with higher spectral efficiency (compared to traditional beamforming algorithms) in the [7]. For a given number of BS antenna elements, optimal number of UEs scheduled simultaneously gives maximum spectral efficiency in mMIMO systems. The spectral efficiency can be same for DL and UL (allows joint network optimization), also it is independent of instantaneous UE locations [8]. Multitask deep learning (MTDL)-based MU hybrid beamforming algorithm for mmWave mMIMO OFDM systems can give better results in terms of sum-rate and lower run-time compared to traditional algorithms [9]. mmWave UL MU-mMIMO system uses lens type antenna array at BS (two-dimensional, both elevation and azimuth angle) and uniform planar array at MS based on arrival/ departure angles of multi-path signals. ''Path delay compensation'' technique at BS transforms MU-MIMO frequency selective channels into parallel smaller MIMO frequency-flat channels at lower hardware cost and increased sum-rates [10]. Effective channel estimation scheme is proposed for time-varying DL channel of mmWave MU-MIMO systems based on angle of arrival/ departure and with minimum number of pilots [11]. Low complex single cell DL MU-mMIMO hybrid beamforming system with perfect CSI gives sum-rate that approaches ideal channel capacity [12]. For a given number of RF chains, performance gap between hybrid and digital beamforming can be reduced by minimizing the number of multiplexed symbols [13]. Clustering and feedback based hybrid beamforming for DL mmWave MU-MIMO NOMA systems give maximum sum-rates compared to OMA systems [14]. A blind MU detection algorithm based on ''markov random field'' to model clustering sparsity and estimate mMIMO channel performs better than the systems that do not exploit clustering sparsity of channel [15]. Manifold optimization, eigenvalue decomposition and OMP algorithms used in designing hybrid beamforming for broadband mmWave MIMO systems offer BER and spectral efficiency closer to fully digital beamforming designs [16]. When SNR or number of RF chains are increasing in mmWave MU-mMIMO system, the optimal hybrid precoding and combining schemes using OMP algorithm gives performance close to fully digital precoding in terms of total sum-rates [17]. Energy efficiency in MU-mMIMO systems is inversely proportional to number of RF chains. Usage of optimal RF and baseband precoding matrices can improve energy and cost efficiency by 76.59% compared to OMP algorithm [18]. Generalized block OMP algorithm for channel estimation in mmWave MU-MIMO system uses different strategies of constructing pilot signals/beamforming weights and this scheme outperforms the existing channel estimation algorithms including OMP [19]. ''Distributed compressive sensing'' method can decrease the feedback overhead, training of CSI estimation at UE and ''Joint OMP algorithm'' performs CSI recovery at BS of a MU-mMIMO systems [20]. Optimal hybrid beamforming scheme for MU-mMIMO relay systems with mixed and fully connected structures based on ''successive interference cancelation'' maximizes the sum-rates [21]. An efficient hybrid beamforming technique for relay assisted mmWave MU-mMIMO system based on ''Geometric Mean Decomposition Tomlinson Harashima Precoding'' algorithm can give performance closer to fully digital beamforming [13]. A channel estimation scheme called ''generalized-block compressed sampling matching pursuit'' for mmWave MU-MIMO systems over frequency selective fading channels can offer better performance than OMP algorithm [22]. Optimal hybrid beamforming design is proposed to minimize Tx power under SINR constraints in a MU-mMIMO system for the cases where number of UEs are less than and greater than the number of RF chains. It gives the optimal Tx powers close that of fully digital beamforming design [23]. Low complex ''hybrid regularized channel diagonalization'' scheme that combines analog beamforming and digital precoding for mmWave MU-mMIMO system performs better than conventional block diagonalization based hybrid beamforming designs even in the presence of lowresolution RF phase shifters [24]. ''Hybrid beamforming with selection'' scheme decreases the computational and hardware cost of small-to-medium bandwidth MU-mMIMO systems with moderate frequency selective channel [25]. Hybrid precoder and combiners for DL frequency selective channels are configured in a mmWave MU-MIMO system based on factorization, iterative hybrid design. BS simultaneously estimates all channels from UEs on each subcarrier using compressed sensing scheme to reduce the number of measurements [26]. Nonconvex hybrid precoding problem in mmWave MU-MIMO systems is addressed using ''penalty dual decomposition'' method under the assumption of perfect CSI. It uses lesser number of RF chains but still, the performance is close to that of fully digital beamforming [27]. ''Atomic Norm 27:1925-1939 1927 Minimization'' method is used for accurate, low-complex channel estimation and spectral efficiency in mmWave MU-MIMO systems which is based on continuous channel representation [13]. Spectral efficiency of a MU-mMIMO hybrid beamforming system can be improved by using low complex manifold optimization algorithm and its efficiency is closer to fully digital beamforming designs [13]. For accurate channel estimation with minimum training overhead and RF chains (compared to beamspace approach), MU-mMIMO hybrid beamforming system is viewed as non-orthogonal angle division multiple access to simultaneously serve multiple users using same frequency band [28]. To improve throughput (close to fully digital beamforming) of mmWave MU-mMIMO OFDM systems, hybrid beamforming with user scheduling algorithm is defined for DL where BS allocates frequency resources for members of OFDM user group (users with identical strongest beams). Analog beamforming vectors are used to find the optimal beam of each user and digital beamforming is used to get best performance gain (by decreasing residual inter-user interference) [29]. Low computational complexity hybrid beamforming scheme for mmWave MU-MIMO UL channel reduces inter-user interference and its performance is close to the corresponding fully digital beamforming design [30]. Optimal decoupling designs for analog precoder and combiner of a DL MU-FDD mMIMO hybrid beamforming systems are defined by selecting the strongest Eigen-beams of receiving covariance matrix with limited instantaneous CSI. Simulation results proved the need of second order channel statistics in designing digital precoder to reduce intergroup interference [31]. Coordinated RF beamforming technique in mmWave mMIMO systems based on ''Generalized Low Rank Approximation of Matrices'' needs only composite CSI instead of complete physical channel matrix. This technique provides competing solution by considering the coordination between BS and UEs to get maximal array gain with no dimensionality constraint in both TDD and FDD systems [13]. Hybrid beamforming is designed for mmWave MU-mMIMO relay system to enhance the sum-rates (decreasing sum MSE between received signal of digital and hybrid beamforming designs) using digital beamforming. The total sum-rates are increased with accuracy of angles of arrival/departure and number of RF chains [32]. Optimal unconstrained precoder and combiner algorithms for mmWave mMIMO systems are designed for the feasibility of low-cost analog RF hardware implementations. Numerical results of the proposed algorithms [33] showed that spectral efficiency of mmWave systems with transceiver hardware constraints approaches the unconstrained performance limits. Low-complexity phased-ZF hybrid precoding is applied in RF domain (to get large power gains) and low dimensional ZF precoding is used in baseband domain (for multi-stream processing) in mmWave MU-mMIMO systems [34]. The tradeoff between energy efficiency (in bits/J) and spectral efficiency (in bits/channel/MS) is quantified for a small-scale fading channel of MU-mMIMO systems to achieve higher spectral and energy efficiencies using ZR or MRC and UL pilot signals at BS [35].
Proposed methodology
In In hybrid beamforming, the number of TR modules (N T RF ) are less than number of antenna elements (N T ) and each antenna element is connected to one or more TR modules for higher flexibility.
The mathematical representation of hybrid beamforming is as follows: Combining weights matrix; W The matrices F RF and W RF define signal phase values. To achieve optimal weights for precoding and combining there exists some constraints during optimization process. For ideal case, resulting combination of matrices F BB *F RF and W RF *W BB are equal to calculation of F and W without any constraints.
Block diagram of MU-mMIMO system
The Fig. 4 represents a block diagram that gives complete steps of data processing in a MU-mMIMO system. At the Tx, multiple user's data is channel encoded using convolutional codes. The channel encoded bits are mapped to equivalent QAM complex symbols and generate mapped symbols from bits/user. The QAM data of each user is divided into multiple transmit data streams. Digital baseband precoding is used to assign weights for subcarriers of transmit data streams. In this paper, precoding weights are computed using ''hybrid beamforming with peak search (HBPS)'' algorithm as it performs better for larger arrays of mMIMO systems and these precoding weights are used to get corresponding combining weights at the Rx. HBPS gives all digital weights and identifies N T RF , N R RF peaks to get corresponding analog beamforming weights instead of searching iteratively for dominant mode (data streams that use most dominant mode of MIMO channel gives higher SNR) in the channel matrix. The resultant digital signal is modulated using OFDM modulation with pilot mapping followed by RF analog beamforming is performed for all Tx antennas. The modulated signal is transmitted through a rich-scattering MU-mMIMO channel and it is demodulated, decoded at Rx side as shown in Fig. 4. Channel sounding and estimation are performed at the Tx and Rx respectively using ''joint spatial division multiplexing (JSDM)'' algorithm as it allows large number of BS antennas with minimum CSI feedback from UEs in a MU-mMIMO downlink channel.
Analysis of mu-mMIMO system design
To perform MU-mMIMO transmission in mmWave cellular communication systems, high-dimensional channels need to be estimated for designing MU precoder. Digital precoding gives high performance at the cost of hardware complexity and power consumption (more number of RF chains and ADCs). On the other hand, analog precoding has less complexity, but with limited performance (it supports only one data stream). Hybrid precoding for MU-mMIMO systems shown in Fig. 5 is a combination of digital precoding and analog precoding. Hybrid precoding design reduces number of RF chains and also maintain spatial multiplexing gain in mmWave MU-mMIMO system. In MU-mMIMO systems, precoding and combining techniques are used to improve signal energy in the direction and channel of interest with help of available channel information at Tx. In a single user MIMO, the benefits of an antenna array are less because of lower channel rank. On the other hand, MU-MIMO system Wireless Networks (2021) 27:1925-1939 1929 creates rich effective channels through spatial separation of users.
The MIMO channel can be characterized asy ¼ Hs þ n ð3Þ where y received vector, s transmitted vector n noise vector The channel matrix, H ¼ where h ij is a complex gaussian random variable that models fading gain between the ith transmitted and jth receiver antenna.
MU-MIMO DL
If CSI is known, diagonalization of channel matrix (H) gives unconstrained optimal precoding weights by taking the first N T RF dominating modes. So, assume that BS has CSI at the Tx (CSIT) then we can perform MU precoding where it is possible to send signals to all users at the same time and frequencies (T-F), still allow users to recover signals with low complexity. The DL link signal or observation vector ð Þ for user where k number of users, x k signal intended for user k, H k channel from BS to user k, n k noise.
2nd term in Eq. (4) represents signals for other users. Instead of sending user data stream x k , we perform precoding as x k = W k s k . Therefore, DL link signal for user W k precoding matrix s k transmitted QAM symbols for the user k.
2nd term in Eq. (5) represents data precoding for other users.
For a case where each user has only one antenna (therefore, one data stream/user), the size of precoding matrix W k is N t 9 1.
The scalar observation of user where h T k DL channel for user k, n k scalar noise. The 1st term in Eq. (6) represents effective channel seen by the user and 2 nd term represents interference due to other users.
After considering all vectors for all users of a MU À MIMO system; y ¼ HWs þ n ð7Þ y ¼ transmitted QAM symbols for all users.
It is essential to design precoding matrix W at the BS using schemes like ZF, MMSE so that the performance of a MU-MIMO system is optimum. The BS sends data to users that change over time and ''user scheduling'' is used for selection of best K users among a large set to have good effective channel H. Precoding is also possible with statistical CSI (like covariance matrix of channel) instead of instantaneous CSI.
MU-MIMO UL
At Rx, all signals are added and the observation vector; The UL observation vector, Transmitted signals of all users n noise vector.
MU-MIMO precoding
Consider K users with one antenna per user and one BS with M antennas then the DL channel matrix, H of size K x M where each row corresponds to DL channel of a single user.
We consider ZF precoding where precoding matrix; where H ? pseudo inverse of the channel. If the channel matrix is poorly conditioned, W may be very large and it leads to high signal transmission power. Therefore, we need to enforce a constraint that the total power, P Total = W k k 2 One way to achieve this is scale the entire precoding matrix with a low enough value and guarantee the equal SNR per user. On the other hand, scale each column to have fixed power and it gives different SNR per user.
The adjustment of fixed powers can be achieved using . p 1 power assigned to user 1, p k power assigned to user k. The channel capacity (bits/sec) = Available spectrum (Hz) x Spectrum efficiency (dB).
Under no CSIT conditions, the MIMO channel capacity (C) bounds between where q SNR, B channel bandwidth, M t number of Tx antennas, M r number of Rx antennas.
The left hand limit value is channel capacity for LOS channel and the right hand limit is for rich scattering channel.
Case 1: Very large number of Tx antennas and channel matrix, H is almost orthogonal rows where the number of rows represent number of Rx antennas (small) and number of columns represent number of Tx antennas (large).
HH H & M t I Mr scaled identity matrix The capacity of MIMO channel; where B bandwidth, q SNR.
MU-mMIMO
MU-MIMO system alone is not feasible as multiple antennas at UEs is not cost effective and gains are modest due to limited number of antennas (\ 10), also signal processing is highly complex. Alternatively, mMIMO systems can have one antenna per user and 100 0 s of antennas at the BS. Consider one BS with M antennas and K users with one antenna each (K \ M). In order to have favorable propagation conditions, users should be sufficiently separated. Assume that a MU-MIMO at BS with channel reciprocity (use TDD) and nearly orthogonal channels (users are sufficiently separated).
For UL channel, the channel matrix can be decomposed as
MU-mMIMO: UL
The observation model for the UL is given as H has many rows and few columns and x is a vector (size K 9 1) contains signals for each of K users.
BS can apply the following shaper or combiner based on the property The combining matrix; where w~CN (0, N 0 MD), z k = Md k x k ? w k , and w k-* CN (0, N 0 Md k ). The SNR of k th user is given as, SNR k = Md k E x /N 0 where N 0 noise power, N 0 Md k noise variance Therefore, Rate for the kth user; Total sum -rate at BS is sum of all individual rates of 0 K 0 users; R sum ¼ Blog 2 Equation (20) indicates the capacity of the system. This means that, with asymptotically large antennas and simple linear combining at the BS gives optimal results even in the presence of multi-user interference.
MU-mMIMO: DL
The DL is little more complicated because BS needs to do additional processing (precoding) so that users do not see interference from other users.
Observation model for DL with M Â K precoding matrix is given as; where H T downlink channel, W suitable precoding matrix, n noise vector seen by each user, s vector of size K 9 1 and contains data for each of users.
From MU-MIMO, a good precoder is pseudo inverse of channel matrix. Therefore, we choose precoding of the form where H * complex conjugate of the channel HM scaling factor.
D p diagonal matrix that contains square root of powers for each user.
To meet the power constraints, power allocation D p to ensure W k k 2 = trace(W H W) = P Total . We find that W H W = HD p H T H * HD p /M = HD p DDHD p = D p D. D p matrix of powers along the diagonal D matrix of path loss values along the diagonal The observations seen across different users; At user k: the observations can be split into y k The channel in the DL for user i, h i T = Hd i g i T h i T row vector of length M.
We conclude that MU-mMIMO DL channel is a matched filter with asymptotically optimal linear precoder. This means that with simple linear processing at BS, it is possible to remove all MU-interference in DL among all users.
Results analysis
According to 3GPP, future mmWave wireless communication systems recommend 28 GHz frequency band for MU-mMIMO [36]. mmWave MU-mMIMO communication link between the BS and UEs is validated using scattering-based MIMO spatial channel model with ''singlebounce ray tracing'' approximation. This model considers UEs at different T-R spatial locations and randomly placed multiple scatterers. A single channel is used for sounding as well as data transmission, allows path loss modelling with LOS and non-LOS scenarios (these are closer to real scenarios). The channel matrix is updated periodically to mimic variations of MIMO channel over time. The radiation patterns for antenna arrays are isotropic with rectangular or linear geometry. The simulations were performed for a maximum of 256 9 16 MU-mMIMO system for four users and eight users with parameters shown in Tables 2 and 3. At Tx end, 256-element rectangular antenna array is used with 4-RF chains and at Rx, 16-element square array is used with 4-RF chains. Therefore, each antenna element is connected to 4 phase shifters and there are 4-RF chains. To have maximum spectral efficiency of MU-mMIMO, each user is assigned with independent channels and each RF chain is used to send an independent data stream, therefore maximum of 4 streams is supported. The MU-mMIMO Rx at UE of each user is modeled by compensating for path loss, thermal noise. Figures 6,7,8,9, 10 and 11 represent the RMS EVM values in four users and eight users MU-mMIMO systems of 16-QAM, 64-QAM and 256-QAM modulation schemes with multiple number of BS antennas. It is observed that, for users with only one data stream, the RMS EVM is very high compared to users with multiple data streams. For a give modulation scheme, RMS EVM is decreasing as number of BS antennas are increased for users with single data stream. For users with more number of data streams, optimum RMS EVM values are achieved for 128 BS antennas and there is very slight increment in error values for 256 BS antennas. Figures 12, 13, 14, 15, 16 and 7 represent the RMS EVM values in four users and eight users MU-mMIMO systems with 64, 128, and 256 BS antennas respectively for various modulation schemes. The following observations are made from these figures: for the given number of BS antennas, users with single data stream, the RMS EVM is minimizing at higher modulation order and increasing number of BS antennas. When compared with users with more data streams, the reduction rate of RMS EVM is very high in users with single data stream as the number of BS antennas are increased from 64 to 256. For users with more data streams, optimum performance is achieved with 128 BS antennas and lowest RMS EVM values are obtained. RMS EVM values are very slightly increased for 256 BS antennas compared to 64 BS antennas. Interestingly, for a given number of BS antennas there is almost no change in RMS EVM values when the modulation order is increased from 4 to 8. Figures 18, 19 and 20 represent the equalized symbol constellation per data stream in MU-mMIMO system with different combinations of modulation schemes and number of BS antennas. In the constellation diagrams, variance of the recovered streams is more at the users with lower number of independent streams. This is due to the absence of dominant modes in the channel and it causes poor SNR. From the receive constellation diagrams of all combinations, it is very clear that the variance of recovered data streams is very much better for users with multiple data streams as the symbol points are properly located with less dispersion in symbol constellation diagram. The reason is that, data streams use most dominant mode of scattering MIMO channel and these have higher SNR. On the other hand, the symbol points in equalized symbol constellation of users with single data stream are highly dispersed and they have poor SNR values. Figures 21, 22 and 23 represent the signal radiation patterns in MU-mMIMO wireless systems with multiple BS antennas. The stronger lobes of 3D response pattern in mMIMO designs represent distinct data streams of users.
These lobes indicate the spread achieved by hybrid beamforming. From these figures, it is clear that the radiation beams are becoming sharper for increasing number of BS antennas and this increases the reliability of signal there by throughput.
Conclusions and future scope
A mmWave DL MU-mMIMO hybrid beamforming communication system is designed with multiple independent data streams per user. From the overall results, it is Wireless Networks (2021) 27:1925-1939 1935 observed that for users with lower number of independent data streams, RMS EVM values are higher and for more number of independent data streams, the RMS EVM is less. Therefore, the increasing number of data streams per user leads to decrease in RMS EVM values. For the given modulation scheme, as the number of BS antennas are increasing, the RMS EVM is decreasing. It gives trade-off between the number of Tx/Rx antenna elements and multiple data streams per user. It has been concluded that if the user data is divided into more number of parallel data streams then it requires less number of active antenna elements to transmit the signals. From the simulation results, it is concluded that 256 9 16 MIMO system is more suitable for eight users and 128 9 16 MIMO system for four users with multiple data streams. It is strongly recommended to have higher number of independent data streams per user in a mmWave MU-mMIMO systems to achieve higher order throughputs. For future research Wireless Networks (2021) 27:1925-1939 1937 directions, the MU-mMIMO hybrid beamforming designs with an objective of reduction in RMS EVM values for users with single (or less) number of independent data streams and achieve higher throughput are of great investigation interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons. org/licenses/by/4.0/.
Funding Open access funding provided by Manipal Academy of Higher Education, Manipal. | 6,734 | 2021-02-04T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Impact of the structure on the thermal burnout effect induced by microwave pulses of PIN limiter diodes
Positive-intrinsic-negative (PIN) limiters are widely used to protect sensitive components from leakage power itself and adjacent high-power injection. Being the core of a PIN limiter, the PIN diode is possible to be burnt out by the external microwave pulses. Here, using a parallel computing program for semiconductor multi-physics effects designed by ourselves, we studied the influence of the thickness of the I layer and the anode diameter of the PIN diode on the maximum temperature change curve of the PIN diode limiter. The damage threshold criterion in the numerical simulation was first studied by comparing experimental results with simulation results. Then, we determined the impact of the structure on the thermal burnout effect induced by microwave pulses of PIN limiter diodes.
In the front-end of a radar system, Positive-intrinsic-negative (PIN) limiter is one of the most important modules to protect the back sensitive devices from leakage power itself and adjacent high-power injection [1][2][3] . However, with the development of the pulse power technology, the widespread use of radar, the electromagnetic environment faced by radar systems is becoming more and more complicated. External microwave pulses can couple into the electronic systems through the antenna and further damage the PIN limiter [3][4][5] .
Being the core of a PIN limiter, the PIN diode is a sensitive semiconductor device, which is possible to be burnt out by the injected microwave pulses. The burnout of the PIN diode may lead to the failure of the radio frequency front end or even the entire electronics system 6,7 . Thus, many studies have been carried out for damage effects of the microwave pulse for the PIN limiter. Junction burnout, metallization burnout and thermal second breakdown are indicated to be the main causes of the burnout effect by microwave pulses of the PIN diodes [8][9][10][11] . However, few literatures about the impact of the structure, especially the I layer thickness and the anode diameter of the PIN diode, on the thermal burnout effect induced by microwave pulses have been reported.
In this work, using the JEMS-CDS Device, a parallel computing program for semiconductor multi-physics effects, we studied the damage threshold criterion in numerical simulation through comparing experimental results and simulation results. And then, we determined the influence of the structure of the PIN limiting diode on the thermal burnout effect caused by the microwave pulse through simulation.
Structure of the studied PIN limiter
A typical PIN limiter includes single or multistage PIN diodes. To eliminate the interferences from other factors except the I layer thickness and the anode diameter of the PIN diode, such as other PIN diodes and complex peripheral circuits, a single-stage limiter, whose structure is shown in Fig. 1, is chosen as the target of the study. The typical single PIN diode limiter consists of one PIN diode, two Direct Current (DC) block capacitors, and a parallel inductor. The inductance of the parallel inductor is 40 nH, the DC block capacitors are all 30 pF in this work and the PIN diodes are model CLA series manufactured by Skyworks 12 . The structure of model CLA series PIN diodes whose material is silicon is shown in Fig. 2
Outline of numerical method and validation
In our numerical methodology, a set of semiconductor equations based on the drift-diffusion model 13 are at first solved so as to obtain the transient heat source distribution over the PIN diode, The drift-diffusion model includes the following equations.
Poisson equation
where ε m is the permittivity of the silicon. φ is the electrostatic potential. q is the elementary electronic charge. n and p are the electron and hole density, respectively. N D and N A are the density of donors and acceptors, respectively. ρ s is the fixed charge or interface state charge in the insulating layer. Continuity equation where J n 和J p are the current densities of electrons and holes, respectively. G and U are the electron-hole generation and recombination rates, respectively. Carrier transport equation where μ n and μ p are the mobility of electrons and holes, respectively. E represents the intensity of electric field. T is the temperature (K). k b is the boltzmann constant. When microwave pulses are applied to the PIN diode, the time dependent heat conduction Eq. 14 will be further solved to get its transient temperature distribution. The heat generation in the semiconductor is written as The first term on the right side of the formula is ohmic heating, where J is the current density vector and E is the electric field. The second term is the exothermic and endothermic heat caused by the recombination and generation of carriers, where U is the carrier recombination rate and G is the carrier ionization rate.
Aiming at the research requirements of multi-physical effects mechanism of devices in complex electromagnetic environment, a parallel computing program for semiconductor multi-physics effects, JEMS-CDS-Device, is developed. The program is based on the unstructured grid parallel framework-JAUMIN. It uses the finite volume method (FVM) to discretize and uses the Newton method to get fully coupled solution of the "electric-carrier transport-thermal" problem 15 .
According to the microstrip circuit of the limiter shown in Fig. 1, the simulation circuit of the PIN limiter is established in the simulator as shown in Fig. 3, where S is the microwave pulse source, R1 is the 50 Ω internal resistance of the pulse source, L1 and L2 are the equivalent inductance of the PIN diode welding gold wires, and R2 is the load impedance.
The signal caused by the external electromagnetic pulses coupling into the ribbon cable is similar to be a low-damping sinusoidal voltage signal, which can be approximately expressed as 16 where U 0 is the amplitude of electromagnetic pulses, f is the pulse frequency, and φ is the initial phase. This simulation does not consider the influence of the initial phase, so the initial phase is set to be 0, the pulse frequency is set to 3 GHz, and the pulse width is set to 100 ns, which are consistent with the experimental settings.
The employed structure parameters were obtained from the data sheet of the CLA series PIN diodes 12 , and the dopant profiles were extracted by semiconductor process simulation. In order to verify the feasibility of the analytical model, take CLA4601 PIN limiter as an example, the typical performance characteristics of the PIN limiters obtained from simulations and experimental measurements were compared and analyzed. As shown in Fig. 4, the simulation are compared with the test data, both are in very good agreement with each other. Figure 5a shows the internal temperature distribution of the burnt-out CLA4601 PIN limiter obtained by simulation. The highest temperature occurs at the junction edge between the P + and I regions of the CLA4601 PIN limiter. Therefore, we speculate that when the device starts to burn out, the first burned position should be at the junction edge between the P+ region and the I region. To further verify the analytical model, the limiter PIN diodes damaged by microwave pulses were physically analyzed via dual beam focused ion beam (FIB) cross section analysis (FEI Helios 600). The cross-sectional view of the limiter PIN diode is shown in Fig. 5b. It can be seen from the Fig. 5 that the burn out area of the device is in perfect agreement with the simulation result. Therefore, the physical models selected for simulation can simulate the physical process of high power microwave injection into the PIN limiters, which can be applied to further preliminary analysis of effect mechanism.
Numerical results and discussion
The Skyworks CLA series of silicon limiter diodes have two structures, mesa-constructed and planar-constructed. In this study, the widely used mesa structure devices CLA4601, CLA4602, CLA4604 and CLA4605 were selected for effect experiment research. And compared with the simulation results, the device parameters are shown in Table 1.
In the numerical simulation of the electromagnetic effect of microwave devices, the maximum temperature criterion in a semiconductor device as the melting point of the specific semiconductor material or electrodes is usually used to determine a burnout phenomenon in the simulation 11,17-20 . Therefore, the burnout power It is noteworthy that the use of a microwave power limiter generally leads to additional insertion loss in a receiver, which increases its noise figure and reduces its dynamic range 17 . This insertion loss is an important indicator of the microwave power limiters and can be used to evaluate the degree of damage to PIN limiters. In the effect experiment, the limiter insertion loss change of 3 dB was used as the damage criterion. Figure 6 shows the schematic of the experimental system employed in our work for studying the thermal burnout effect in PIN diode limiters by injecting microwave pulses into it. This system consists of a self-made microwave source system, several attenuators, directional coupler, coaxial detector (Keysight 8470B), and digital oscilloscope (LeCroy WavePro 640Zi). For our experiments, a series of microwave pulses are generated by the microwave source system, which can be changed gradually by tuning the step attenuator. A self-made timedomain synchronization control system and the signal source (Agilent E8257D) are used to control the pulse width, repetition frequency of the microwave pulses. The conventional microwave pulse parameters (20 Hz repetition frequency and 5 s action time) were selected for the experiments. The results of the device damage threshold obtained by the experiment were shown in Table 2 and Fig. 7 by black ball.
It can be seen from Fig. 7 that the experimental results and the simulation results basically have the same trend. The burnout power thresholds increase with the increase of the limiter diodes serial number. However, www.nature.com/scientificreports/ the experimental thresholds are obviously larger than the simulation results, and the bigger the thickness of the I layer, the more obvious the difference is. When the thickness of the I layer is 1 μm, the experimental result is close to the simulation threshold, and the difference is within 2 dB. When the thickness of the I layer is 2 μm, the experimental result is far from the simulation threshold, and the difference is about 4 dB. The difference between the burnout power thresholds obtained by simulation and experiment are so huge that it cannot meet the practical application. www.nature.com/scientificreports/ The above phenomenon may be caused by the inconsistent damage criteria, Preliminary research results 21 show that it is not accurate to set the maximum temperature criterion in a semiconductor device as the melting point of the specific semiconductor material or electrodes to determine a burnout phenomenon in the simulation. Previous experiments 21 found that the I layer of the limiter has been basically burned through in the longitudinal direction when the insertion loss changed by 3 dB. Thus, using the hot spot reaching the melting point of the silicon penetrates the I layer as the damage criterion, the burnout power threshold of the limiters were re-simulated. The simulation results were shown by blue star in Fig. 7. It can be seen that using this device damage criterion, the simulation results are closer to the experimental results. When the thickness of the I layer is 1 μm, the difference between the experimental result and the simulation threshold is within 1 dB, and when the thickness of the I layer is 2 μm, the difference between the experimental result and the simulation threshold is about 2 dB. This damage criterion is obviously more reasonable and accurate, and both the trend and the threshold are more consistent with the experimental results.
In order to study the influence of the I layer thickness on the microwave burnout power threshold of the PIN limiter, the other parameters are the same as those of the CLA4601 PIN diode except for the thickness of the I layer. The damage thresholds of the devices based on the two damage criteria are simulated respectively, and the simulation results of the relationship between the I layer thickness and the burnout power thresholds are shown in Fig. 8.
The simulation results based on the maximum temperature of the device reaching the melting point of the material are shown in Fig. 8 by black square. The burnout power threshold generally decreases as the thickness of the I layer increases. The reason for this phenomenon may be that as the thickness of the I layer increases, the series resistance of the PIN diode increases, and the voltage coupled to the PIN diode die increases accordingly. At the same time, as the thickness of the I layer increases, the charge storage capacity in the I layer increases. So the peak leakage time is longer, that is to say, it need to take longer time to extract the carriers in the I layer to reach the low-resistance limiting state. Therefore, it is more conducive for the PIN diode to absorb more energy to reach the burned state. Also, it should be noted that the burnout power threshold based on the melting point does not change significantly as the thickness of the I layer increases. For example, the difference of the burnout power threshold is just only 0.9 dB between the thickness of the 1 μm and 5 μm I layers.
The simulation results based on the I layer burned through are shown in Fig. 8 by red circle. The burnout power threshold basically increase with the increase of the thickness of the I layer, which is consistent with the usual conclusion. The increase of the I layer thickness will enlarge the thermal power capacity of the PIN diode, so more energy is required to burn the I layer.
Apart from the thickness of the I layer, the anode diameter is also one of the important device parameters of the PIN diode. Although the anode diameter of a specific PIN diode has been determined at the factory, it is also meaningful to study and understand the influence of the anode diameter on the burnout power threshold. In order to study the influence of the anode diameter on the microwave burnout power threshold of the PIN limiter, the other parameters are the same as those of the CLA4601 PIN diode except for the anode diameter. The damage thresholds of the devices based on the two damage criteria are simulated respectively, and the simulation results of the relationship between the anode diameter and the burnout power thresholds are shown in Fig. 9. www.nature.com/scientificreports/ It can be seen from Fig. 9 that the anode diameter has a more obvious effect on the burnout power threshold for the burnout of microwave pulse injection. The relationship between the anode diameter of the PIN diode and the burnout power threshold is approximately linear. The main reason for this phenomenon is that the PIN diode with a larger anode diameter has a larger dynamic area (that is, the lateral area of the three layers of P, I and N), which cause the current and thermal power capacity of the device is higher. From the perspective of power density, the larger anode diameter leads to a larger area of the heating disc, and actual received microwave pulse power per unit area is correspondingly lower, which resulting in a higher device burnout power threshold.
Conclusion
In summary, we investigated the impact of the structure on the thermal burnout effect induced by microwave pulses of the PIN limiter diodes. We found that using the I layer penetrated by the hot spot reaching the melting point of the material as the damage criterion is significantly better than the traditional melting point criterion, both the trend and the threshold are more consistent with the experimental results. This discovery have important reference significance for the analysis of electromagnetic sensitivity of electronic information system and the design of protection and reinforcement of related components. | 3,693.2 | 2021-06-18T00:00:00.000 | [
"Physics"
] |
Density of Eigenvalues of Random Normal Matrices with an Arbitrary Potential, and of Generalized Normal Matrices
Following the works by Wiegmann-Zabrodin, Elbau-Felder, Hedenmalm-Makarov, and others, we consider the normal matrix model with an arbitrary potential function, and explain how the problem of finding the support domain for the asymptotic eigenvalue density of such matrices (when the size of the matrices goes to infinity) is related to the problem of Hele-Shaw flows on curved surfaces, considered by Entov and the first author in 1990-s. In the case when the potential function is the sum of a rotationally invariant function and the real part of a polynomial of the complex coordinate, we use this relation and the conformal mapping method developed by Entov and the first author to find the shape of the support domain explicitly (up to finitely many undetermined parameters, which are to be found from a finite system of equations). In the case when the rotationally invariant function is $\beta |z|^2$, this is done by Wiegmann-Zabrodin and Elbau-Felder. We apply our results to the generalized normal matrix model, which deals with random block matrices that give rise to *-representations of the deformed preprojective algebra of the affine quiver of type $\hat A_{m-1}$. We show that this model is equivalent to the usual normal matrix model in the large $N$ limit. Thus the conformal mapping method can be applied to find explicitly the support domain for the generalized normal matrix model.
Introduction
The normal matrix model became a focus of attention for many mathematical physicists after the recent discovery (see e.g. [11,6,7,8]) of its unexpected connections to the 2-dimensional dispersionless Toda hierarchy and the Laplacian growth model (which is an exactly solvable model describing free boundary fluid flows in a Hele-Shaw cell or porous medium). The original normal matrix model contained a potential function whose Laplacian is a positive constant, but later in [12], Wiegmann and Zabrodin considered a more general model, where the potential function was arbitrary. This is the model we will consider in this paper.
In the normal matrix model with an arbitrary potential function, one considers the random normal matrices of some size N with spectrum restricted to a compact domain D 1 and probability measure where dM is the measure on the space of normal matrices induced by the Euclidean metric on all complex matrices, W is a potential function (a real function on C with some regularity properties, e.g. continuous), and Z N is a normalizing factor.
In the original works on the normal matrix model, the potential was where P is a complex polynomial of some degree d, and β a positive real number. For this type of potential, it was shown in the works [11,6,7,8] (and then proved rigorously in [3]) that under some conditions on the potential, the asymptotic density of eigenvalues is uniform with support in the interior domain of a closed smooth curve. This curve is a solution of an inverse moment problem, appearing in the theory of Hele-Shaw flows with a free boundary. Thus, applying the conformal mapping method (see [10] and references therein), one discovers that the conformal map of the unit disk onto the outside of this curve which maps 0 to ∞ is a Laurent polynomial of degree d. This allows one to find the curve explicitly up to finitely many parameters, which can be found from a finite system of algebraic equations.
In [12], Wiegmann and Zabrodin generalized this analysis to an arbitrary potential function. They showed that the density of eigenvalues is the Laplacian of the potential function, and the eigenvalues are concentrated in the domain which can be determined from an appropriate inverse moment problem. This was proved rigorously in the paper [5], which extends the Elbau-Felder work to the case of an arbitrary potential.
One of the goals of the present paper is to use the generalized conformal mapping method, developed in [4] by Entov and the first author for studying Hele-Shaw flows with moving boundary for curved surfaces, to calculate the boundary of the region of eigenvalues explicitly in the case when where Φ is a function of one variable. In this case, the conformal map of the disk onto the outside of the curve is no longer algebraic, but one can still give an explicit answer in terms of a contour integral. Another goal is to extend the above results to the case of generalized normal matrix model. In this model, we consider block complex matrices of a certain kind with commutatation relations similar to the definition of a normal matrix; they give rise to * -representations of the deformed preprojective algebra of the affine quiver of type m−1 . We prove that the problem of computing the asymptotic eigenvalue distribution for this model, as the size of the matrices goes to infinity, is equivalent to the same problem for the usual normal matrix model. This allows one to find the boundary of the eigenvalue region explicitly if the potential is given by (1).
The structure of this paper is as follows. In Section 2, we state some basic facts about the normal matrix model. In Section 3, we define the generalized normal matrix model, and write down the probability measure in this model. In Section 4, we recall some facts about the equilibrium measure and explain that the asymptotic eigenvalue distribution tends to the equilibrium measure in the normal matrix model and the generalized normal matrix model. In Section 5, we use the singular point method from [4,10] to reconstruct the boundary of the support domain of the equilibrium measure. Let dM be the measure on N (D) induced by the Euclidean metric on Mat N (C). It is well known (see e.g. [9,1]), that in terms of the eigenvalues this measure on N (C) is given by the formula where M = U diag(z 1 , . . . , z N )U † , U ∈ U (N ), and dU denotes the normalized U (N )-invariant measure on the flag manifold U (N )/U (1) N . Now let W : C → R be a continuous function. If M is a normal matrix, then we can define W (M ) to be diag(W (z 1 ), . . . , W (z N )) in an orthonormal basis in which M = diag(z 1 , . . . , z N ). It follows from the above that the probability measure on N (D) with potential function W is given by where Here we assume that the integral is convergent (this is the case, for instance, if D is compact). 3 The generalized normal matrix model
Generalized normal matrices
Let us consider the following generalization of normal matrices. Let m ≥ 1 be an integer. For a fixed collection λ = (λ 1 , . . . , λ m ) of real numbers such that i λ i = 0, and a domain D, we define N m (λ, D) to be the subset of A ∈ Mat mN (C) satisfying the following conditions: for any A ∈ N m (λ, D), • If A ij , i, j = 1, . . . , m are N × N blocks of A, then A ij = 0 unless j − i = 1 mod m; • The spectrum of A 12 A 23 · · · A m1 is contained in D; Note that N 1 (0, D) = N (D), thus elements of N m (λ, D) are a generalization of normal matrices. We will thus call them generalized normal matrices. Remark 1. Generalized normal matrices are related in the following way to quiver representations. Let Q be the cyclic quiver of type m−1 , andQ its double. Let Π Q (λ) be the deformed preprojective algebra of Q with parameters λ (see [2]). By definition, this algebra is the quotient of the path algebra ofQ by the relation a∈Q [a, a * ] = λ i e i , where e i are the vertex idempotents.
The algebra Π Q has a * -structure, preserving e i and sending a to a * and a * to a. It is easy to see that N m (λ, D) is the set of all matrix * -representations of Π Q of dimension N δ (where δ = (1, 1, . . . , 1) is the basic imaginary root) such that the spectrum of the monodromy operator a 1 · · · a m is in D.
We have the following lemma, which is a generalization of the fact that a normal matrix diagonalizes in an orthonormal basis: , it suffices to prove the lemma in the case when A i A † i is a scalar in V i , in which case the statement is easy.
The Euclidean measure on generalized normal matrices
First, let us consider the N = 1 case. Pick real numbers α i such that λ i = α i − α i−1 , and let Thus to each A ∈ N m (λ, C) we can attach a real number x = r 2 i − α i , which is independent of i, and a complex number z = m j=1 r j e iθ j . It is easy to see that the point (z, x) belongs to the Moreover, it is clear that any point of Σ corresponds to some A, and two matrices A, A ′ giving rise to the same point (z, x) are conjugate. This implies that we have a bijection between the equivalence classes in N m (λ, C) under the action of U (1) m and points of Σ. Writing z = re iθ , we see that x, θ are coordinates on Σ, so we may write the Euclidean measure on N m (λ, C) using the coordinates x, θ.
This implies that the Euclidean measure on N m (λ, C) is as desired.
Let us now consider the case of general N . From Lemma 1, we know that under the action of U (N ) m , the equivalence class of A ∈ N m (λ, C) can be represented by m diagonal matrices where Thus ((z 1 , x 1 ), . . . , (z N , x N )) is a point on Σ N /S N . Similarly to N = 1 case, it is easy to show that this gives rise to a bijection between conjugacy classes of elements of N m (λ, C) and points of Σ N /S N . Using this fact and combining the method of computation for usual normal matrices with the N = 1 case, one gets the following result.
Proof . At first, consider the subset N diag m (λ, C) of N m (λ, C) consisting of the elements M of the form (3). Then by Theorem 1, the measure on N diag m (λ, C) induced by the Euclidean metric is the product measure: where dU diag is the Haar measure on U (1) N m /U (1) N . Now consider the contribution of the off-diagonal part. Consider the elements of the Lie algebra of U (N ). Let V i,j,k , W i,j,k be the derivatives of (exp(tv i,j )) k M and (exp(tw i,j )) k M at t = 0, where a k := (1, . . . , 1, a, 1, . . . , 1) ∈ U (N ) m , with a ∈ U (N ) in the k-th place. Then by formula (4), we have To calculate φ, let us denote by B i,j,k , i = j, the derivative of (exp(tE i,j )) k M (note that since E i,j lies only in the complexified Lie algebra of U (N ) m , we have (exp(tE i,j )) k M / ∈ N m (λ, C), but this is not important for our considerations). Then equation (5) takes the form Now φ can be easily calculated. To do so, we note that for a given i, j, the transformation (exp(tE i,j )) k changes only the entries a p i,j of M . On these entries, it acts by This means that for each i, j, | ∧ k B i,j,k | = |J i,j |, where as desired.
The probability measure with potential function on generalized normal matrices
Let W : C → R be a potential function. The probability measure on N m (λ, D) corresponding to this function is defined similarly to the case of usual normal matrices: D), where M i are the blocks of M . Thus in terms of eigenvalues Example 1. Let us calculate the potential function corresponding to the quadratic potential Tr(M M † ). We have Thus if we choose α i so that i α i = 0 (this can be done in a unique way), then so the corresponding potential function is W (z) = mQ −1 (|z| 2 ) (the function Q is invertible on the interval [−α, ∞), where α = min α i ). An equilibrium measure for W on D is a measure σ ∈ M(D) such that Theorem 3. The equilibrium measure σ exists and is unique. It satisfies the equation where C is a constant, almost everywhere with respect to σ.
The proof of this theorem can be found in [3]. Note that equation (6) does not have to hold outside the support of σ.
Note also that if σ is absolutely continuous with respect to the Lebesgue measure near a point z 0 in the interior of D, and dσ = g(z)d 2 z, where g is continuous near z 0 and g(z 0 ) > 0, then ∆W = 4πg near z 0 . This clearly cannot happen at points where ∆W ≤ 0. In particular, if ∆W ≤ 0 everywhere, then dσ tends to be concentrated on the boundary of D.
Asymptotic eigenvalue distribution in the normal matrix model
In Section 2, we defined a measure by formula (2). We are interested in the behavior of this measure when N → ∞. Let δ z = 1 N N j=1 δ z j be the measure on D corresponding to the points z j . Then This shows that the leading contribution to the integral with respect to the measure P N (M )dM comes from configurations of eigenvalues z 1 , . . . , z N for which the expression in parentheses in the last equation is minimized. This means that in the limit N → ∞, we should expect the measures δ z for optimal configurations to converge to the equilibrium measure with potential function W . This indeed turns out to be the case, as shown by the following theorem, proved in [3].
Theorem 4. Let the k-point correlation function be
Then the measure on D k converges weakly to dσ ⊗k , where dσ is the equilibrium measure on D, corresponding to the potential function W .
In particular, if k = 1, it means that the eigenvalue distribution tends to the equilibrium measure in D as N → ∞.
Asymptotic eigenvalue distribution in the generalized normal matrix model
As we have seen above, the eigenvalue distribution in the generalized normal matrix model is In the limit N → ∞ the second term becomes unimportant compared to the first one, which implies that Theorem 4 is valid for the generalized normal matrix model. Thus in the limit N → ∞, the usual and the generalized normal matrix models (with the same potential) are equivalent.
Reconstruction of the boundary of the domain
In previous sections, we showed that in the normal matrix model and the generalized normal matrix model, when N → ∞, the eigenvalue distribution converges to an equilibrium measure on D corresponding to some potential function W . In this section, we will try to find this measure explicitly in some special cases. More specifically, we will consider the case when ∆W > 0. In this case, if the region D is sufficiently large, it turns out that the equilibrium measure is often absolutely continuous with respect to Lebesgue measure, and equals dσ = (4π) −1 χ E ∆W d 2 z, where E is a region contained in D (the region of eigenvalues), and χ E is the characteristic function of E. More precisely, it follows from Proposition 3.4 in [3] that if there exists a region E ⊂ D such that dσ satisfies equation (6) in E, and the left hand side of this equation is ≥ C on D \ E, then dσ is the equilibrium measure on D for the potential function W . Moreover, note that if E works for some D then it works for any smaller D ′ such that E ⊂ D ′ ⊂ D. So, in a sense, E is independent of D. (Here we refer the reader to [5], section 4, where there is a much more detailed and precise treatment of equilibrium measures, without the assumption ∆Φ > 0).
Thus let us assume that E exists, and consider the problem of finding it explicitly given the potential W .
The reconstruction problem
We will consider the case when D = D(R) is the disk of radius R centered at the origin, and where Φ is a function of one variable continuous on [0, ∞) and twice continuously differentiable on (0, ∞), and P a complex polynomial. We assume that (sΦ ′ (s)) ′ is positive, integrable near zero, and satisfies the boundary condition lim s→0 sΦ ′ (s) = 0. Computing the Laplacian of W , we get (taking into account that ∆ = 4∂∂): where s = zz. Define the measure dσ = gd 2 z.
Suppose that the region E exists, and contains the origin. In this case, differentiating equation (6) with respect to z, we have inside E: On the other hand, inside the disk D, the function W 0 (z) := 2 D g(ww) log |z − w|d 2 w satisfies the equation ∆W 0 = 4πg, and is rotationally invariant, so where C ′ is a constant. Hence, differentiating, we get, inside D: Thus, subtracting (7) from (8), we obtain inside E: Let I(s) = π s 0 g(t)dt = sΦ ′ (s). Then∂I(zz) = πzg(zz)dz. Thus, using Green's formula, we get from (9): where the boundaries are oriented counterclockwise. The integral over the boundary of D is zero by Cauchy's formula, so we are left with the equation This equation appeared first in the theory of Hele-Shaw flows on curved surfaces in [4], and it can be solved explicitly by the method of singular points developed in the same paper. Let us recall this method.
The singular point method
Define the Cauchy transform h E of E with respect to the measure dσ by This is a holomorphic function of z which (as we have just seen) is independent of the radius R of D. As we have seen, it is also given by the contour integral and in our case we have h E (z) = P ′ (z). Let f : D(1) → C \ E be a conformal map, such that f (0) = ∞, and (1/f ) ′ (0) = a ∈ R + (such a map is unique).
Lemma 2. The function
continues analytically from the unit circle to a holomorphic function outside the unit disk.
Proof . By the Cauchy formula, we have So by formula (10), we have It follows that the function I(zz)/z − h E (z), defined along ∂E, can be analytically continued to a holomorphic function outside E, which vanishes at infinity. This implies the lemma.
Similarly to [4], this lemma implies the following theorem. Thus, if h is a rational function, then θ can be determined from h up to finitely many parameters.
After this, f can be reconstructed from θ using the Cauchy formula. For this, note that the function I is invertible, since I ′ = g > 0. Also, θ takes nonnegative real values on the unit circle. Thus, we have f (ζ)f (1/ζ) = I −1 (θ(ζ)).
Thus we have The unknown parameters of θ can now be determined from the cancellation of poles in Theorem 5, similarly to the procedure described in [10]. We note that the knowledge of the function h E is not sufficient to determine E (for example if E is a disk of any radius centered at the origin then h E = 0). To determine the parameters completely, we must also use the information on the area of E:
The polynomial case
In particular, in our case, h E (z) = P ′ (z) = a 1 + a 2 z + · · · + a d z d−1 , So we get Finally, note that if the coefficients of the polynomial P are small enough, then all our assumptions are satisfied: the region E exists (in fact, it is close to a disk), and contains the origin. Also, in this case the left hand side of equation (6) is ≥ C, which implies that the equilibrium measure in this case (and hence, the asymptotic eigenvalue distribution) is the measure dσ in the region E. This implies that g > 0, i.e. our analysis applies in this case.
Some explicit solutions
Consider the case Φ(s) = Cs b , C, b > 0. For example, in the generalized normal matrix model with α i = 0 and potential term as in Example 1, one has Φ(s) = ms 1/m , which is a special case of the above.
We have g(s) = π −1 Cb 2 s b−1 , so our analysis applies (note that if b < 1 then g is singular at zero, but the singularity is integrable and thus nothing really changes in our considerations), and I(s) = Cbs b . Thus the integral in (11) can be computed explicitly (by factoring θ), and the formula for the conformal map f simplifies as follows: The parameters a > 0 and ζ j are determined from the singularity conditions and the area condition.
The residue of θ at zero is thus Cbβa −2b . Thus the singularity condition says The area condition is Thus we find β = KC −1 b −1 a 2b−1 , and the equation for a has the form Remark 2. This example shows that to explicitly solve the generalized (as opposed to the usual) normal matrix model in the N → ∞ limit with the quadratic (Gaussian) potential, one really needs the technique explained in Section 5 of this paper, and the techniques of [3] are not sufficient. | 5,334 | 2006-12-05T00:00:00.000 | [
"Mathematics"
] |
The Development of Mobile Application to Introduce Historical Monuments in Manado
. Learning the historical value of a monument is important because it preserves cultural and historical values, as well as expanding our personal insight. In Indonesia, particularly in Manado, North Sulawesi, there are many monuments. The monuments are erected for history, religion, culture and past war, however these aren’t written in detail in the monuments. To get information on specific monument, manual search was required, i.e. asking related people or sources. Based on the problem, the development of an application which can utilize LBS (Location Based Service) method and some algorithmic methods specifically designed for mobile devices such as Smartphone, was required so that information on every monument in Manado can be displayed in detail using GPS coordinate. The application was developed by KNN method with K-means algorithm and collaborative filtering to recommend monument information to tourist. Tourists will get recommended options filtered by distance. Then, this method was also used to look for the closest monument from user. KNN algorithm determines the closest location by making comparisons according to calculation of longitude and latitude of several monuments tourist wants to visit. With this application, tourists who want to know and find information on monuments in Manado can do them easily and quickly because monument information is recommended directly to user without having to make selection. Moreover, tourist can see recommended monument information and search several monuments in Manado in real time.
Introduction
Manado is one of the most famous tourism destination in Indonesia and is located in Asia-Pacific, which is bordering on the Philippines, Republic of Palau and the Pacific Ocean [1].It is nicknamed Maldives Van Celeb because this city has the beautiful National Park of Bunaken which is similar with Maldives (Maldives) in the north-western tip of Sulawesi [2].Beside Bunaken, Manado also has many interesting destinations which are historical monuments that have religious history, cultural history, and even the history of Indonesian Independence struggle.Cultural diversity and monuments make Manado as a city that has so many tourism destinations.In fact, in 2017, there is government program which is MWTC (Manado as World Tourism Centre 2017) so that Manado becomes the destination of the International and Local tourists.Travelling is a necessity that cannot be separated from human life.The monuments are needed by tourists in order to experience adventure tour in Manado.
Access to information services are still inadequate and not proportional with many tourism destinations that are available, especially the monuments in Manado.Tourists get difficult to find the locations.They are not getting detailed information's about the destinations they want to visit, for example, they cannot get information about route which is short, safe, and free from congestion.So, the time wasted on the way to the location, even the tourists can have over budget to hire a tour guide who is not organized.
The development of information technology, especially mobile phones, becomes a facility that improves services in accessing the data needed by tourist to know the locations, distance, and description of the monuments in Manado.One result of the mobile technology development is the emergence of a mobile phone that has Android as its operation system.The term "Android" comes from the Greek word with the prefixand that means "male" and the suffix -elides means "the same or similar", the full meaning is together "to be human".Android is a set of software for mobile device that has a relationship between one to another and becomes a set of system programs or a set of application programs that forms a complete system [3].
Android is meant to revolutionize the mobile phone market by bringing internet to mobile phone and enabling its operations just like operating a personal computer (PC), so with Android, consumers can easily get information's via smart applications on their phone [3].An application that consists of information's about the historical monuments such as location, distance, and their description is needed for accuracy in determining which monuments are priority and can be visited by tourists.
Based on the problems described above, there is an application that can help visitors and tourists in finding information and location of the monument in Manado.The application is location-based service (LBS).LBS means mobile service providers facilitate their mobile phone with information about tourism destination that based on their users location.The development of mobile campus applications to guide students, parents, and/or visitors in finding places on campus.By using LBS (Location Based Service) and NFC (Near Field Communication) feature, phones will be connected automatically in close range with other NFC phones [4].The increase of popular Smartphone also opens opportunities for mobile services to develop mobile tourism application to suggest tourism destinations based on context factors such as location, weather, and time available.This mobile tourism application is seen as the most efficient way to help travelers on their trip.
Location Based Service (LBS) is a service which is location-based, and will be accessed via mobile device (Smartphone, etc.)So that, it can display a map along with the location where the mobile device is located.Location-based service (LBS) provide customized information according to user characteristics, so it can be easily expanded for many other usages, such as guiding routes for transportation systems or for tourism destination [5].
Location-Based Services application provides a set of services for users who are from geographic location of mobile devices.Use of this service makes it possible for users to search and find other people, vehicles, resources and provide location-sensitive services, besides tracking their own location.Requests for location may come from mobile devices or other entities such as application or networks providers.It is possible to automatically trigger Location Based Services when the mobile device is in a particular location.In this study, it is discussed how to implement location-based services on Android [6].
It is possible to automatically trigger Location Based Service when the mobile device is in a particular Location.This service can also be derived from the user's mobile device to meet location-based requests such as finding areas of interest, checking traffic conditions, finding friends, vehicles, resources, machinery and emergency requests.This study will discuss how to implement location-based services on Android [7].
Developing Location Based Services applications uses data mining for mobile users.Data Mining aims to find interesting and useful knowledge from the database.Conventionally, the data were analyzed manually.Many useful relationships are not revealed and may not be user identity.Through data mining, it can extract interesting knowledge and regularity.Mobile application which is built using Data Mining approach can find the nearest and famous location around a certain area.By using the data mining approach, it extracts the database with the help of different data mining algorithms.The nearest location is determined by the wireless network.It provides data using open source Android.It provides a world-class platform for creating applications and games for android users everywhere, and open markets for immediate distribution to users [8].
Mobile computing has grown in such a way that users can access all the information on a single device where people are always moving with mobile devices like laptops, cell phones, tablets, etc.Using a user geographic location, many of the information's related to mobile device users can be collected.The location information of mobile users can improve the class of services and applications that can be provided to mobile device users.These application and service classes are called as location-based service.Location Based Services (LBS) is a type of service that helps obtaining user's geographic location and more useful information near the user's location.This location-based information can be obtained by various terms such as position, vicinity, proximity, context, maps, routes, places, and more [9].
Design of tour guide system is based on three layers of architecture.The architecture includes the browser layer, top layer and bottom layer.It uses KNN algorithms and collaborative filtering to calculate and recommend tourism information to users.Limitations found in this study are the application is not very efficient in providing information and predicting the right places with an affordable price [10].
Tour guide system uses web technology services and three layers architecture, by using web technology and three layers architecture that are browser layer, business logic layer and server layer.Lucene is used to create an index for data which is used to execute efficient requests.Limitations found in this study are the differences in the number of Geographic description details of the code at different locations, for example, one place has information and detail description in the form of building address, but the other place only has city name and zip code [11].
Research Methodology
In conducting this research there is a research methodology used to obtain information that is completely understood and the results in accordance with the expected results, as well as get the results of quality scientific work [12].Therefore, the researcher uses methods in the form of: Methods of data collection uses interviews and field studies.Interviews were conducted with local tourism authorities to obtain information in monuments in Manado.In addition, researcher also uses sources from other existing research such as journals, papers and reference books.Research methods can be described as follows [13].
Observation Method
At this stage, researcher collects data by doing observation to monuments that are interesting.
Library Method
This method is data collection by studying books (literature study), journals and other references that are related to this research.
For developing monument recommendation system, it uses initial determination of KNN algorithm, grouping items with K-means and the process of filtering or evaluating items with collaborative filtering.K-Nearest Neighbor (KNN) algorithm is a method to classify objects based on learning that is very close to the object.The working principle of K-Nearest Neighbor (KNN) is to find the closest distance between the data that will be evaluated with the nearest K-Neighbor in the training data. (1) One of partition algorithm is the K-means algorithm, it is algorithm based on the definition of initial centroid after determining the number of first group.The process is periodically used in the K-means algorithm to obtain a database cluster.The data items in this algorithm are grouped into a data set according to the closest distance to a cluster [14].
Filtering process or evaluating items which are based on the perceptions of other users becomes an important determinant of collaborative filtering algorithm.A profile is used to filter a number of items.Collection and profile development become a collaborative filtering technique, then determine the relationship between the suitability of model equations and data.The purpose of collaborative filtering is to filter number of items which users select, and provide recommendations to users with the remaining options [15].
Analysis
In developing this system, it is required a phase which is called analysis phase, and there will be need analysis in this phase.This phase is done because the system that is made sometimes is not working as what system maker expects.this problem happens because need analysis is not done correctly.The purpose of the phase is to examine and identify problems that happen to system users, so that systems which will be built and developed in accordance with what is expected by the user.Besides, disadvantages and errors in building the system can be minimized by making or adding minor changes to the built system.
Information about the monuments is the important thing that should be given to tourists, and one of the information that must be covered is the monuments location.There are still many tourists who do not know about the existence of monuments in Manado, they do not know about the location of each monument, address, short description, route location and related information needed.This obstacle happens, because not all tourists know the location and information related to the existing monument, especially for tourists who visit one of the monuments in Manado for the first time.
Tourists still search information manually by asking other people who they meet, besides, some of them find the information from brochures in the form of hardcopy, or find monument information via internet.However, the search for monument information by reading brochure is rarely encountered because of the globalization.Similarly, asking others is also rarely done now, because the information that they get is limited to the experience of person they meet.In addition, searching the information on the internet is not helpful enough because there is no special website that provides sufficient information's about the monuments.Therefore, it is necessary to develop a system that can help and facilitate the tourists to obtain the monument information's that are recommended in a real time.
As the result of this study, the proposed system is developing a system that can recommend the right monument to the users regarding the existence of monuments in Manado.Tourists can be facilitated by the recommendation about the monument they want to visit.The information displayed is recommended monument, brief description, map and route location to the monument using Smartphone device through mobile monument application.
Design
Design phase is the stage that describes the application design and develops the system structure.This stage includes modelling data, architecture, interface design and the implementation.This phase describes about design process that will be transformed into software design, which can be predicted before it is implemented into the program.The process will focus on determining the overall system architecture, the design phase will also be used to perform system-building activities.Then, in developing this system, use-case is used to explain what users can do to monument recommendation system using mobile device.Figure 2 shows the use-case diagram of the monument recommendation system.The following is the design of monument recommendation system architecture that will be shown in Figure 3
Results and Discussion
The monument recommendation system which is built uses four modules in its application.The first module is used for system authentication purposes, the second module is used to display the initial system interface, the third module is used to display monument recommendations to users and the fourth module is used to display details of the monuments.
In module 1, there is checking system about user registration that is done by monument recommendation system.If users have not registered, they can register in the system directly.After done registration, user will get authentication from the system, then they can access the system.
In module 2, system will display the monument categories which are historical, cultural, and religious monuments.The classification of monuments will use different icons in representing them, in order to ease users when they select category they want.Each recommendation of the monument given will be shown with the list of monuments based on the classification.Besides, there are also features to know information about the use of system and how to use it.
Module 3 is used to provide information about classification of the monuments.The information displayed is a list of monuments recommended.Furthermore, in Module 4, each monument classification will display detailed information, maps and location routes that are needed by the users.The maps and routes are displayed on Google Maps that is tracking users' location by utilizing GPS.GPS is a navigation system that gives information about the coordinate of users location.Then, the system will display some interesting locations or other places such as public facilities, cafes, airports and so forth.
Users can interact into the system using Smartphone devices to access GPS using monument applications, but GPS must be active to do it.Then, the system will respond command given by performing related processes.The process will provide the information needed according to the longitude and latitude to display the monument in interface system.In addition, the system also automatically reads the database related to some monuments through the information that has been built, and the monument will be displayed based on each choice according to wishes of the user.Monuments displayed include historical monuments, cultural monuments and religious monuments.
Information's displayed by Manado monument application are: KNN algorithm is used to find and get the distance between items and users needs in cluster.Then, the classification can be grouped in neighboring categories.K-mean algorithm is used to observe and group items into the allocation of several sections to each cluster observation, according to the nearest mean, as well as the prototype cluster.Collaborative filtering is used to filter the users choice which then recommend some of remaining options to the users.
To calculate the suitability between tourist object criteria and tourists needs, similarity formula is used.
Similarity(P, C) = (s1 * w1 + s2 * w2 +… + sn * wn) / (w1 + w2 + … + wn)
(2) In the interface picture above, it shows the initial interface of monument recommendation system that consists of historical monuments, cultural monuments, and religious monuments.Each monument has been distinguished according to its category and lists.The lists are list of historical monuments, list of cultural monuments, and list of religious monuments.Through the monument lists, users can find detailed monument information or can find map of monument locations and monument location routes that can direct users to the monument they want to go.
Then, maps and routes to the monuments which are recommended by the system, will be shown in the following picture.The picture above shows map of the location and route of one monument that has been recommended by the system when the user selects the monument as he wishes.The system will respond and give route of the location to the monument based on the user starting position by using GPS support on the user's mobile device.
Conclusions
In this research, researcher presents a design for the monument recommendation system using mobile devices' location-based services (LBS), KNN, K-Nearst, and Collaborating Filtering.The monument recommendation system makes tourists get information about the monument in Manado more easily and efficiently.Monument location can be obtained using the directions menu on each monument that aims to facilitate the users in order to reach the location, because it displays the route that must be passed from users starting position.The combination of mobile applications using Location Based Services (LBS) and KNN is chosen because it is interactive.It means that there is a two-way interaction between users and the system, which the system will easily recommend interesting monuments that can be selected by tourists.So, tourists realize that monuments' attractiveness in Manado increases. | 4,187.4 | 2018-02-01T00:00:00.000 | [
"History",
"Computer Science"
] |
Downlink Cooperative MIMO in LEO Satellites
We consider a communication scheme in which two low earth orbit (LEO) satellites jointly transmit (at the same time and frequency) to a multi antenna land terminal (LT). This scheme can increase the achievable terminal throughput by up to a factor of 2, depending on the channel matrices. The implementation aspects of this scheme are well-known but the usefulness of this scheme for LEO satellites has yet to be studied. Because the satellite channel is dominated by its line-of-sight (LOS) component, the terminal’s ability to separate the two streams depends critically on the system’s instantaneous-configuration; i.e., the relative location of the satellites and the terminal antennas. Since the relative locations of LEO satellites vary rapidly, the throughput characterization is not straightforward. To characterize the network performance we consider two satellites having one antenna, transmitting to a single LT having multiple antennas. We introduce a novel stochastic framework that assumes the terminal orientation as random. Using the proposed framework, we show that if the terminal antennas are close, the network throughput is nearly independent of the terminal orientation. When the terminal antennas are sufficiently separated, the result is completely different. For this case, we define an outage event and calculate the outage probability. The definition of outage in our novel stochastic framework allows us to prove that dual satellite transmission can indeed increase the downlink throughput with high probability.
I. INTRODUCTION
LEO satellite communication (SatCom) is expected to play an important role in wireless communications by providing global-coverage, high-throughput and low-cost internet access [1], [2]. This includes remote rural areas, civil aviation, as well as commercial and cruise shipping lines. Thousands of LEO satellites are expected to be deployed in the next decade by different commercial and public institutions [3], [4]. The idea is to create a flexible, low-latency SatCom network that supplies high coverage, independent of the terrestrial infrastructure. To justify such a large investment, the network must provide a high information rate and high reliability. This requirement is even more pressing for airplanes and ships, where thousands of subscribers will only be served by a few satellites.
Multiple input multiple output (MIMO) communication is a mature technology with a well-established theory and practical algorithms (e.g., [5], [6]) which are incorporated The associate editor coordinating the review of this manuscript and approving it for publication was Adao Silva . into 5G [7] and other protocols [8]. To compete with terrestrial systems, SatCom will need to resort to MIMO technology and take advantage of the significant research achievements in this field [9]. It is therefore important to evaluate the potential MIMO performance gain and determine whether this technology is feasible in LEO SatCom. The key obstacle to incorporating MIMO into LEO-SatCom is its lineof-sight (LOS) channel characteristics.
MIMO shows a significant gain in terrestrial wireless communications, primarily in cases of rich-scattering environments and systems with sufficient antenna spacing at the transmitter and receiver; all of which guarantee wellconditioned channels. 1 By contrast, satellite channels in high frequency bands are characterized by strong LOS components with negligible scattering. Since scattering is widely believed to be a prerequisite for well-conditioned MIMO channels, the feasibility of spatial multiplexing to the SatCom 1 A well-conditioned channel has multiple, significant singular-values. This enables it to support multiple spatial data streams and separate them at the receiver. channel has been questioned in the past [10] and is subject to contentious discussions in the scientific community.
In MIMO satellite systems, the transmit or receive antennas must be spatially separated to achieve spatial-multiplexing gain. It has been shown, both theoretically [11] and experimentally [12], that spatial multiplexing in LOS-dominant channels is possible if multiple satellites in different orbital positions cooperate or if the LT antennas are sufficiently far apart (at least several kilometers). This latter separation is not possible in most LEO applications, which renders spatial multiplexing from a single satellite to a single terminal infeasible (even if both have multiple antennas). In this paper, we focus on the first alternative; that is, two satellites that cooperate to spatially multiplex data to a single, multiantenna, LT.
Moreover, in a LOS channel, the exact antenna location has a significant effect on MIMO performance. Hence, for GEO satellite MIMO systems, much work has been devoted to optimizing antenna placement [11] or to improving user grouping based on their locations [13]. However, such configurations with LEO satellites is impractical due to the high speed at which these satellites travel. It is thus crucial to analyze the potential multiplexing gain for LEO satellites.
In this paper, we propose a new approach to analyze the performance of LEO communication systems. We consider two satellites with limited means of cooperation, that transmit to a single land terminal (LT). In more explicit terms, each satellite transmits an independent data stream with its single antenna, where the only coordination is through coarse synchronization and by adjusting transmission rates. The LT, which is equipped with multiple antennas, is responsible for the separation and detection of the two data streams. Note that LEO satellites have a large angular velocity with respect to Earth and the distance between each satellites and the terminal varies with time. Thus, unlike the single satellite case, this small distance variation may cause significant changes in the channel phase, which affect the terminal's ability to separate and detect the two data streams. Thus, good separation of the two data streams is not guaranteed in all satellite-terminal configurations.
To characterize the network performance, we present a novel stochastic framework for satellite communication. We incorporate randomness into the channel by considering the terminal orientation (azimuth rotation) as random. Using this stochastic framework, we obtain a closed-form expression for the channel distribution in the downlink of cooperative MIMO communication with two LEO satellites and a single LT that is equipped with a uniform circular array (UCA) of antennas.
Based on this characterization, we evaluate performance in the two extreme cases where the LT antennas are very close and very far from each other (i.e., when the UCA radius is small or large). For close antennas, we show that the throughput changes very slowly, and is nearly independent of the terminal orientation. Thus, the throughput can be well-predicted by a deterministic closed form expression. This expression depends solely on the network parameters and on a normalized measure of the satellite separation.
For far antennas, the throughput can change rapidly as the satellites move. In this case, we characterize the performance of the distribution of the instantaneous rate for joint transmission. This distribution serves to characterize the outage probability or to evaluate the throughput of an adaptive network with timely feedback from the LT. The results show that in almost all scenarios, the outage probability is low. That is, dual satellite transmission can increase the downlink rate with high probability.
The rest of the paper is structured as follows. Section II presents the system model for MIMO in satellite communication. Section III details our statistical characterization of the deterministic LOS channel. Section IV furnishes the performance analysis for the cases of small and large antenna separation. Section V introduces supporting numerical results and Section VI notes our concluding remarks.
II. SYSTEM MODEL
We consider a downlink in a satellite communication network that consists of two LEO satellites, each having a single antenna, 2 and a single fixed LT with M > 1 antennas. The two satellites transmit independent data streams simultaneously to the terminal. We assume, for simplicity, that the terminal decodes each stream using zero-forcing (ZF) and further assume single-user decoding (SUD); i.e., the terminal decodes the signal of each satellite independently, while treating the signal of the other as noise.
A. COORDINATE SYSTEM
We use two coordinate systems, Cartesian and Spherical, both centered at the terminal's location. The Cartesian coordinate system is depicted in Figure 1, wherek is a unit vector normal to Earth (z-axis),î is a unit vector pointing east (x-axis), and similarly,l points north (y-axis). Given a point with Cartesian coordinates (a x , a y , a z ) = a xî + a yl + a zk , its spherical coordinate representation is (r, θ, φ) where The terminal antennas are arranged in a UCA of radius d where the angle between adjacent pairs is 2π/M , as depicted in Figure 2, and the angle between antenna m and theî-axis is given by Explicitly, the Spherical coordinates of antenna m are (d, γ m , 0) and the Cartesian coordinate representation of the antenna location is denoted as (b m,x , b m,y , b m,z ).
Recalling that each satellite has a single antenna, the Spherical representation of satellite is given by (r , θ , φ ), as depicted in Fig 2. The corresponding Cartesian coordinates is given by (a ,x , a ,y , a ,z ).
B. SYNCHRONIZATION AND CHANNEL MODEL
The main factors characterizing the LOS channel are the distances between each LT antenna and each satellite. The distance between antenna m in the LT and satellite is given by: In Spherical coordinates it is given by where (r , θ , φ ) denotes the satellite coordinates at t = 0, and we used the zero elevation angle (φ = 0) of all terminal antennas. We further define m, where the latter approximation neglects terms of order O(d 2 /r 2 ) because r d. The baseband signal, transmitted by satellite ∈ {1, 2}, is given by , T is the symbol duration,τ is satellite timing offset, p(·) is the pulse shape andf is the satellite hardware frequency-offset. The pulse shape, which is normalized, is chosen such that it does not induce inter symbol interference (ISI). Thus, the pulse auto-correlation is the Kronecker delta function.
Using the LOS channel, and considering the large difference between the satellite distance and the LT antenna separation (r d), the received baseband-signal at antenna m is modeled by where ϒ is a constant expressing the effect of transmitter hardware, antenna gains, and atmospheric and rain attenuation; c is the speed of light, f c is the carrier frequency and f c c dr dt is the Doppler frequency shift of satellite . The additive noise, n m (t), is a complex white Gaussian with two sided spectral density of N 0 . Note that the model of (7) includes several standard approximations. First the gap between the distances from the satellite to the LT center and to its m antenna, m, (cf. (6)), is neglected in the attenuation term ( ϒ r ) and in the signal delay (s(t − τ )). This small gap affects only the argument of the exponent, where it is multiplied by f c .Furthermore, the dependence of r in time is considered only in the Doppler shift. The Doppler shift is also assumed to be the same for all LT antennas due to their proximity.
The LT employs two synchronization circuits. Each circuit estimates (and tracks) the overall time offset τ =τ + r c and frequency shift f =f + f c c dr dt of one satellite. This can be done using various known schemes for synchronization in LEO satellite networks (which have been comprehensively investigated in the last few decades). For example, [14], [15], present synchronization for CDMA communication, where the inherent interference mitigation between different CDMA spreading codes allows a simple separation of the different satellite signals. Current works focus primarily on 5G networks and present synchronization algorithms that are robust to multi-satellite transmission and large Doppler shifts, while employing the standard 5G primary synchronization signals (e.g., [16]). Note that once the ZF equalizer is initialized, each circuit has a clean signal from its corresponding satellite. Thus, in the following, we assume that τ and f are perfectly known at the LT, for = {1, 2}.
The receiver separately compensates for the delay, τ , and frequency shift, f , of each satellite. This operation creates two different signal branches for every m, i.e., y m, (t) = y m (t)e j2π(f c τ +f t) (8) for ∈ {1, 2}. Each signal branch employs a match-filter, matched to p(t), and synchronized to its respective satellite. Thus, the sampling times at the branch that corresponds to satellite are nT + τ , n ∈ Z. The resulting signal is where¯ = 3 − is the index of the other satellite and For = 1 the latter can be written as and λ = c/f c is the wavelength of the carrier frequency. Correspondingly, for = 2, one obtains As both signals can be expressed in terms ofH, the ZF equalizer designed forH is able to separate the signals completely. We note thatH is not affected by the difference in the satellite timings and Doppler shifts. Hence, the design of the ZF equalizer, as well as its performance, are not affected by the different synchronization of each satellite.
C. CHANNEL CHARACTERISTICS AND SIGNAL TO NOISE RATIO
It is usefull to write the channel matrix as is the gain-matrix, in which each diagonal entry corresponds to the gain of every satellite, and is the phase matrix, where we also substituted (5). The terminal employs a ZF equalizer to separate the symbols transmitted from the two satellites. From (15), the ZF equalizer vector designated to decode the signal from satellite , could be built as a function of H, only; i.e., where e ∈ R 2×1 is a vector of all zeros except the entry that is equal to 1, and we assume that (H H H) −1 exists. Employing the ZF equalizer one obtainŝ Using (17), the terminal can decode the signal from satellite without any interference 3 from the other satellite. The signal-to-noise ratio (SNR) for decoding the data sent by satellite ∈ {1, 2} is given by It is convenient to compare the network throughput to a reference scenario in which only satellite serves the terminal, while the latter employs a maximal ratio combining equalizer. To compare the case of two transmitting satellites to that of a single satellite, we consider two different power constraints. In one case, every satellite has an independent power constraint and the single-satellite SNR is given by While the independent power constraint may be suitable in some cases, it is not completely ''fair'' since the network transmits twice the power when transmitting from two satellites. Hence, the performance gain comes both from the use of MIMO topology and from the increased transmission power.
Thus, to have a fair comparison, we also consider the case where the total transmission power is equal. That is, in the single satellite case, we allow the satellite to double the power. Namely, if the symbol energy for each satellite in the dual case is σ 2 x , then the symbol energy in the single satellite case is 2σ 2 x (3dB higher). Here, the single-satellite SNR is given by SNR s,3dB = As expected, without any interference, the single satellite SNR satisfies namely, the ZF equalizer induces an SNR loss with respect to the single satellite case. For convenience, in most of this paper we consider the independent power constraint (i.e., comparing to SNR s ). The analysis under the second constraint is the same up to a constant of 2. In the numerical section, we show that in most cases, the gain from the use of MIMO outperform single satellite even with the joint power constraint.
III. STATISTICAL CHARACTERIZATION OF THE DETERMINISTIC LOS CHANNEL A. STATISTICAL CHARACTERIZATION OF THE RATES
The joint transmission scheme considered here, in which each satellite transmits an independent data stream, aims at increasing terminal throughput by utilizing the multiplexing gain. However, it induces an SNR loss (22). In the case of a single satellite, the throughput is given by where B is the transmission bandwidth whereas in the twosatellite case it is given by It can be shown that from which it follows that if the SNR loss is negligible, the throughput can indeed be (nearly) doubled. On the other hand, if the SNR loss is significant, the throughput can be even lower than the one obtained with a single satellite. While (24) completely characterizes the network throughput, it does not provide insight into the problem. Because the throughput depends on the distances between every antenna and each satellite. Even small variations in the location or orientation can lead to a significant different throughput. It is therefore important to evaluate these variations in both cases where satellites can track variations and not. We consider two different models:
1) OUTAGE MODEL
In the outage model, the satellites do not track the instantaneous changes in the channel state. In this case, the network decides on a code rate, R O (which can vary from time to time, but not as fast as the channel does). Every codeword is encoded at rate R O , and each satellite transmits a different part of the resulting codeword. We assume that the channel remains constant during the entire codeword and that full CSI is available at the LT (CSIR). If the instantaneous channel cannot support decoding at a rate of R O , the decoding will fail, and we say that the terminal is in outage. In this case, it is important to predict what the success rate will be, i.e., what percentage of the codewords will be successfully decoded at the LT.
2) FEEDBACK MODEL
Here, we assume low-rate feedback from the LT to the satellites, which indicates the achievable rate at any given time. In many cases, such feedback is not a burden on the network and, therefore, practical. The required feedback rate depends on the distances between LT antennas. For example, Figure 10 indicates that the update rate should be of the order of tenths of a second for a UCA radius of d = 30 cm. As the link rate is typically several Mbps, a tiny fraction of that link can provide feedback with negligible delay.
In systems with such a timely feedback, satellites can precisely know the achievable data rate, and consequently, adapt the code-rate to ensure decoding success. In this model, the decoding will (almost) always be successful, but the instantaneous rate of the network will change significantly over time. Hence it is important to predict the characteristics of these variations.
In both models, to gain more insight, it is instructive to consider the network variability as random. To that end, we present a novel stochastic framework, where we assume that the locations of the terminal and the satellites are fixed and known, but the orientation of the terminal is random. More specifically, we assume that the terminal orientation angle, γ 0 , is uniformly distributed over [0, 2π). Our analysis shows that this type of randomness is sufficient to characterize the network throughput.
Thus, in the following we consider the SNR of Equation (20) as random. We further denotẽ that will be referred to as the ''individual instantaneous rate". Moreover, we refer to as the ''instantaneous rate". The operational meaning of these R D is different in the two communication models. In the outage model, R D determines the probability for a given outage-rate threshold R O . Explicitly, the outage probability is defined as In the feedback model, R D is an achievable rate. Thus, the distribution of R D indicates the network-throughput distribution (e.g., the average network throughput isR = E[R D ].
B. ANALYSIS OF THE ''INSTANTANEOUS RATE''
We now analyze the network performance under both the outage and feedback models, by characterizing the distribution of the ''instantaneous rate" (cf. (27)). Comparing (20) and (21) it is convenient to write: where and it can be shown that Note that unlike the two-satellite case, the SNR of the LT served by a single satellite is deterministic; i.e., does not depend on the terminal orientation. Thus it is sufficient to characterize the SNR loss since it is only random variable that effects the ''instantaneous rate". We also note that the SNR loss is independent of ∈ {1, 2} (cf. (30)); i.e., independent of the serving satellite. Thus, R D (24), can be written as We now characterize the distribution of the instantaneous rate through the evaluation of the outage probability. As shown in Appendix A, the outage probability P O , for an outage rate threshold R O , is given by where is the outage SNR loss. Note that µ = 0 is equivalent to R O = B log 2 1 + SNR 1 + B log 2 1 + SNR 2 and hence leads to an outage probability of 1. On the other hand, µ = 1 is equivalent to R O = 0, which means no outage events. Moreover, there exists a value 0 < µ 1 < 1 for which . (23)), the maximum rate of the single satellite.
Another interesting case is when SNR 1 = SNR 2 , where for every µ 1 < 0.5 the throughput is improved in the twosatellite case in comparison to single satellite case (albeit only slightly in the low SNR regime). As indicated by (34), the SNR-loss, |S M | 2 , is a key to determine the outage probability. Therefore, to evaluate whether simultaneous transmission improves system performance, we analyze the distribution of |S M | 2 .
We now characterize the statistical properties of |S M | 2 . Recalling that the receiver employs a UCA, (30) can be written as and, using trigonometric identities, we get · sin(θ 1 ) cos(φ 1 ) − sin(θ 2 ) cos(φ 2 ) . Let then To simplify the exponent argument, denote where and Using the latter notation one obtains The expression for S M in (42) is important as it provides a compact representation of the effect of different network features. This includes non-varying parameters such as the carrier wavelength, λ, UCA radius at the LT, d, and its number of antennas, M . The satellite locations, which are represented solely by u and ψ, vary with time, but slowly enough to be considered constant during the analysis period.
From (41), u can be interpreted as the length of an edge of a triangle, as shown in Fig 3, where the other two edges are equal to cos(φ 1 ) and cos(φ 2 ) and the angle between them is θ 1 − θ 2 . To understand the relationship between u and the system geometry, consider Figure 4, which depicts two unit-length vectors, each pointing from the LT location (point A) toward one of the satellites. Plain is tangent to Earth at point A andk is the normal to plain at A. Note that the triangle in Figure 3 is the one created by projecting the normalized satellite position vectors (cf. Figure 4) into plain . Thus, the edge that connects these two projections has a length u. Note that 0 ≤ u ≤ 2 is the only parameter required to characterize the effect of the satellites location. Furthermore, u = 0 if and only if the two satellites are seen in exactly the same direction from the LT. Thus, u is henceforth dubbed normalized satellite separation. The randomness in S M follows only from the orientation angle γ 0 . We therefore study the effect of γ 0 on the distribution of S M and through it, on the network throughput. Recall that S M affects the throughput only through its squared magnitude: Because the exact distribution of |S M | 2 is difficult to evaluate, in the following we characterize it with simple closed form expressions for the two extreme cases: small and large values of ud/λ.
IV. PERFORMANCE ANALYSIS A. PERFORMANCE ANALYSIS FOR SMALL ud /λ
We now show that if ud/λ is small, the variance of S M is very small. Thus, in this regime, S M can be well approximated by a deterministic function of ud/λ, and we can accurately predict the SNR-loss, |S M | 2 . To do so, we upper bound the variance of S M and show that it is significantly smaller than one (while 0 ≤ |S M | 2 ≤ 1). Recalling that γ 0 , the rotation-angle of the first LT's antenna, is uniformly distributed over [0, 2π ), adding a constant to γ 0 does not affect its distribution up to a modulo 2π. Thus, γ 0 + 2kπ M + ψ mod 2π is also uniformly distributed over [0, 2π). Hence, we can calculate the mean and the where c ∈ R and J 0 (c) is the Bessel function of the first kind and 0th order. Thus, the mean of S M is: and, taking the expectation over (43), the variance of S M is: Since the exact evaluation of the variance, (46), does not provide much insight, we turn to the calculation of an upper bound on the variance of S M , which is presented in the following theorem: Theorem 1: The variance of S M is upper bounded by Theorem 1 shows that for small ud/λ the variance is negligible, and is further reduced as M increases. Thus, in this regime, S M can be well approximated by its mean: S M ≈ J 0 (2πud/λ). To further demonstrate the relevance of this approximation, Figure 5 depicts the variance of S M for different number of receive antennas as a function of ud/λ. It shows, that the variance decreases as the number of receive antennas increases, implying that S M approaches E[|S M |] as M gets larger. Moreover, even for small values of M , the variance is very low. For example, when M = 3 the variance is below 10 −2 for any ud/λ < 0.2. Figure 5 also depicts the upper bound on the variance (47). The figure shows that the bound is indeed useful, and that becomes tight for large M or for small ud/λ.
Using Theorem 1, while approximating S M by its expectation (cf., (45)) and considering small values of udλ, it follows that and Recall that the Bessel function has its maximum at J 0 (0) = 1 and it first zero at x 0 = 2.405. Thus, the system will have zero throughput for ud λ = 0, whereas a maximum throughput is expected for ud λ = x 0 2π = 0.383. Proof: [Proof of Theorem 1] To derive the upper bound on the variance of S M , we use the Taylor series of the Bessel function of the first kind. Thus, (46) can be written as Changing the variable in the last summation using p = p 1 + p 2 , (50) is simplified as follows Before continuing, the following lemma is required. Proof: see Appendix B. The left hand term in the parentheses of (51) can be simplified using Vandermonde's identity: VOLUME 8, 2020 Setting m = n = r = p and k = p 1 , one obtains 2 (54) and by substituting (53) and (54) into (51), it follows that and with rewriting we get We next note that for n = 0 the term in the sum within the parentheses of (55) equals 1/(p!) 2 and is canceled out with the right hand term. Noting, also, that summation terms indexed with n = −a are identical to those with n = a, ∀a ∈ {0, . . . , p/M }, (55) can be written as where the latter inequality follows from p/M = 0 for p < M .
It is now possible to bound the variance. Consider In Appendix C we show that M ,M +2n+1 + M ,M +2n+2 ≤ 0 for any M > 1 and n ≥ 0. Thus which establishes the desirable result.
B. PERFORMANCE ANALYSIS FOR ud /λ 1
We now characterize the performance in the case where ud/λ 1. Reviewing (42), we note that in this regime, tiny variations in the satellites' locations induce corresponding small variations in u and ψ. The latter, however, lead to significant fluctuations in S M . Thus, in this regime we expect the achievable joint transmission rate, R D , to vary rapidly with time (this is demonstrated in the numerical results section). Therefore, instead of trying to predict the exact value of the achievable rate, we turn to characterizing its statistical distribution. Let then, (42) can be written as: Using the theory of quantization error distribution (e.g., [17, p. 353]), if ud/λ 1, the distribution of α k is approximately uniform. Furthermore, if cos γ 0 + 2kπ M + ψ = ± cos γ 0 + 2mπ M +ψ , then α k may be assumed to be statistically independent of α m . This inequality is satisfied for any integer k = m as long as M is odd. We therefore consider two cases: odd M and even M .
In the case of odd M , S M can be approximated by where {α m : m = 0, . . . , M − 1} is a set of i.i.d. random variables with a uniform distribution over [0, 2π). Using (34), the outage probability is given by P O (µ) = Pr |S M | 2 > µ where µ is the threshold SNR loss. Hence If M is even, there is an inherent dependence between the antennas, because γ m+M /2 = γ m + π for every m = 0, . . . , M /2 − 1. Thus and by combining the latter with (60), it follows that Using the uniform-distribution approximation, once again, it follows that for even M the outage probability, (34), can be approximated as The outage expressions in (62) and (65) can be further simplified if M is sufficiently large, by applying the central limit theorem. In Appendix D we show that E[S M ] = 0 and E[|S M | 2 ] = 1 M , for all M > 0. However, if M is odd, the central limit theorem yields a complex Gaussian distribution, whereas even M results in a real Gaussian distribution. Hence, using the central limit theorem, for large enough M , the outage probability for µ > 0 can be approximated by The approximation of (66) is very simple and intuitive. It shows that in the large du/λ regime, the main parameter that determines the success probability is the number of antennas. As the number of antennas become large, the outage probability goes to zero quite fast. Recall that (66) resulted from two subsequent approximations: the uniform distribution approximation and the central limit theorem. The uniform distribution approximation, which led to (62) and (65), becomes accurate as du/λ increases, whereas the Gaussian approximation becomes accurate as M increases. Nevertheless, in the next section we show, numerically, that these approximations are accurate and useful even with only M = 3 antennas. It is therefore possible to predict the system performance with a simple formula such as (66), which is very important for system management and optimization.
V. NUMERICAL RESULTS
In this section we present numerical results that demonstrate the usefulness and accuracy of the derived approximations. All simulations used parameters, which represent typical values in a SatCom system, as follows: the transmit and receive antenna gains are set to 45dBi and 10dBi, respectively. The transmit power (of each satellite) is 40dBm, the bandwidth is 10MHz and the noise power spectral density is N 0 = −170dBm/Hz (at the terminal). The satellites orbit is 1000 km above Earth (implying an orbit radius of 7371Km) and the carrier frequency is 30GHz.
A. SMALL ud /λ
Section IV-A indicates that for small ud/λ the SNR is well approximated by (48). This is demonstrated in Figure 6, which considers two satellites at θ 1 = 0 • , φ 1 = 90 • and θ 2 = 0 • , φ 2 = 60 • , respectively. Thus, u = 0.5 (see (41)) and the first point of maximal performance is predicted at d = 0.38λ/u = 0.76 cm. The figure depicts the SNR for the signal received from satellite 1 when both satellites transmit independent data streams (joint transmission), as a function of the UCA radius, d. This scenario inherently suffers from SNR-loss because of zeroing the interference inflicted by the other satellite. In this figure, triangles depict the maximum and minimum SNR for random UCA orientations (namely, min γ 0 ∈[0,2π) SNR and max γ 0 ∈[0,2π) SNR), whereas the solid line depicts the approximation of (48). For reference, the figure also depicts the SNR for satellite 1 when satellite 2 does not transmit (single satellite transmission). In this case the SNR remains constant as d varies, because there is no interference.
The figure depicts the SNR for two different numbers of receive antennas that are organized in a UCA, M = 3 and M = 8. As expected for low ud/λ, the difference between the maximum and minimum SNR is negligible, and the approximation of (48) is very accurate. The figure demonstrates that in this case, for M = 3 antennas the approximation is very good up to d = 0.6 cm whereas for M = 8 antennas the approximation is good even beyond d = 1 cm.
The LT's ability to cancel the interference depends on ud/λ. As predicted by (48), the SNR approaches zero when d approaches 0, whereas the maximum SNR is achieved at d = 0.76 cm and is identical to the single satellite SNR. In this maximal case, joint transmission can achieve twice the rate of single satellite transmission.
To further illustrate the performance, we next consider the movement of the two LEO satellites in their orbit, while the LT is fixed. We consider two satellites on the same orbit, that pass exactly above the terminal. The separation angle between the satellites is set to 1 • (with respect to the Earth's center). The LT has a UCA of M = 6 receive antennas with radius d = 2 cm, where the first antenna is oriented in the direction of the satellites' orbit.
Due to the satellites' movement, their normalized separation changes with time, ranging from u = 0.128 at time 0 (when both satellites are at an equal distance from the LT) to u = 0.072 at the figure edges. The change in u brings about a change in the achieved rate, as predicted by (49). The figure depicts the approximation of (49) (in square markers) as well as the actually achievable rate evaluated by simulation (in blue solid line). Since the antenna separation and the satellite separation are quite small, the approximation is again very accurate.
For reference, we again show the instantaneous rate obtained by each satellite alone while the other is idle (dashed lines). As discussed above, we also wish to make a stricter comparison, where the total transmission power is equal; i.e., a single satellite transmission where we allow the satellite to transmit twice the power (3dB higher). The achievable rate for this case is marked in dotted lines, and is obviously better than the single satellite transmission with lower power. Nevertheless, the data rate with joint transmission is significantly higher than the single satellite case, even with the double transmit power. Figure 7 also depicts the perfromance of the optimal linear minimum mean square error (MMSE) equalizer. As expected, the MMSE equalizer outperforms the ZF equalizer, but, the gap is very small. This result is compatible with the well known behavior of the ZF equalizer, whose performance approaches that of the MMSE at high SNR. This outcome indicates that spatial multiplexing-gain can be studied using the ZF equalizer.
B. LARGE ud /λ
When ud/λ 1 we expect rapid changes of the achievable rate, with the outage probability approximated by (62), (65) or (66). We first consider a setup where each satellite creates a 30 • with respect to the zenith in opposite directions; i.e., φ 1 = φ 2 = 60 • , θ 1 = 0 • , θ 2 = 180 • , which implies a 60 • separation between the satellites (from the terminal's point of view). Two type of terminals are considered: one with M = 3 and the other with M = 5 antennas, both having a UCA of radius d = 60 cm. From (41) it follows that u = 1 and ud/λ = 60 1. Thus, the approximation in (62) is expected to hold. Based on our model, we assume that the terminal orientation is random, and uniformly distributed over [0, 2π). Figure 8 depicts the outage probability evaluated via Monte-Carlo simulations according to (34) as a function of the outage SNR loss threshold, µ. The figure also depicts the two approximations; that is, the uniform approximation, (62), and the Gaussian approximation, (66). The results show (as expected) that the uniform approximation is very accurate. Moreover, the Gaussian approximation is also good, even for such a small number of antennas.
If the two satellites have the same (single satellite) SNR, SNR s , then |S M | 2 < µ = 1/2 is a sufficient condition to guarantee that joint transmission will yield a higher rate than single-satellite transmission. In other words, if the SNR loss is smaller than µ = 1/2, the data rate from each satellite is at least half the rate obtained in the case of single satellite transmission (albeit only slightly in the low SNR regime). Therefore, it is interesting to inspect the outage probability for a 3 dB loss (µ = 0.5). Figure 8 shows that even for M = 3 the probability of having no more than a 3dB loss is more than 70%. This result guarantees that joint transmission will bring about a throughput increase in most cases of interest. Figure 9 presents the outage probability as a function of the UCA radius, d, for a scenario of 20 • separation between the satellites (from the terminal's point of view). We set one satellite at θ 1 = 0 • , φ 1 = 65 • and the other at θ 2 = 0 • , Figure 8. The figure shows that as the UCA radius increases, the uniform approximation becomes increasingly more accurate. In particular, for d > 100 cm (where ud/λ = 11.25) the accuracy is very good. The Gaussian approximations accuracy increases with M ; hence, it is acceptable when M = 5 but quite poor if M = 2. Note that the probability for an SNR loss of more than 3dB converges to 50% for M = 2 but is less than 10% for M = 5. Figure 10 considers the movement of two satellites on the same circular orbit with 10 • separation (one after the other). Again, the circular orbit passes exactly above the terminal. The terminal utilizes a UCA with M = 5 and a radius of 30 cm (ud/λ = 33.08 at t = 0). The figure depicts the instantaneous rate using joint tranmission from the two satellites, (33), the rates obtained by each satellite on its own while the other is idle, (23), and the rate when a single satellite uses twice the power (+3dB).
As explained above, the large ud/λ regime is characterized by rapid drops in the instantaneous rate, and rapid recoveries. Recall that the dual satellite instantaneous rate has two interpretations. It can represent the achievable rate in an adaptive system that uses timely feedback from the terminal on the link quality. Alternatively, it can show the capability of the system with a fixed transmission rate to support reliable detection (that is, a drop in the instantaneous rate in Figure 10 below the predetermined transmission rate indicates outage events).
The results show that most of the time, the instantaneous rate of joint transmission, R D , is significantly larger than with a single satellite. On average the maximum rate, obtained by a single satellite (with double transmit power) is 69.6Mbps whereas the average for joint transmission is 105.3Mbps. 4 These demonstrate the contribution of joint transmission to the network throughput. As in other MIMO networks, this contribution will be even larger in networks with higher signal to noise ratios.
Finally, we note again that the ZF performance approaches that of the MMSE. This further demonstrates that the ZF analysis is useful in studying the significant spatial-multiplexing gain that lies in multi-user MIMO.
VI. CONCLUSION
This paper explored the feasibility of MIMO in LEO satellite networks. We introduced a novel stochastic framework for satellite communication analysis, where we considered the terminal orientation as random. Using this stochastic framework, we obtained a closed-form expression for the channel distribution in the downlink of cooperative MIMO communication with two LEO satellites and a single LT equipped with a UCA of antennas.
Based on this characterization, we evaluated the performance in the two extreme cases: when the LT antennas are either very far or very close to each other (i.e., when the UCA radius is small or large). In the latter case, we showed that the throughput varies very slowly, and is practically independent of the terminal orientation. Thus, the throughput can be wellpredicted by a deterministic closed-form expression, which solely depends on the network parameters and on a normalized measure of the satellite separation.
In the case of far antennas, we derived simple, yet accurate, approximations for the distribution of the instantaneous rate. These approximations were evaluated numerically and shown to predict the outage probability very well. Furthermore, they showed that in almost all scenarios, the outage probability is low, and dual satellite transmission can increase downlink rate significantly.
In the other case, where the distance between the LT receive antennas is small, we derived an upper bound on the variance of the SNR loss. Due to the low value of this upper bound, we concluded that the terminal rate is nearly independent of its orientation, and can easily be predicted by the normalized satellite separation (or more precisely by the quantity ud/λ). We showed that the throughput reaches its maximum when the antenna separation satisfied ud/λ = 0.38. At this point, the throughput is twice the throughput of single satellite transmission; that is, the transmissions from the two satellites do not interfere with each other. For lower values ud/λ the throughput decreases, and joint transmission is typically not advantageous. .
APPENDIX A PROOF OF EQUATION (27)
In this Appendix we prove (34) and derive the equation for the outage SNR loss, (35). Starting from (26) and substituting (28) and (29) | 9,935.8 | 2020-01-01T00:00:00.000 | [
"Business",
"Computer Science"
] |
Novel long noncoding RNA LINC02820 augments TNF signaling pathway to remodel cytoskeleton and potentiate metastasis in esophageal squamous cell carcinoma
Esophageal squamous cell carcinoma (ESCC) is one of the most common malignant tumors in China. However, there are no targets to treat ESCC because the molecular mechanism behind the cancer is still unclear. Here, we found a novel long noncoding RNA LINC02820 was upregulated in ESCC and associated with the ESCC clinicopathological stage. Through a series of functional experiments, we observed that LINC02820 only promoted the migration and invasion capabilities of ESCC cell lines. Mechanically, we found that LINC02820 may affect the cytoskeletal remodeling, interact with splice factor 3B subunit 3 (SF3B3), and cooperate with TNFα to amplify the NF-κB signaling pathway, which can lead to ESCC metastasis. Overall, our findings revealed that LINC02820 is a potential biomarker and therapeutic target for the diagnosis and treatment of ESCC.
INTRODUCTION
Esophageal cancer (ESCA), which includes esophageal squamous cell carcinoma (ESCC) and esophageal adenocarcinoma (EA), is one of the most common malignancies in the world [1][2][3][4]. In China, almost 90% of ESCA patients are diagnosed with ESCC [5]. ESCC develops in the esophageal epithelial mucosa, with smoking and alcohol consumption being significant risk factors [6,7]. In recent years, ESCC patient prognosis is still poor even though a multidisciplinary approach is used for esophageal cancer treatment [8]. Ultimately, the research on the molecular mechanisms of ESCC is still not clear.
Long noncoding RNAs (lncRNAs) are transcripts longer than 200 nucleotides [9]. Many studies have found that abnormal expression of lncRNAs played an important role in tumorigenesis and tumor metastasis [10]. It can function at the transcriptional level, post-transcriptional level, and epigenetic level [11]. For example, in ESCC, lncRNA MNX1-AS1 can affect proliferation and metastasis [12], while lncRNA FAM83H-AS1 is associated with differentiation and lymph node metastasis [13]. And lncRNA CCAT1 may be a potential therapeutic target for ESCC [14]. Although researchers have found that numerous lncRNAs are abnormally regulated in ESCC, these targets are not used in clinics. In our study, we aim to find more molecular targets for ESCC. Therefore, three paired ESCC tissues and adjacent normal tissue were collected to identify differentially expressed lncRNAs by transcriptome sequencing. Subsequently, we found a novel lncRNA in ESCC, named LINC02820 in the database, and the other names and sequences were shown in Supplementary Table 1. And as far as our knowledge, our study is the first to evaluate the role of LINC02820 in ESCC.
In the present study, we did a series of functional experiments and found that LINC02820 accelerates the ability of ESCC cell lines to metastasize in vivo and in vitro. Then, combined with the KEGG analysis, we proved that LINC02820 can cooperate with TNFα to amplify the NF-κB signaling pathway. Finally, by the RNA pulldown assays, we found that LINC02820 may interact with splice factor 3B subunit 3 (SF3B3) to function. Overall, our findings revealed that LINC02820 is a potential biomarker and therapeutic target for the diagnosis and treatment of ESCC.
MATERIALS AND METHODS Clinical samples
was extracted from samples and frozen at −80°C. All human tissue samples were supported by the Medical Ethical Committee of the FAHSYSU and SYSUCC. Informed consent was obtained for all patients.
Isolation of nuclear, cytoplasmic, total RNA, and real-time PCR (RT-PCR) Total RNA was extracted using TRIzol reagent (Invitrogen) or the RNA Rapid Extraction Kit (EZBioscience) following the manufacturer's direction. And then, the isolated RNAs were used for reverse transcription with cDNA synthesis kits (Yeasen). SYBR Green Master Mix (Yeasen) was applied to the RT-PCR process. Finally, the Roche 96/384 holes real-time PCR system (Roche) was used to detect the expression of genes. The GAPDH is the internal control.
Cytoplasmic and nuclear isolation was performed with the NORGEN kit (NGB-21000), following the manufacturer's instructions. The LINC02820 level was detected by RT-PCR. And U6 was used as the nuclear reference and GAPDH was used as the cytoplasmic reference.
All the sequences of the primers used were shown in Supplementary Table 4. The expression quantity was calculated by the 2 -ΔΔCT method.
Cell transfection siRNA specific to LINC02820 (SiLINC02820) and control scrambled RNA (SiNC) were synthesized by RiboBio (Guangzhou, China), which is used for transient transfection. The CRISPR inhibit (CRISPRi) method was used to construct a sgRNA lentiviral vector to inhibit the expression of LINC02820 at the transcriptional level, and the relevant sequence is listed in Supplementary Table 5. The LINC02820 overexpression vector (OE-LINC02820) was designed by Kidan Bio company (Guangzhou, China), and the empty vector was used as the negative control (Vector).
In the experiment, siRNA was transfected into cells with Lipofectamine 3000 Transfection Reagent (Thermo Fisher), and the transfection concentration was 100 nM. For lentivirus infection, the lentivirus kit (GeneCopoeia) was used according to the instructions.
For the MTS experiment, 1000 cells/per well were cultured in 96-well plates in which MTT reagent was added 20 μl/per well according to the protocol, and absorbance was read at OD490 after incubated at 37°C for 2 h.
For the colony formation assay, 500 cells/per well were cultured in sixwell plates for 2 weeks. Then colonies were fixed, stained, and count.
Migration, invasion, and wound healing assays
Transwell assays and wound healing assays were applied to explore the migratory and invasive abilities of the ESCC cells. For the wound healing assays, 2 × 10 5 cells were cultured in 6-well plates. When the cells grew all over the six-well plates, the sterile tips were used to make the wounds. Images of the wounds were captured at 0 h and suitable time (12/24 h/36/ 48 h, etc.) were captured under the phase-contrast microscope. The Migration Index was calculated as follows: For the migration assay, the 1 × 10 5 cells were suspended in 200 μl serum-free medium and added to the top chamber (Corning). And 800 μl DMEM with 20% FBS was added to the lower chambers. For the invasion assay, the upper chamber membranes were coated with Matrigel (Corning), the treatment method is the same as the migration assay. After 24 or 48 h, the cells in the bottom chamber were fixed with Methanol, stained with 0.1% crystal violet, and then counted under the microscope.
Western blotting
Cells were lysed with RIPA lysis buffer (Beyotime) and added the protease inhibitor. Next, the concentration of proteins was measured by a BCA Protein Quantification Kit (Yeasen). The standard western blotting protocol was followed. The primary antibodies included antibodies specific for EMT markers, Desmoplakin
Immunofluorescence assays
The 2 × 10 4 cells were cultured in eight-well glass slides (Millipore). After the cells were adherent, they were fixed with 4% paraformaldehyde. Cells were then incubated with antibodies overnight at 4°C. Afterward, the cells were stained with secondary antibodies. DAPI (Solarbio) was used to counterstain the nuclei. Final images were taken via fluorescence microscopy.
RNA-fluorescence in situ hybridization (RNA-FISH)
The probe against LINCO2820 was marked with cy3, which was designed by RiboBio (Guangzhou, China). The 2 × 10 4 cell samples were placed in 8-well glass slides (Millipore) until they adhered to the slides. All subsequent steps followed the manufacturer's directions. Finally, confocal microscopy was used to obtain the images.
Animal experiment
Twenty-six of 3-to 4-week-old male BALB/c nude mice were supplied by Gempharmatech-GD (Guangzhou, China). The Institutional Animal Care and Use Committee at SYSUCC approved all animal experiments in this study.
Twenty-six mice were randomly divided into two groups. The experimental group (n = 13) was injected with the K30 cell line in the footpad, which was infected with lentivirus with an overexpression vector (OE-LINC02820). The K30 cell line transfected with an empty vector was the control group (n = 13). The 1 × 10 6 cells in 50 μl DMEM were injected into the footpad of each mouse to generate a primary tumor. After 4 weeks, the popliteal lymph nodes were isolated and collected in RNAlater solution (Beyotime). And Lymph node volume was measured and extracted. Metastasis was assessed by RT-PCR using specific primers for human HPRT, which does not react with the mouse gene [21,22]. When the CT ≥35, it defined as non-metastasis was, and metastasis was defined as CT <35. The primers were shown in Supplementary Table 4. RNA Pull-down and mass spectrometry LINC02820 and its antisense strand were transcribed in vitro using MEGASCRIPT T7 Kit (Thermo Scientific) and then biotin-labeled with the Pierce RNA 3'-End Desthiobiotinylation Kit (Thermo Scientific). RNA pulldown was performed with Magnetic RNA-Protein Pull-Down Kit (Thermo Scientific) following the manufacturer's protocol. Eluted proteins were detected by mass spectrometry and Western blot.
RNA immunoprecipitation (RIP)
RNA immunoprecipitation (RIP) assays were performed using the EZ-Magna RIP RNA-Binding Protein Immunoprecipitation Kit (Merck Millipore). According to the manufacturer's protocols, the lysis of ESCC cells was incubated with SF3B3 antibody and the wash times, finally, precipitated LINC02820 was detected by RT-PCR.
RNA-seq
The total RNA of K180 and K30 functional cell lines were extracted using a Trizol reagent kit (Invitrogen, USA) according to the manufacturer's protocol. RNA quality was assessed on an Agilent 2100 Bioanalyzer (Agilent Technologies, USA) and checked using RNase-free agarose gel electrophoresis. After total RNA was extracted, eukaryotic mRNA was enriched by Oligo(dT) beads. Then the enriched mRNA was fragmented into short fragments using fragmentation buffer and reverse transcript into cDNA with random primers. And sequenced using Illumina HiSeq2500 by GeneDenovo Biotechnology.
Statistical analysis
GraphPad Prism 8.0 and SPSS 26.0 (IBM) were used for data analyses and visualization. The analysis of correlations between LINC02820 levels and the clinicopathological parameters of ESCC was performed using the χ 2 test (calculated by the Pearson chi-square test or Fisher test). Kaplan-Meier survival analysis was used for survival analysis. Differences between the two groups of data were determined with the Student's t-test. For all analyses, a p value <0.05 was considered statistically significant. All data represent the mean ± SD.
RESULTS
LINC02820 are upregulated in ESCC and associated with ESCC pathological stage We identified 27 differentially expressed lncRNAs in the three paired ESCC tissues and normal tissues by high-throughput RNAseq (|log2FC|> 1 and a p value <0.05) ( Fig. 1A and Supplementary Table 6). We found that some lncRNAs were upregulated in ESCC samples. Subsequently, we focused on the LINC02820, which had no reports on its function in cancers. Our pan-caner analysis of LINC02820 showed that it was increased in many cancers, especially esophageal cancer (Fig. 1B and Supplementary Table 7).
Furthermore, we measured the expression levels of LINC02820 in 86 paired ESCC tumors and normal tissues. The results showed that the LINC02820 levels in ESCC were abnormally increased compared with normal tissues (Fig. 1C, p < 0.001). The receiver operating characteristic (ROC) curves of 86 paired samples also implied that LINC02820 can be a biomarker for the ESCC (Fig. 1D, AUC = 0.8215, p < 0.001). To further investigate the relationship between LINC02820 levels and the pathological features of ESCC patients, we analyzed the expression of LINC02820 in 86 patients who collected the relevant pathological parameters and divided them into groups. They were then classified according to LINC02820 median expression level. We found that the LINC02820 levels were significantly related to the ESCC pathological stage ( Table 1, p < 0.001). The analysis between the expression of LINC02820 and patient overall survival suggests that a high LINC02820 expression might link to a poor prognosis, unfortunately, the p valve is not significant and could be caused by samples insufficient (Fig. 1. E, p = 0.059). These results suggest that LINC02820 may play an important role in ESCC.
LINC02820 doesn't affect proliferation but promotes migration and invasion of ESCC in vivo and in vitro
To determine the role of LINC02820 in ESCC, we examined its expression in ESCC cell lines by qRT-PCR ( Fig. 2A). We found that LINC02820 levels in K180 and K410 cell lines were significantly higher than in the immortalized normal esophageal epithelial cell line (NE1). We then knockdown LINC02820 expression in K180 and K410 cells by transfection of siRNA (SiLINC02820) (Fig. 2B). In addition, the CRISPRi method was used to inhibit the expression of LINC02820 for a long time (Fig. 2C, D). Meanwhile, we found that the copy numbers of LINC02820 in K180 and K410 were decreased after inhibiting the expression of LINC02820 (Supplementary Fig. 1A). Also, we chose the K30 and EC109 cell lines to overexpress (Fig. 2B), which were a low copy of the LINC02820 ( Supplementary Fig. 1A).
Then, through both MTS and colony formation assays, we found that changing LINC02820 expression did not affect the proliferation of ESCC cell lines ( Fig. 2E and Supplementary Fig. 1B-D). However, the migratory and invasion abilities were decreased after LINC02820 was silenced and inversely increased in overexpressed cell lines via Transwell assays (Fig. 3A-C and Supplementary Fig. 1E-G). Similarly, ESCC migratory and invasion abilities declined when we restrained LINC02820 expression in the wound healing assays (Supplementary Fig. 2A-C, G, H). In contrast, ESCC migratory ability increased when LINC02820 was overexpressed ( Supplementary Fig. 2D-F). For in vivo experiments, spontaneous lymph node metastasis experiments were performed to explore ESCC migratory and invasive abilities by injecting an overexpressed K30 cell line into the mouse footpad. We observed that the lymph nodes of OE-LINC02820 mice exhibited more metastasis than the control mice (Fig. 3D, E). In summary, those data suggest that LINC02820 is crucial for the metastasis of ESCC.
LINC02820 promotes metastasis of ESCC through the cytoskeleton remodeling
To explore how LINC02820 promotes ESCC metastasis, we assessed their epithelial-mesenchymal transition (EMT) progression, as it usually affects migration and invasion abilities. However, Fig. 2 The construction of functional cell lines and LINC02820 does not affect proliferation in ESCC Cells. A The expression of LINC02820 in ESCC cell lines. B Analysis of LINC02820 in ESCC cells transfected with siRNA(SiLINC02820) or negative control (SiNC), and transfected with overexpression plasmid (OE-LINC02820) or empty vector plasmid (Vector). C Analysis of LINC02820 in ESCC cells transfected with LINC02820 suppression plasmid (sg-1, sg-2, sg-3, and sg-4) or control plasmid (lenti-guide or guide). D Schematic diagram of CRISPRi/dCas9 and the fluormicrographs of K180/K410 cells transfected with LINC02820 suppression plasmid. E The proliferation of K180, K410, K30, and EC109 cells by MTS assay and colony formation experiment. *p < 0.05, **p < 0.01, ***p < 0.001. we found that the change of LINC02820 expression did not impact numerous EMT markers (Fig. 3F, and Supplementary Fig 3A, B). This indicated that LINC02820 may not work via the EMT pathway. So, we examined the cytoskeleton and invadopodia to explore the possible way, which is instrumental for cancer cell migration [23,24]. We firstly examined markers for invadopodia, including Cortactin, N-WASP, and phosphorylated N-WASP (p-N-WASP), and discovered that the protein levels of Cortactin and p-N-WASP were decreased when LINC02820 was silenced in K180 and K410 cells (Fig. 3G). The phenomenon was the opposite in K30 and EC109 (Fig. 3G). This prompted us to investigate whether LINC02820 can induce F-actin malfunction by transforming it into G-actin. We found that LINC02820 was decreased in K180 and K410 cell lines, F-actin levels decreased and cellular morphology changed (Fig. 3H). Whereas, F-actin increased after LINC02820 overexpression in K30 and EC109 cell lines (Fig. 3I). These data show that cytoskeleton remodeling is also an important factor affecting metastasis in ESCC.
LINC02820 is mainly distributed in the nucleus and might be involved in the TNF/NF-κB signaling pathway It has been reported that the function of lncRNA relies on its subcellular localization [9]. To further discern LINC02820's mechanism, we conducted a positioning test LINC02820. We demonstrated that LINC02820 is enriched in the nucleus of K180 and K410 cells (Fig. 4A-C). In addition, the nuclear LINC02820 has been inhibited when LINC02820 is downregulated in K180 and K410 cells (Supplementary Fig. 4A), on the contrary, nuclear LINC02820 has been augmented in K30 cells transfected with OE-LINC02820 ( Supplementary Fig. 4A). what's more, we used lncATLAS (https://lncatlas.crg.eu/) to predict the subcellular location of LINC02820 and found it may be located in the nucleus ( Supplementary Fig. 4B).
Next, we aimed to explore the potential molecular mechanisms of LINC02820 in ESCC. KEGG analysis of two pairs of matched cells indicated that the TNF signaling pathway (screened by |FC|)≥1.2 and p value <0.05) in ESCC was involved in tumorigenesis (Fig. 4D, E). In a previous study, it has been verified that the TNF signaling and NF-κB signaling pathways are correlated [20]. Fortunately, we also found that the TNF signaling and NF-κB signaling pathways were enriched in the K30 cells (screened by |FC|)≥2.0 and p value <0.05), indicating that LINC02820 may regulate these pathways (Fig. 4F). Moreover, when LINC02820 was altered, we found the downstream NF-κB signaling pathway factors, such as ICAM1, CCL2, CCL4, and CXCL3, also changed, further indicating that LINC02820 may function by TNF/NF-κB signaling pathway (Fig. 4G).
Furthermore, we found that the change of the LINC02820 makes no sense to the upstream NF-κB signaling pathway of IKKα, IκBα and its phosphorylation (Fig. 5A), but may affect IKKβ and its phosphorylation, though the effect is small (Fig. 5A). However, the change of LINC02820 impacts p65 and its phosphorylation (p-p65) changed, obviously (Fig. 5B). We found that LINC02820 inhibition reduced p65 and p-p65 in ESCC cells (Fig. 5B). In contrast, when LINC02820 was upregulated in K30 and EC109 cells, p65 and p-p65 expression increased (Fig. 5B). And the cytosolic protein and nuclear protein were also separated to detect p-p65 in the nucleus, we found that when LINC02820 was knocked down, nucleic p-p65 was decreased (Fig. 5C) whereas LINC02820 overexpression resulted in an increased nucleic p-p65 (Fig. 5D). Altogether, these data imply that LINC02820 may work in the TNF/ NF-κB signaling pathway.
LINC02820 cooperates with TNFα to amplify the NF-κB signaling pathway and reconstruct the cytoskeleton To further analyze the relationship between LINC02820 and TNF/ NF-κB signaling pathways, particularly with migration and invasion capabilities, we used tumor necrosis factor-alpha (TNFα) to stimulate K30 and K180. In the assays, we used low and high concentrations of TNFα (20 or 50 ng/mL) to stimulate ESCC cell lines at different time points (0, 5, 10, 15, 20, and 30 min). We found that the expression of p-IKKα, p65, and p-p65 were altered (Figs. 5E, F, 6A, B). By looking at the cytosolic and nuclear protein of p65 and p-p65, we found that LINC02820 cooperates with TNFα to promote the nuclear translocation of p65, thus regulating the NF-κB signaling pathway ( Supplementary Fig. 5A-D).
And we also found that the migration and invasion abilities of K180 with lower LINC02820 expression exhibited partial recovery when TNFα was added (Fig. 6C). What's more, K30 cells with overexpressed LINC02820 exhibited significantly improved abilities after TNFα was added (Fig. 6D).
In addition, immunofluorescence assays of p-p65 and F-actin visually demonstrate that the level of p-p65 entering the nucleus was increased when LINC02820 was reduced while TNFα was added, notably, the level of p-p65 entering the nucleus was still lower than that of the control group (Fig. 7A). Moreover, when LINC02820 was overexpressed and TNFα was added in K30 and EC109, p-p65 entering the nucleus was increased and the cytoskeleton remodeled (Fig. 7A).
Therefore, we believe LINC02820 synergies with TNFα to magnify the NF-κB signaling pathway and promote cytoskeletal remodeling, thus affecting ESCC metastasis.
LINC02820 may interact directly with SF3B3 to participate in the NF-κB signaling pathway through alternative splicing To further clarify how LINC02820 collaborates with TNFα to promote ESCC metastasis, RNA pull-down combined with mass spectrometry was performed. We prepared LINC02820 probes and the antisense strand as a control (Fig. 8A) and incubated them with whole-cell lysates of endogenous high LINC02820 expression K180 cells and exogenous high LINC02820 expression K30 cells. Mass spectrometric indicated that LINC02820 may interact with SF3B3 ( Fig. 8B-D and Supplementary Fig. 5E), which was further confirmed with western blot (Fig. 8E). RIP assays showed that SF3B3 can directly interact with LINC02820 in ESCC cells (Fig. 8F).
Furthermore, the correlation analysis in ESCA showed that LINC02820 expression was positively correlated with SF3B3 expression and SF3B3 expression positively correlated with p65(RELA) expression (Fig. 8G). Considering SF3B3 is a subunit of U2, a small nuclear ribosomal protein, and is involved in the alternative splicing of RNA. Taken together, we conclude that LINC02820 may function by interacting directly with SF3B3 and contribute to metastasis in ESCC by regulating the NF-κB signaling pathway (Fig. 8H).
DISCUSSION
ESCC is a global oncology problem and metastasis is one of the main reasons leading to poor prognosis of ESCC patients. Understanding the metastatic molecular mechanism of ESCC will provide more treatment strategies for ESCC patients and improve the prognosis of patients with ESCC. In recent years, several studies have shown that lncRNA can participate in ESCC metastasis. For example, lncRNA VESTAR can promote lymph node metastasis in ESCC [25], and lncRNA CASC9 upregulates the expression of LAMC2 to promote metastasis [26]. And lncRNA HOTTIP can regulate ESCC metastasis at both transcriptional and post-transcriptional levels [27]. Those disease-associated lncRNA signatures may be useful for developing novel biomarkers and therapeutic targets for cancers, especially with the development of delivery systems for RNA therapy, like polymer-based, lipidbased, and conjugate-based drug delivery systems [28]. In our study, we found the F-actin had remodeled with the change of LINC02820 and may form "invadopodia" on the cell surface. The cytoskeleton is the driving force of cell movement, which are crucial for tumor cell metastasis [29,30]. Therefore, we believe that LINC02820 is crucial for ESCC.
Meanwhile, many reports are indicating that lncRNAs impact the metastasis of ESCC through the NF-κB signaling pathway. For example, lncRNA NKILA can inhibit the metastasis of ESCC by the NF-κB signaling pathway [31], and lncRNA FTH1P3 regulated metastasis and invasion of ESCC through SP1/NF-κB pathway [32]. Moreover, lncRNA FMR1-AS1 upregulates the level of c-MYC and activates the NF-κB signaling pathway to promote the invasion of ESCC [33]. All data show that lncRNA and NF-κB signaling pathways can function in ESCC. In the present study, we also found that LINC02820 magnifies the NF-κB signaling pathway by promoting p65 translocation into the nucleus under TNFα stimulation. And through the RNA pull-down-MS experiment, we have found that LINC02820 interacts with SF3B3, which is a part of the spliceosome and participates in the precursor-mRNA (pre-mRNA) splicing reaction [34]. It is reported that SF3B3 is a key regulator of pre-mRNA splicing of EZH2, therefore regulating tumor development [35,36]. Thus, we speculated that LINC02820 might interact with SF3B3 to influence the alternative splicing of pre-mRNA for some specific genes (we called "X") and further activate the NF-κB signaling pathway to promote the metastasis in ESCC.
However, we are still left with some questions. We still don't know what caused the dysregulation of LINC02820. There are many reasons for it, such as gene copy number change [37], DNA methylation alteration [38], and transcript stability [39]. And we do not fully understand the specific molecular mechanism of LINC02820 interacting with SF3B3 to affect the pre-mRNA alternative splicing and impact the NF-κB signaling pathway. These all need more experiments to prove.
In summary, our study found a novel lncRNA, LINC02820, which is upregulated in ESCC and impacts metastasis of ESCC via cytoskeletal remodeling. According to a series of assays, we found that LINC02820 may bind to SF3B3 to function, and can cooperate with TNFα to regulate the NF-κB signaling pathway to work in ESCC (Fig. 8H). All in all, our findings contributed to revealing the molecular mechanism of ESCC and revealed that LINC02820 may be a new target for the diagnosis and treatment of ESCC.
DATA AVAILABILITY
Raw data from this study have been deposited to the Research Data Deposit database (www.researchdata.org.cn) under accession number RDDB2022788640. | 5,471.2 | 2022-11-10T00:00:00.000 | [
"Biology"
] |
Tweeting # humanwaste : A practical theological tracing of # humanwaste as a trend on Twitter
Different and divergent facets of human existence are increasingly becoming embodied within a digital domain. The social media platform, Twitter, comprises an important expression of the digital world and social media, but also of popular culture. In a practical theological tracing of the theme of human waste on Twitter, new contents and meaning related to this concept are mapped out in a variety of categories. On the basis of existing and newly developed research methodologies, an exploration is conducted in order to indicate how the digital world can assist in the creation of new empirical realities, hermeneutic outcomes and strategic involvement. In this tracing of human waste as a theme on Twitter, accents of a possible lived spirituality are sounded out and verbalised. It is on the basis of these descriptions that possibilities unfold for new practical theological orientations, both for the present and the future.
Introduction
In the highly-acclaimed science fiction film, Gravity (2013a), the impact and danger of spacelitter was demonstrated to viewers in a gripping manner by means of 3D-technology.In one scene in the movie, portraying a space-adventure, the brilliant medical engineer on her first shuttle mission, Dr Ryan Stone (Sandra Bullock) says to veteran astronaut Matt Kowalsky (George Clooney): 'Clear skies with a chance of satellite debris' (Gravity 2013b).
This striking film leads the viewer to the important realisation that the problem and impact of human waste 1 should be understood on multiple levels and within various different contexts.Arising from this important perspective and in conjunction with the theme of the conference 2 , namely human waste, I will explore and discuss the occurrence and meaning of human waste within the space of the social media platform, Twitter, in this contribution.Expression is given to a practical theological reflection which is concrete and contextual: 'This way of thinking is always concrete, local, and contextual, but at the same time reaches beyond local contexts to transdisciplinary concerns' (Müller 2009:205).During this specific research process, a search is conducted within the context of communication technology for associated thematic markers such as, inter alia, the meaning of human waste and the role of popular culture, as embodied on a social media platform such as Twitter.An exploration of and involvement in this reality confirms a relevant and pragmatic practical theological contribution.
The structuring of the presentation is informed by the generally accepted ongoing circular and spiral practical theological movement between practice and theory (Browning 1991:84), and systematised on the basis of descriptive-empirical, normative and pragmatic perspectives.By making use of Osmer's (2008:4) four-question practical theological inquiry 3 as a grid for mapping this envisaged contribution, my presentation will focus on the following four main aspects: • Firstly, by asking, 'what is going on?', Twitter will be described as a possible expression of the rise and influence of the new social media phenomenon, creating a so-called third space for reflection 'requiring new logics and evoking unique forms of meaning-making' (Campbell 2013b:4).• Secondly, the anatomy of human waste in the digital age will be investigated (i.e.'why is this going on?').• Thirdly, an exploration of the art of hermeneutics will be conducted in tracing the expressions of human waste as a trend (#humanwaste) on Twitter (i.e.'what ought to be going on?').• Fourthly, the quest for ways in which the tracing of human waste as a trend and the contribution towards possible new and relevant articulations of a pragmatic practical theological involvement will be addressed (i.e.'how might we respond?').
1.'Rules governing defecation, hygiene and pollution exist in every culture at every period in history' (George 2008:n.p.).
2.Paper delivered at the annual meeting of the Society for Practical Theology in South Africa, held at the University of Pretoria, South Africa (22-24 January 2014).The theme of the conference was 'Practical theology in Africa and human waste'.
3.'What is going on?'; 'Why is this going on?'; 'What ought to be going on?', and 'How might we respond?' The addressing of this fourth question will be supported by a seven-step methodology associated with an investigation in transversal rationality, as accommodated within a postfoundational practical theology (Müller 2004:300).
What is going on? Twitter and a digital world
As background to the interpretation of the reality of a digital world, various scholars (Campbell 2011;Campbell 2013a;Flew 2008;Hassan 2008;Wagner 2012;) 4 point to at least three driving factors currently leading towards further development and demarcation of the digital landscape, namely the: • continuing development and evolution of the Internet • connectivity and mobility brought about by the Internet and specific apparatuses such as cellular telephones and tablets • influence and magnitude of so-called social media.
All three of these factors are addressed in the focus on the use of the social media platform, Twitter.
Twitter, as a well-known social media platform, is indicated as the chosen praxis terrain for the execution of the project.The motivation for this can be found on a variety of levels.Firstly, Twitter is currently one of the most rapidly-growing social media platforms.At the end of April 2014, Twitter had 255 million monthly active users out of a total of a billion registered users with a Twitter account (Smith 2014).With these statistics in mind, Twitter indeed is a good expression of a digital world with the accentuation of aspects such as, inter alia, mobility and fluidity of information.Secondly, by means of its character and dynamics, Twitter offers access to nationally and internationally available empirical data for analysis.
As part of a descriptive-empirical movement of reflection in the present research process, an endeavour is made to describe the dynamics of the involvement with the indicated praxis.Twitter, developed in 2006(Zappavigna 2012), is generally known as a microblog as it offers the user an opportunity to send a message within the scope of 140 characters (Van Dijk 2011:333; Wagner 2012:120): These messages, known as 'tweets', can be sent through the Internet, mobile devices such as Internet-enabled phones and iPads, and text messages.But unlike status updates, their strict limit of 140 characters produces at best eloquently terse responses and at worst heavily truncated speech.(Murthy 2013:n.p)Twitter has been called the 'SMS of the Internet', with the difference that, unlike an ordinary SMS, a Twitter message 4. Grieve (2013:115) speculates on the four major features of digital practice in the new future: 'First, the web will be smarter, knowing not just what users say, but what they mean.We will see more semantic content, and the applications that support it ... Second, new media will be mobile and we will see an increase in augmented reality (AR) in which digital media are laid over physical real-world environments ... Third the web will grow more interactive ... Lastly, more and more applications will be outsourced to the cloud, with users accessing information stored on the web remotely from netbooks, tablet computers, smart phones, or other devices ... What combination of these features of new media will win out we cannot tell.' is normally visible to every user of the Twitter platform.The transmission of messages, or 'tweets', is conducted from an individual's Twitter account where the user has the option to create their own profile through the use of a Twitter address or a 'handle' and a biographical description with a photograph and some personal background information (Murthy 2012(Murthy :1059;;Qiu, Lin, Ramsay & Yang 2012:710).Naturally, all these variable factors provide the constituents for an exceptionally dynamic interaction leading to the following possibility: Twitter has the potential to increase our awareness of others and to augment our spheres of knowledge, tapping us into a global network of individuals who are passionately giving us instant updates on topics and areas in which they are knowledgeable or participating in real-time.(Murthy 2013:n.p.)Why is this going on?Thinking before tweeting Various authors and researchers have indicated that as citizens of a new digital world, connectedness has become the new passport.In his well-known book, The world is flat: A brief history of the twenty-first century, Friedman (2006:8) writes that the pathways of the world have changed in the wake of, inter alia, the significant developments brought about by various kinds of communication technology, as a result of which more and more people are now able to come into contact with other people across the world.Castells ( 2006) sums up the situation by referring to: [T]he new social structure of the Information Age, which I call the network society because it is made up of networks of production, power, and experience, which construct a culture of virtuality in the global flows that transcend time and space.(p. 381) This passport of connectedness is opening up doorways to new worlds where the connectedness and acceleration of life (Rushkoff 2013:n.p.) is mediated through the transformational mobile device (Sweet 2012:n.p.), a growing and evolutionary Internet availability and the development of associated social media platforms (Campbell 2011:1-18).
In their recent study, Aiello et al. (2013) rightly indicated: As social networking services progressively diffuse in more geographical areas of the world and penetrate increasingly diverse segments of the population, the value of information that is collectively generated on such online platforms increases dramatically.In fact, interactions and communication in social media often reflect real-world events and dynamics as the user base of social networks get wider and more active in producing content about real-world events almost in real-time, social media streams become accurate sensors of real-world events.(p.1268) Twitter has been selected as a typical embodiment 5 of this so-called 'mobinomic world' (Knott-Craig 2012:n.p.) demarcating the formation of a virtual ecosystem of connections across various spheres and layers of life.
5.'With the rising popularity of social networking software, questions continue to emerge regarding new forms of technologically mediated community.Issues being explored include how the blogosphere reshapes our notions of community and how Twitter followers can cultivate a sense of community through creating interlinked personal networks' (Campbell 2013b:67).
Accessing the service most likely through wireless Internet mobile devices, Twitter provides a platform for users to make use of this microblogging site 'to present themselves through ongoing "tweets", revealing a self that is both fluid and emergent' (Wagner 2012:120).
What ought to be going on?Practical theological tweeting?
The interpretation of written texts, as presented in the documents associated with the Christian tradition, as well as of 'the living text of human action' (Brown 2012:112), comprises part of the dynamics of the task of theological hermeneutics (Stiver 2003:178).The interest of practical theology in such practices is confirmed by newer developments that bear the accent of an interest in practically driven events that are contextually and concretely placed within everyday life.
In exploring the art of hermeneutics with a view to tracing the expressions of human waste on Twitter, I proceed from the assumption that '[t]heology is not for Sundays only ... Theology is an everyday affair ... Theology not only articulates beliefs but suggests "designs for living"' (Vanhoozer 2007:7).Underlying this acknowledgement is the conviction that practical theology encapsulates a hermeneutics of the lived religion, in which preference is given to the praxis itself and to the knowledge concerning God that is being developed, found and lived within this praxis (Ganzevoort 2008:11-12).
Underscoring the perception that the culture in which we live is shaping us (Sweet 2012:n.p.) is the belief that the hermeneutics of popular culture 6 holds the promise of pointing beyond, as Cobb (2005) aptly indicates: Theology of culture depends upon this kind of trust that our cultural expressions can testify to a reality that transcends them -a reality that is really there, that matters, and in which providence is at work.Engaging with popular culture in the expectation that it will reveal 'signals of the transcendent, the presence of grace, rumors of angels' (Vanhoozer 2007:33), I envisage that by means of a hermeneutical practical theology of lived religion, focusing on the praxis of everyday living, tweets regarding human waste can be traced 7 and described.
Within the practice of the commonplace, which implies actuality and relevance, amongst other aspects, a quest is 6.'… popular culture is therefore the shared environment, practices, and resources of everyday life for ordinary people within a particular society' (Lynch 2005:14).
7.'When I use the word 'tracing', that is, not only because it sounds so well in combination with sacred.It is especially because of the more than adequate meanings it carries.The first is the archaic meaning of traversing or travelling over a certain area.The second involves meanings like following or tracking the footprints of someone or something, like when on a hunt.Metaphorically, it can be transposed to studying something in detail, like the history of an idea, the whereabouts of money moving around the world, or one's ancestry.It may also refer to the search for traces, signs, evidence, or remains of something that indicate a certain activity or presence.Tracing then has to do with reconstructing and developing knowledge.
The last type of meaning has to do with drawing or sketching.It may be the careful forming of letters or figures or even certain kinds of decoration, but usually it is a form of copying by hand through a transparent sheet.Here tracing has to do with constructing, modeled after an external reality' (Ganzevoort 2009:5).
conducted for the embodiment of a lived religion and the transcription thereof in possible new, normative categories as expressed, inter alia, in a so-called ordinary theology 8 .The development of a postfoundational practical theology, within which an orientation of transversal reality is sustained, comprises a further enlargement of existing hermeneutical orientations, which confirm the importance of empirical descriptions on a multitude of levels.A possible embodiment hereof can be found in a pragmatic theological exploration of the occurrence of the hashtag keyword #humanwaste (in the title of the article and linking with the theme of the conference) as a theme or trend on Twitter.
How might we respond? Trending and tweeting #humanwaste
The hashtag keyword #humanwaste is an indication of a so-called trend on Twitter.Trending refers to the process of monitoring, detection and extraction in real time of relevant thematically sorted information from the continuous stream of data originating from online sources (Aiello et al. 2013(Aiello et al. :1268)).Although the process of topic detection is a complex process (Aiello et al. 2013(Aiello et al. :1279)), I have made use of the seven-step methodology (Müller 2004:300) associated with an investigation in transversal rationality, as accommodated within a postfoundational practical theology (Müller 2004:300) in the exploration and description of the occurrence of the search phrase, #humanwaste, on the Twitter platform.
Description of the specific context
The broader context has already been indicated as the popular and constantly growing social media platform, Twitter, arising from millions of users and thousands of tweets that are sent per minute on a worldwide basis.The focus of the conference, however, falls specifically on 'human waste'.Through the use of the @ and # symbols, specific search domains on Twitter were explored for this article by making use of the built-in search facility of this particular platform.The @ symbol is used specifically in the search for individuals, as this symbol is used to indicate a so-called personal name or address, generally known as a Twitter handle: The dialogue between Twitter users occurs through the at-sign (e.g. a user can direct tweets to another user by prefixing a post with an at-sign before the target user's name).(Murthy 2013:n.p.)By means of the # symbol, a search is conducted within the flow of tweets selected with a specific focus and thematically grouped under the concerned 'hashtag' 9 , in this case #humanwaste.In this way, the focus falls on a certain thematic selection from a stream of information, with a 8.'… ordinary theology in some sense "works" for those who own it.It fits their life experience and gives meaning to, and express the meaning they find within, their own lives.It is highly significant for them because it articulates a faith and a spirituality, and incorporates beliefs and ways of believing, that they find to be salvific -healing, saving, making them whole.Ordinary theology helps people spiritually and religiously' (Astley 2013:n.p.).9.'Hashtags are an emergent convention for labelling the topic of a micropost and are a form of metadata incorporated into posts' (Zappavigna 2012:50).
particular focus on a specific theme.Murthy (2013) therefore indicates: Any word(s) preceded by a hash sign '#' are used in Twitter to note a subject, event, or association.Hashtags are an integral part of Twitter's ability to link the conversations of strangers together.(n.p.)
Describing and listening to in-context experiences
After the broad description of the extent and nature of the context -in this case Twitter (http://www.twitter.com)thenext important step is to obtain greater clarity by means of empirical research on the nature and contents of tweets that are associated with the theme of 'human waste'.In a preliminary investigation during November 2013, it became clear that @humanwaste and associated Twitter-handles are linked to, inter alia, individuals who choose the concerned name(s) as a Twitter handle, and whose tweets mostly bear themes of a trivial nature 10 (@humanwaste; http://www.twitter.com).In the search conducted under the hashtag symbol, a stream of tweets with a strong environmental focus are also found, thematically seeking answers for the question as to how the impact and costs of human waste can be positively addressed and counteracted.
Interpretation of experience in collaboration with co-researchers
Entering the third movement, it is important to first provide some methodological remarks as introduction and to serve as orientation.Not only is the description of the experience important, but the interpretation that the narrator attributes to the experience is also significant.In the third movement of research (Müller 2004:302), the focus thus falls on the meaning or interpretation attributed to the experience by coresearchers.In this regard, Twitter provides an opportunity to work with exceptionally authentic data, as the concerned tweets had not been formulated with the idea that they would be used as research material.Moreover, the different reactions to the initial tweet are also available on Twitter, thus representing an interactive discussion on the theme.There is therefore also the possibility to make use of the interactive character and nature of Twitter, using various actions to engage with characters.Murthy (2013) rightly indicates that: Anyone can post a tweet directed to @BarackObama or @CharlieSheen, and many do.Additionally, anyone can instantly see a tweet and respond to it.One does not even need to 'know' the other user or have their permission to direct a tweet at them.(n.p.)
Description of the experience of different traditions of interpretations
Every community displays certain perceptions and behaviours that are shaped by specific traditions and discourses (Müller 2004:302).It is important to identify these perceptions and discourses to develop a greater understanding thereof.
10.Themes informed through strong sexual nuances were amongst the content of tweets referred to.
Events and experiences are also interpreted differently by different communities.The goal is an optimal understanding of the different experiences.In the tracing of the meaning of the various tweets, but also of the discussions that arise from the tweets, a new way of approaching practical theological hermeneutics can be established.In this way, the so-called 'theatre of the text' expressing 'the living text of human action' (Brown 2012:112) is given concrete shape within the context of a new practical theological hermeneutics.
Religious reflection and spiritual aspects with the focus on God's presence as experienced in a specific situation Regarding the theme of #humanwaste, George (2008) makes the following remark: I say that all the world's great faiths instruct their followers how best to manage their excrement, because hygiene is holy.I explain that taking an interest in the culture of sanitation puts them in good company.Mohandas K. Ghandi, though he spent his life trying to rid India of its colonial rulers, nonetheless declared that sanitation was more important than independence.(n.p.) Taking this observation (with the emphasis on the importance of context) seriously is to acknowledge a practical theology that seeks God's presence in specific experiences, rendering a unique contribution.This is not a forced process, however, but rather an honest endeavour to arrive at an understanding of the co-researcher's religious and spiritual interpretation and experience of God's presence or possibly the lack of such an experience (Müller 2004:302).The researcher's own understanding of God's presence in a given situation renders a valuable contribution to the process of interpretation.In this regard, my own Twitter account (@javdberg) also comprises part of the research and discourse, and I am challenged personally in asking myself how I will respond to the theme of #humanwaste in my own tweets.
In-depth description of the experiences through interdisciplinary investigation
In his description of this sixth movement, Müller (2004:303) not only indicates that interdisciplinary discourse comprises an integral part of practical theology, but he also sketches the character of this dialogue: such a discourse may sometimes be difficult and complicated because terms, argumentative strategies, context and the explanation of human behaviour differ from one discipline to the next.Through interdisciplinary discourse, different patterns and actions may, however, be identified within the greater framework leading to a clearer picture, so that greater understanding can be achieved.Interdisciplinary discourse not only includes dialogues between the various theological disciplines, but also dialogues with other scholarly disciplines, traditionally including anthropology, sociology and psychology, amongst others.With regard to the new domain of research, an interdisciplinary discourse with experts from the information technology and media studies environment is a prerequisite for further development.However, it remains important to incorporate the different interpretations in a greater overall picture, by means of integration.This greater picture of different interpretations might well be visualised by the creation of a new vocabulary and the articulation of a new language.The creation of a possible new language and meaning is already acknowledged within the media studies field: New media studies, as much as old media studies, accepts that the communication and representation of human knowledge and experience necessarily involves language and technological systems ... [This] requires us to rethink the intercession of media technologies in human experience.(Dewdney & Ride 2006:58) Currently, and in my opinion rightfully so, many classical theological concepts have lost their meaning (Ganzevoort 2013:5-6) within a new digital world described by some as the 'iPod, YouTube and Wii Play' culture (Laytham 2012:1).
In search of new and fresh expressions of faith, the possibility exists of articulating a public practical theology, and of providing a space for negotiating new meanings between old and new sources and readers (Ganzevoort 2013:19).Being sensitive towards the process of compiling a future research agenda for practical theology, I would agree that in an evolving digital world, the intersection between new media technologies (e.g.Twitter) and human experience provides a relevant and contextual research domain only accessible through interdisciplinary conversation.
Development of an alternative interpretation
The ultimate goal of research is not merely to describe and interpret experiences or events, but also to interpret them in such a manner that new meaning will be associated with them.Thus, the focus does not fall on generalisations, but rather on deconstruction and emancipation so that greater possibilities of application will ultimately result.As Müller (2004) indicates: It rather happens on the basis of a holistic understanding and as a social-constructionist process in which all the co-researchers are invited and engaged in the creation of new meaning.(p.304) An endeavour is thus made to obtain a new angle of incidence on the acquired knowledge and understanding by means of the foregoing process and interdisciplinary discourse to arrive at new interpretations and meaning.Such new interpretations and meaning are found and also further developed in tracing #humanwaste on Twitter.As a practical means of involvement, and arising from the project in the development of alternative meaning 11 , for example, one could consider the possibility of requesting conference delegates to provide comments by means of own tweets with the inclusion of the hashtag keyword #humanwaste.This could be considered part of the contributions made during a conference such as this one, thereby emphasising that 'social media could be used to manipulate the course of online and offline human dynamics' (Aiello et al. 2013(Aiello et al. :1268)).
11. George (2008:n.p.) is articulating an alternative meaning to the theme of human waste with the following remark: 'How a society disposes of its human excrement is an indication of how it treats its humans too'.
Conclusion
Ironically, in reading the contents of some of the tweets on Twitter, one may wonder whether those particular tweets have not themselves become part of the problem of #humanwaste?In fact, the problem of space-litter referred to in the introduction may well comprise a problem of cyberspace.A practical theological research agenda with sensitivity towards the future could thus indeed accord priority to the occurrence, influence and meaning of cyberlitter as a new expression of human waste.Except for charting this new possible and challenging research avenue, the aim of this practical theological contribution was to explore and trace the possible role of a social media platform like Twitter in addressing the theme of #humanwaste.The engagement with the Twitter-context in this article portrayed a practical theological engagement and strategic involvement with new empirical realities and hermeneutic outcomes.
Theology offers a language to speak about this reality, and can help articulate what is going on in the depths of popular culture ... it is wise to remain open to the more discerning markers of culture.Even of popular culture.(p.294) | 5,856.2 | 2014-10-28T00:00:00.000 | [
"Philosophy"
] |
EPS15-AS1 Inhibits AKR1B1 Expression to Enhance Ferroptosis in Hepatocellular Carcinoma Cells
Epidermal growth factor receptor substrate 15 (EPS15) is part of the EGFR pathway and has been implicated in various tumorigenesis. Increasing evidence suggests that long noncoding RNA (lncRNA) plays an essential role in liver hepatocellular carcinoma (LIHC) by regulating the expression of proteins and genes. Through analysis of the cancer genome atlas (TCGA) database, we found that EPS15 is highly expressed in LIHC tissue, and lncRNA EPS15-antisense1 (EPS15-AS1) decreased in LIHC cell lines. However, the function of EPS15-AS1 in LIHC is still unknown. When EPS15-AS1 was overexpressed in HepG2 cell lines, the expression of EPS15 was reduced and cell activity and invasiveness were inhibited. In addition, we observed an increase in Fe2+ ion and lipid peroxidation after overexpression of EPS15-AS1, and further analysis showed that the susceptibility to ferroptosis increased. Aldo-keto reductase family 1 member B 1 (AKR1B1) belongs to the aldo/keto reductase superfamily and is involved in maintaining the cellular redox balance. Survival analysis revealed that patients with a higher level of AKR1B1 have a lower survival rate in the TCGA database. We also found that EPS15 enhanced the AKR1B1 expression in LIHC, and AKR1B1 had the ability to promote cell invasiveness. Moreover, overexpression of AKR1B1 alleviated the promoting effect of EPS15-AS1 on ferroptosis. Therefore, EPS15-AS1 can induce ferroptosis in hepatocellular carcinoma cells by inhibiting the expression of EPS15 and AKR1B1 and disrupting the redox balance. EPS15 and AKR1B1 may serve as biomarkers for diagnosis and lncRNA EPS15-AS1 potential drug for LIHC.
Introduction
Liver hepatocellular carcinoma (LIHC) is the most common type of primary liver cancer, accounting for 90% of liver cancers [1,2].Chronic infection due to hepatitis B and C viruses is a common risk factor for LIHC, which has become the cancer with the highest recurrence rate worldwide [1][2][3].Additionally, obesity, diabetes, alcohol consumption, and other risk factors for liver injury can further promote the development of LIHC [3,4].The etiology of LIHC is closely related to environmental factors and requires adaptation to changing environmental conditions, in which epigenetic aberrations play a critical role in the development and progression of LIHC [4].DNA methylation and acetylation, alterations in microRNAs and long noncoding RNAs (lncRNAs), and chromatin modifications are the most common epigenetic modifications that also lead to changes in the liver epigenome [4,5].The accumulation of these epigenetic alterations leads to carcinogenesis, progression, and metastasis.LncRNAs are defined as noncoding RNAs greater than 200 nucleotides in length [6].LncRNAs mainly include enhancer RNAs, sense or antisense transcripts, and intergenic transcripts [6,7].LncRNAs are thought to have multiple functions, including the organization of nuclear structural domains,
Ivyspring
International Publisher transcriptional regulation, and regulation of protein or RNA molecules [7].However, the biological processes of the vast majority of lncRNAs remain unknown.
Receptor tyrosine kinases (RTKs) are a family of signaling proteins in which growth factor RTK-mediated cell signaling pathways are essential in maintaining normal physiological functions [8].However, their aberrant activation promotes tumor development [9].Currently, epidermal growth factor receptor (EGFR) is one of the most studied RTK signaling proteins and is closely associated with the development of multiple human tumors [10,11].The epidermal growth factor receptor pathway substrate 15 (EPS15) was originally identified as a substrate for the EGFR signaling pathway [12].Notably, in acute myelogenous leukemias, the EPS15 gene was found to rearrange at t (1;11) (p32, q23), suggesting a role for EPS15 in tumorigenesis and development [13].In addition, Eps15 was also found to be involved in endocytosis and cell growth regulation [14].Therefore, EPS15 may affect the signaling efficiency of EGFR and be involved in the development of some tumors.
LncRNAs can be categorized into five classes, based on their relative position to nearby coding genes: antisense lncRNAs, intronic lncRNAs, intergenic lncRNAs, bidirectional lncRNAs, and promoter-associated lncRNAs, which regulate genes expression in very different ways [7,15].Antisense lncRNAs are transcribed from the antisense strand of a gene (usually a protein-coding gene) and overlap with the mRNA of the gene [15].The presence and positional specificity of this naturally occurring antisense lncRNA suggest that it tends to act more closely with the sense strand than with target genes in general [16].According to the current study, the mechanisms by which AS-lncRNAs affect gene expression on the sense strand can be divided into three categories [16]: 1) The transcription process of AS-lncRNAs represses sense-strand gene expression.
2) AS-lncRNAs bind to DNA or histone-modifying enzymes and regulate the epigenetics of sense-strand genes, thereby affecting gene expression.3) AS-lncRNAs bind to sense-strand mRNA through base complementary pairing and affect variable splicing of mRNA, thereby affecting protein translation and function.LncRNA EPS15-antisense1 (EPS15-AS1) is an antisense lncRNA of EPS15, which has been reported to inhibit EPS15 expression and induce apoptosis [17].However, the role of EPS15-AS1 in LIHC and the mechanism are still unclear.
Ferroptosis is a novel type of programmed cell death triggered by iron-dependent lipid peroxidation, ultimately leading to cell membrane damage [18,19].Uncontrolled lipid peroxidation is a significant feature of ferroptosis, resulting from the interaction between the ferroptosis-inducing and defense systems [19].Ferroptosis is activated when the promoters of ferroptosis significantly exceed the antioxidant capacity of the defense system [19].Some oncogenes and oncogenic signaling can activate the antioxidant or ferroptosis defense system, favoring tumorigenesis, progression, metastasis, and resistance [20,21].Therefore, this study aimed to analyze the expression of EPS15-AS1 and EPS15 in LIHC and to investigate whether EPS15-AS1 has the ability to regulate EPS15 and the sensitivity of LIHC to ferroptosis.
Cell Culture
A total of three cell lines, Huh7, HepG2, and HL7702, were used in the current study.HL7702 is a normal human hepatocyte cell line, and Huh7 and HepG2 are human LIHC cell lines.All cell lines were purchased from the Shanghai Institute of Biochemistry and Cell Biology (SIBCB) and cultured in Dulbecco's modified Eagle's medium (DMEM) (HyClone, USA) containing 10% fetal bovine serum (FBS) (Gibco, USA).All cells were cultured at 37 °C and 5% CO2 in a humid incubator (Thermo Fisher, USA).When the cultured cells were fused to 80-90%, cells were digested with 0.25% trypsin (NCM Biotech, China) and passaged in 1 to 3 passages.
Western Blot Analysis
After incubation under different intervention conditions, all cells were collected and lysed using RIPA lysis buffer (NCM Biotech, China).After 3 minutes of lysis, the lysates were centrifuged at 12,000 rpm for 10 minutes, and the supernatant was collected for western blot analysis.The protein concentrations were quantified using the BCA kit (NCM Biotech, China) to keep the total amount of protein consistent across the different experimental groups.Finally, 20 μg of protein per group was used for western blot analysis.10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (10% SDS-PAGE) (Vazyme, China) was applied to separate the protein, and then the protein was transferred to nitrocellulose membranes (Millipore, USA) at 300 mA for 1 hour.The nitrocellulose membranes containing protein were blocked with 5% nonfat powdered milk (Beyotime, China).Then, the membranes were incubated with the corresponding primary antibody at 4 °C for 12 h.The primary antibodies against EPS15 (dilution ratio, 1:1000), β-Actin (dilution ratio, 1:20000) and AKR1B1 (dilution ratio 1:1000) were purchased from ABclonal (#A9814, #AC038, #A18031).Next, the nitrocellulose membranes were washed with TBS-Tween and incubated with secondary antibody (HRP-conjugated goat anti-rabbit IgG, ABclonal, #AS014, 1:10,000).Finally, the chemiluminescent HRP substrate (NCM Biotech, China) was applied for imaging, and the image was detected by a chemiluminescence detection system (Bio-Rad, USA).
Invasion Assay
The trans-well chamber used in the current study was purchased from NEST Biotech (China, #725201).HepG2 cells were digested and resuspended at a concentration of 200,000/ml, and then, 100 μL of the cell suspension was seeded in each upper chamber of the trans-well.In addition, 500 μL of DMEM containing 10% FBS was added to the lower chamber of the trans-well.Finally, the cells were cultured for 24 hours, and the trans-well chamber with cells was collected and stained with 0.1% crystal violet solution.
Wound Healing Assay
HepG2 cells (1 × 10 5 per well) were seeded in 6-well plates for the migration assay.Until the cells were fused to 90-95%, the monolayer of cells was scratched with a 200 μL plastic tip.Then, the cells were rinsed three times with PBS and cultured with DMEM containing 5% FBS for 12 hours.Images were taken at 0 and 12 hours for analysis of migration distance: migration distance = (initial wound widthwound width at each time point)/2 (μm).
Flow Cytometry
Mitochondrial membrane potential staining was performed using the JC-1 staining kit, which was purchased from Beyotime Biotech, China (#C2006).Lipid peroxidation was detected using a lipid peroxidation probe-BDP 581/591 C11 kit (Dojindo, Japan, #L267).In addition, intracellular Fe 2+ ions were detected with an iron ion detection probe-FerroOrange kit (Dojindo, #F374), and an Annexin V-FITC/PI Kit (Dojindo, #AD10) was used to detect the percentage of cells with damaged cell membranes.All staining was performed according to the corresponding manufacturer's instructions.
Transfection and Construction of Overexpression Cell Lines
The expression vector used in the current study was pcDNA3.1, and Lipofectamine 2000 (Thermo Fisher, USA) was used to transfect pcDNA3.1.In addition, the three overexpression plasmids, including overexpression EPS15 (OE_EPS15), overexpression EPS15-AS1 (OE_EPS15-AS1), and overexpression AKR1B1 (OE_AKR1B1), were all purchased from Sangon Biotech (Shanghai, China).The overexpression plasmids and Lipofectamine 2000 were mixed separately with 50 µl of DMEM and left to stand for 5 minutes.Then, the plasmid and Lipofectamine 2000 were mixed and incubated for 20 minutes at room temperature, and the transfection complex was immediately added to the HepG2 culture plate.Then, the HepG2 cells and plasmids were cultured together for 24 hours, switched to normal DMEM containing 10% FBS, and cultured for another 24 hours.After obtaining overexpression cell lines, gene expression levels were examined using RT-qPCR and western blot analysis.
Online Databases and Bioinformatics Analysis
The GEPIA2 online analysis tool (http://gepia2 .cancer-pku.cn/#index) is a tool for analyzing The Cancer Genome Atlas (TCGA) database and was used to perform survival analysis and to compare the expression of EPS15 and AKR1B1 in LIHC tissues and adjacent normal tissues.
To find the correlation between EPS15 and ferroptosis-associated proteins, an interaction network between EPS15 and ferroptosis-associated proteins was constructed by using STRING (https:// cn.string-db.org/),which is a protein-protein interaction network functional enrichment analysis website.In addition, ferroptosis-associated protein was obtained from FerrDb (http://www.zhounan.org/ferrdb/current/), a database that summarizes the latest ferroptosis-associated markers and genes.Cytoscape 3.10.0was used to show the interaction network diagram.
Statistical Analysis
GraphPad Prism (version 9.0) was applied to conduct statistical analysis.The mean ± standard deviation was calculated to describe continuous variables.A t test was used to compare the two groups, and one-way ANOVA followed by Dunnett's multiple comparisons test was used for statistical analysis among multiple groups.P < 0.05 was considered to indicate a significant difference.
EPS15-AS1 expression was decreased in LIHC cells
The EGFR signaling pathway is one of mammalian cell physiology's most important signaling pathways [10].It promotes tumorigenesis mainly by affecting tumor cell proliferation, angiogenesis, tumor invasion, and metastasis.Aberrant activation of EGFR signaling pathways is one of the mechanisms of tumor development.It has been reported that the EPS15 gene encodes a protein part of the EGFR signaling pathway [12].In this study, according to the data of LIHC tissues in the TCGA database, we observed that the expression of EPS15 in LIHC tissue was higher than that in normal liver tissue (p = 0.055) (Figure 1A).
Additionally, the patients with high expression of EPS15 had a lower survival rate than those with low EPS15 expression (log rank p = 0.059) (Figure 1B).Comparison between the normal hepatocyte cell line HL7702 and the LIHC cell lines HepG2 and Huh7 further revealed that the gene transcription and protein expression levels of EPS15 were higher in LIHC cells than in normal hepatocyte cells (Figure 1C and 1D).Interestingly, we also found that the transcription level of the lncRNA EPS15-AS1 was significantly decreased in HepG2 and Huh7 cells compared with that in HL7702 cells (Figure 1E).Therefore, these results suggest that the expression level of EPS15 is closely related to the development of LIHC and that EPS15-AS1 may be involved in the regulation of EPS15 expression during the development of LIHC.
EPS15-AS1 inhibited LIHC cell activity by decreasing EPS15 expression
Antisense lncRNAs are transcribed from the antisense strand of a protein-coding gene and overlap with the mRNA of the gene, and this structure of antisense lncRNAs provides the basis for the regulation of gene expression [16].Thus, we hypothesize that EPS15-AS1 can modulate EPS15 expression in LIHC cells, which in turn affects the invasiveness of LIHC cells.To verify the effects of EPS15-AS1 in HepG2, overexpression of EPS15-AS1 was performed using the pcDNA3.1 plasmid.RT-qPCR and western blotting analysis showed that EPS15 transcripts were significantly reduced in the EPS15-AS1 overexpression group (OE_EPS15-AS1), and the level of EPS15 proteins was also decreased (Figure 2A and 2B).In addition, invasion assays showed that the number of cells passing through the trans-well chambers was reduced in the OE_EPS15-AS1 group (Figure 2C and 2D), and wound healing assays also showed a significant decrease in the migratory ability of the OE_EPS15-AS1 group compared with the control group (vector group) (Figure 2E and 2F).These results suggest that EPS15-AS1 can inhibit LIHC cell activity by affecting the expression of EPS15.
Finally, both the invasion and wound healing assays confirmed that elevated EPS15 promoted the invasiveness of hepatocellular carcinoma, but overexpression of EPS15-AS1 inhibited HepG2 activity by suppressing EPS15 expression (Figure 3C-F).Therefore, all these results suggest that EPS15 has the ability to promote LIHC cell invasiveness, whereas overexpression of EPS15-AS1 can inhibit LIHC cell activity and invasiveness by downregulating EPS15 expression.
EPS15-AS1 increases the susceptibility of LIHC cells to ferroptosis
During the previous experiments, we observed that the cellular status became significantly worse with overexpression of EPS15-AS1 or inhibition of EPS15 expression.Moreover, we also found significant changes in intracellular Fe 2+ ion levels (Figure 4A), leading us to suspect that EPS15 may influence the relationship between LIHC and ferroptosis.As shown in Figure 4A, intracellular Fe 2+ increased in the OE_EPS15-AS1 group and decreased in the OE_EPS15 group compared with the Vector group.Ferroptosis is an iron-dependent programmed cell death characterized by mitochondrial dysfunction and uncontrolled lipid peroxidation.JC-1 is a mitochondrial membrane potential staining reagent, and BDP is a lipid peroxidation probe.As shown in Figures 4B and 4C, overexpression of EPS15-AS1 significantly promoted lipid peroxidation and mitochondrial dysfunction in HepG2 cells, whereas overexpression of EPS15 attenuated the effects of EPS15-AS1.Finally, to observe whether the changes in mitochondria and lipids would eventually lead to cell death, propidium iodide (PI) staining was performed.PI is an agent that can bind to DNA and usually cannot pass through normal living cell membranes but can pass through damaged cell membranes or dead cells.As shown in Figure 4D, overexpression of EPS15-AS1 led to ferroptosis in LIHC cells, whereas expression of EPS15 alleviated the ferroptosis induced by overexpression of EPS15-AS1.Moreover, when OE_EPS15-AS1 cells were treated with ferroptosis inhibitors Ferrostain-1 and Deferasirox, the percentage of dead cells was decreased in the Ferrostain-1 and Deferasirox groups (Figure S1).These results indicated that EPS15-AS1 increases the susceptibility of LIHC cells to ferroptosis by inhibiting the transcription of EPS15.
EPS15 enhances LIHC cell activity by promoting the expression of AKR1B1
To investigate the mechanism between EPS15 and ferroptosis, an interaction network between EPS15 and ferroptosis-associated proteins was constructed (Figure 5A), and according to the interaction network, EGFR, ARF6, GJA1, NEDD4, TFRC, UBC, and TFAP2A were significantly correlated with EPS15 (marked by red circles in Figure 5A).Interestingly, EGFR is also a ferroptosis-related protein and is associated with a large number of other ferroptosis-associated proteins in the network, as shown in Figure 5A, where interacting straight lines cluster around EGFR.We then constructed a subnetwork consisting of EGFR-associated proteins from the network of Figure 5A (Figure 5B).The aldo-keto reductase family 1 member B1 (AKR1B1) gene encodes a member of the aldo/keto reductase superfamily, and this gene catalyzes the reduction of a number of aldehydes [22].Recently, AKR1B1 was reported to promote drug resistance to EGFR TKIs in lung cancer cell lines [23].The current study also found that AKR1B1 correlates with EPS15 and EGFR (Figure 5B).In addition, we observed that in the TCGA database, the expression of AKR1B1 in LIHC was higher than that in normal tissue (p < 0.05) (Figure 5C).The patients with high AKR1B1 expression had a lower survival rate than the patients with low AKR1B1 expression (log-rank p < 0.05) (Figure 5D).Then, we further found that overexpression of EPS15 in LIHC increased the expression of AKR1B1 by using western blotting analysis, whereas the expression of AKR1B1 was reduced in the OE_EPS15-AS1 group (Figure 5E).Therefore, we conclude that EPS15 can promote AKR1B1 expression in LIHC.
To further clarify whether EPS15 promotes LIHC development through AKR1B1, we constructed OE_EPS15-AS1 HepG2 cell lines and OE_EPS15-AS1 + OE_AKR1B1 HepG2 cell lines in LIHC (Figure 6A).Although EPS15-AS1 inhibited cell migration in wound healing assays, overexpression of AKR1B1 reversed the inhibitory effect of EPS15-AS1 (Figure 6B).In the Fe 2+ detection assay, overexpression of AKR1B1 significantly reduced the elevated Fe 2+ caused by overexpression of EPS15-AS1 (Figure 6C).Detection of lipid peroxidation and mitochondrial membrane potential also confirmed that EPS15-AS1 enhanced lipid peroxidation and disrupted mitochondrial membrane potential, and overexpression of AKR1B1 significantly inhibited this damage (Figure 6D and 6E).PI staining further demonstrated that AKR1B1 reduced the ratio of dead cells in the OE_EPS15-AS1 + OE_AKR1B1 group compared with the OE_EPS15-AS1 group (Figure 6F).In addition, Zhang et al. reported that AKR1B1 promotes glutathione (GSH) de novo synthesis to protects against oxidative damage, and glutathione peroxidase 4 (GPX4) is able to utilize GSH to reduce peroxidized lipids to non-toxic lipids, thereby protecting cells from ferroptosis [22].Therefore, the intracellular GSH was also detected.The results showed that intracellular GSH decreased after EPS-AS1 overexpression, increased in OE_AKR1B1 group, and the inhibitory effect of EPS-AS1 on GSH was attenuated in OE_EPS15-AS1 + OE_AKR1B1 group (Figure S2).These results suggested that AKR1B1 can promote LIHC progression and that EPS15-AS1 increases the susceptibility of LIHC cells to ferroptosis by inhibiting the transcription of EPS15 and AKR1B1.
Discussion
LIHC is one of the most common malignant tumors, but metastasis and postoperative recurrence seriously affect the long-term prognosis [3].In addition, resistance to chemotherapeutic agents is an important reason for the low efficacy of radiotherapy and chemotherapy in hepatocellular carcinoma patients [24].Therefore, an increasing number of researchers believe that combination gene therapy may be a potential direction for the treatment of LIHC [25].
Approximately 90% of genes in eukaryotic genomes are transcribed, with only 1-2% of transcribed genes coding for proteins, while most other genes are transcribed as noncoding RNAs [26,27].Noncoding RNAs play an important role at the transcriptional and posttranscriptional levels of encoded genes [27].In the current study, we found that EPS15 was closely associated with the progression of LIHC by analyzing the TCGA database.With further analysis, we found that the expression level of EPS15-AS1 was reduced in LIHC cells.Liu et al. also found that EPS15-AS1 was expressed at low levels in liver cancer cells, and overexpression of EPS15-AS1 reduced EPS15 expression and promoted apoptosis of liver cancer cells [17].The current study showed that EPS15 was increased in LIHC cell lines, including HepG2 and Huh7, compared with the normal hepatocyte cell line HL7702.We also demonstrated that overexpression of EPS15-AS1 inhibited EPS15 expression and weakened the invasiveness of hepatocellular carcinoma cell lines.
However, we found that overexpression of EPS15-AS1 induced ferroptosis but not apoptosis in LIHC cells.This difference in conclusions may be due to the different experimental methods used to detect cell death between the two studies.Annexin V-FITC/PI was initially invented to detect the process of apoptosis [28].The mechanism of this assay is as follows: in living cells, phosphatidylserine (PS) is located on the inner side of the cell membrane, but in early apoptotic cells, the PS flips from the inner side of the cell membrane to the surface of the cell membrane.Annexin-V, a Ca 2+ -dependent PS-binding protein, can bind to the cell membrane during the early stage of apoptosis by binding to the PS exposed outside of cells.In the late stage of apoptosis, the cell membrane is severely damaged, and Annexin-V can freely pass through the cell membrane [28].In addition, propidium iodide (PI) was used to distinguish surviving cells from necrotic or late-stage apoptotic cells.PI is a nucleic acid dye that does not pass through the intact cell membranes of normal or early apoptotic cells but can pass through the cell membranes of late apoptotic and necrotic cells and stain the cell nucleus [28].Therefore, PI is excluded from living cells (Annexin V-/PI-) and early apoptotic cells (Annexin V+/PI-), while late apoptotic and necrotic cells are stained double-positive (Annexin V+/PI+).Interestingly, during ferroptosis, cell membranes are subjected to uncontrolled lipid peroxidation, ultimately causing cell membrane disruption.Finally, cells undergoing ferroptosis were stained double-positive (Annexin V+/PI+).Thus, Annexin V-FITC/PI cannot distinguish apoptosis and ferroptosis, and other experiments are needed for additional validation.
In the current study, we further examined intracellular Fe 2+ , lipid peroxidation, and mitochondrial membrane potential to determine what kind of cell death is involved.After overexpression of EPS15-AS1, intracellular Fe 2+ and lipid peroxidation were enhanced, and mitochondrial membrane potential was disrupted.Moreover, co-overexpression of EPS15-AS1 and EPS15 attenuated the damaging effects of EPS15-AS1.With bioinformatic analysis, we further found that AKR1B1, which can influence ferroptosis, was associated with EPS15.AKR1B1 was overexpressed in LIHC cells, and overexpression of EPS15-AS1 inhibited AKR1B1 expression.Moreover, overexpression of EPS15-AS1 and AKR1B1 in HepG2 cells showed similar invasiveness to normal HepG2 cells and had normal levels of Fe 2+ , lipid peroxidation, and mitochondrial membrane potential.This confirmed that AKR1B1 can promote LIHC cell activity against ferroptosis.In addition, Zhang et al. also reported that AKR1B1 has the ability to promote resistance to EGFR-targeted therapy in lung cancer by enhancing glutathione de novo synthesis [23].
However, the current study had some limitations as well.The mechanism by which EPS15 promotes AKR1B1 is still unclear.Furthermore, whether AKR1B1 also promotes LIHC cell activity by facilitating the glutathione de novo synthesis is unknown.Therefore, future studies should further clarify the exact mechanisms of EPS15 and AKR1B1 promoting hepatocellular carcinoma.
Conclusion
In conclusion, the current study showed that EPS15-AS1 expression had an inhibitory effect on hepatocellular carcinoma.Further investigation demonstrated that EPS15-AS1 reduced EPS15 expression and thus downregulated AKR1B1 expression, which finally inhibited the invasiveness of LIHC cells and induced ferroptosis in LIHC.In general, EPS15-AS1 may be a candidate target for hepatocellular carcinoma and may be a therapeutic strategy to overcome drug resistance.
Figure 1 .
Figure 1.Differential expression of EPS15 and EPS15-AS1 in liver cancer and normal liver tissue.(A) Comparison of EPS15 expression in hepatocellular carcinoma and normal tissues from the TCGA cancer genome database (P=0.055).(B) Kaplan-Meier survival analysis of patients with high and low expression of EPS15 (log rank p=0.059).(C) RT-qPCR analysis of EPS15 expression in HL7702, HepG2 and Huh7 cell lines (GAPDH was set as internal control).(D) Western blot analysis of EPS15 expression in HL7702, HepG2 and Huh7 cell lines (β-actin was set as internal control).(E) RT-qPCR analysis of LncRNA EPS15-AS1 expression.(*p<0.05 and **p<0.01,n=3 each group).
Figure 5 .
Figure 5. AKR1B1 is involved in the promotion effects of EPS15 on LIHC.(A) Interaction network between EPS15 and ferroptosis-associated proteins.(B) The subnetwork of EGFR and ferroptosis-associated proteins was extracted from the EPS15 network.(C) AKR1B1 expression in hepatocellular carcinoma and normal tissues from the TCGA cancer genome database.(D) Kaplan-Meier survival analysis of patients with high and low expression of AKR1B1.(E) Western blot analysis of AKR1B1.(*p<0.05 and **p<0.01,n=3 each group). | 5,376 | 2024-01-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
Comparative study on compact quantum circuits of hybrid quantum-classical algorithms for quantum impurity models
Predicting the properties of strongly correlated materials is a significant challenge in condensed matter theory. The widely used dynamical mean-field theory faces difficulty in solving quantum impurity models numerically. Hybrid quantum--classical algorithms such as variational quantum eigensolver emerge as a potential solution for quantum impurity models. A common challenge in these algorithms is the rapid growth of the number of variational parameters with the number of spin-orbitals in the impurity. In our approach to this problem, we develop compact ansatzes using a combination of two different strategies. First, we employ compact physics-inspired ansatz, $k$-unitary cluster Jastrow ansatz, developed in the field of quantum chemistry. Second, we eliminate largely redundant variational parameters of physics-inspired ansatzes associated with bath sites based on physical intuition. This is based on the fact that a quantum impurity model with a star-like geometry has no direct hopping between bath sites. We benchmark the accuracy of these ansatzes for both ground-state energy and dynamic quantities by solving typical quantum impurity models with/without shot noise. The results suggest that we can maintain the accuracy of ground-state energy while we drop the number of variational parameters associated with bath sites. Furthermore, we demonstrate that a moment expansion, when combined with the proposed ansatzes, can calculate the imaginary-time Green's functions under the influence of shot noise. This study demonstrates the potential for addressing complex impurity models in large-scale quantum simulations with fewer variational parameters without sacrificing accuracy.
I. INTRODUCTION
Accurately predicting the properties of strongly correlated materials poses a significant challenge in condensed matter theory, including long-standing challenges in the field, such as the mechanism of high-temperature superconductivity [1,2].Simulating these strongly correlated materials is difficult due to quantum superposition, which exponentially increases the accessible Hilbert space with the number of particles.Even with quantum computers with more than a hundred logical qubits, simulating solids with large numbers of degrees of freedom is still challenging.Quantum embedding theories, such as dynamical mean-field theory (DMFT) [3,4] or density matrix embedding theory (DMET) [5,6], aims to address this issue by limiting the correlated degrees of freedom in solid materials based on local approximation.
In DMFT, widely used in condensed matter physics, the original lattice system is divided into impurities with local interactions and a dynamical environment called a bath.This model is called a quantum impurity model.A self-consistent calculation is performed to update the parameters associated with the bath until the local Green's function defined on impurity matches that of the original lattice system with the dynamical mean-field.DMFT allows us to compute the single-particle excitation spectrum and successfully describes transitions from metallic *<EMAIL_ADDRESS>to Mott insulating behavior.The biggest numerical bottleneck in DMFT calculations is solving the correlated quantum impurity models, specifically computing local Green's functions for these interacting problems.While state-of-the-art classical algorithms have been adapted for use as impurity solvers, such as tensor networks [7][8][9] or quantum Monte Carlo methods [10], their applications are limited to models with only a few impurity and/or bath orbitals [7][8][9].This challenge stems from the exponential increase in quantum entanglement entropy and the notorious negative sign problem.
To exploit the growing potential for solving quantum impurity models on quantum devices, quantum algorithms based on quantum phase estimation [11,12] and adiabatic algorithms [13,14] have been proposed [15].Their practical implementation, however, may take decades because it requires large-scale error correction schemes.This led to a growing interest in variational quantum algorithms [16,17] for near-term quantum computers with limited hardware resources, often dubbed 'noisy intermediate-scale quantum' (NISQ) devices [18].A number of proof-of-principle demonstrations of solving quantum impurity models using NISQ devices have been conducted [19][20][21][22][23].
In near-term quantum algorithms, such as those in the NISQ era, it is crucial to utilize limited hardware resources effectively.Therefore, there is a need to discretize a continuous bath with fewer bath sites.This reduction can be achieved through the use of imaginary time formalism in DMFT [24][25][26][27][28].For example, a recent estimate for 20-orbital impurity models for iron-based supercon-ductors indicates that about 300 bath sites are sufficient for accurate discretization in the imaginary-time formalism [26].
Once this finite Hamiltonian representation of the quantum impurity model has been found, it is now in principle amenable to solution on a quantum device.For variational quantum algorithms, the first challenge is to define an appropriate ansatz which is flexible enough to span the solution to the problem, able to be efficiently evaluated via unitary quantum gates, and where the number of variational parameters, N P , does not grow prohibitively as the number of spin-orbitals N SO increases.Physics-inspired ansatzes based on unitary coupled cluster (UCC) methods [29][30][31] are widely used in previous studies for quantum impurity models [23,32,33].Among the family of UCC methods, for the unitary coupled cluster with generalized singles and doubles (UCCGSD) [34], N P grows as O(N 4 SO ).The computational times for computing imaginary-time Green's function grow even more rapidly, e.g., as O(N depth N 2 P ) [35] using the UCCGSD [34] and the variational quantum simulation (VQS) [17,36], where the depth of the circuit N depth ∝ N P .Thus, more compact ansatzes (circuits) are an important research direction for the success of simulating impurity models on quantum devices.
In this study, we develop compact ansatzes using a combination of two different strategies.First, we employ the k-unitary coupled Jastrow (k-uCJ) ansatz originally proposed for quantum chemistry, where N P scales only as O(N 2 SO ) [37].Second, we drop largely redundant variational parameters in both the UCCGSD and the k-uCJ ansatz based on physical intuition.This exploits structures in the Hamiltonian which are specific to quantum impurity models with a star-like bath geometry where the bath sites are connected via the Hamiltonian only through the impurity (see Fig. 1).In particular, we eliminate part of the two-particle excitations associated with direct excitations between bath sites, which does not change the scaling of N P but reduces the coefficient for a large number of bath sites.The scalings of the proposed ansatzes are summarized in Table I.We numerically demonstrate that the compact ansatzes describe ground-state energies and dynamic quantities, especially imaginary-time Green's functions, without compromising accuracy for typical quantum impurity models with/without shot noise, validating their potential in quantum impurity models.
The following outlines the contents of each section.Section II provides an overview of Green's functions and variational quantum algorithms for computing groundstate energy and dynamic quantities.This section also introduces the physics-inspired ansatzes used in this study.Section III introduces compact quantum circuits for quantum impurity models and compares the scaling of their variational parameters to those of the original ansatzes.Section IV compares the accuracies of groundstate energy and dynamic quantities such as spectral functions and imaginary-time Green's functions among ansatzes for typical quantum impurity models.Section V explores the effect of finite shot noise within the singlesite impurity model.Section VI reviews our results, compares them to existing methods, and highlights areas for future research.
II. REVIEW OF GREEN'S FUNCTIONS AND VARIATIONAL QUANTUM ALGORITHMS
A. Green's function We study a fermionic system in the grand-canonical ensemble, represented by the Hamiltonian H, with where c i /c † i are annihilation/creation operators for spinorbital i, and N represents the total number of spinorbitals.The hopping matrix, Coulomb interaction tensor, and chemical potential are denoted by t ij , U ijkl , and µ, respectively.The retarded (fermionic) Green's function is defined as where ĉa (t) = e iHt ĉa e −iHt and ĉ † b (t) = e iHt ĉ † b e −iHt represent the annihilation and creation operators for the spinorbitals a and b, respectively, in the Heisenberg representation.The θ(t) denotes the Heaviside step function.In this paper, we use ℏ = k B = 1.The thermal expectation, symbolized by ⟨• • •⟩, is evaluated in the grand canonical ensemble.
The retarded Green's function can be continued to the real frequency axis as where ω is a real frequency, while the imaginary-time Green's function is defined as where ĉa (τ ) = e τ H ĉa e −τ H .Note that the imaginarytime Green's function is anti-periodic as G ab (τ + β) = −G ab (τ ).The Fourier transform of the imaginary-time Green's function, known as the Matsubara Green's function, is given by where ω = (2n + 1)π/β and n ∈ N and β = 1/T .
The Matsubara Green's function G(iω) can be analytically continued from the imaginary axis to the full complex plane as G ab (z).The analytically continued G ab (z) has the spectral representation with where z is a complex number and n, m runs over all eigenstates of the system with E m and E n being corresponding eigenvalues of H. On the real axis, these eigenvalues define individual poles for a finite system, or combine to form a branch cut for an infinite system.The retarded and advanced Green's functions are given by the value of G ab (z) just above/below the real axis.
Due to the branch cut on the real axis, G R ab (ω) ̸ = G A ab (ω) in general.The following relationship holds between the spectral function and the retarded and advanced Green's functions: where we used the formula 1/(x+i0 + ) = P(1/x)−iπδ(x), and P stands for the principal value.We now consider the limit of T → 0, where the ensemble average is restricted to the ground state(s) Ψ G .At sufficiently low temperatures, Eq. ( 4) can be rewritten as where A + = ĉa and B + = ĉ † b for 0 < τ < β/2, and A − = ĉ † b and B − = ĉa for β/2 < τ < 0. The signs ∓ are for τ > 0 and τ < 0, respectively, and In the presence of degenerate ground states, Eq. ( 11) should be averaged over all such states.In general, |G ab (τ )| decays exponentially in an insulating system, while algebraic in a metallic system.To ensure that G ab (τ ) is sufficiently small at the boundary, we need to increase β, which determines the upper limit of time evolution.
B. Variational quantum algorithms
In quantum computing, it is necessary to convert fermionic operators into qubit representations.There are several methods for this, such as the Jordan-Wigner transformation [38], and the Bravyi-Kitaev transformation [39,40].In this study, we use the Jordan-Wigner transformation given by ĉ 1. Ground-state calculation using VQE We use variational quantum eigensolver (VQE) [16,41].It begins by preparing an initial state |Ψ init ⟩ on a quantum computer.Then, a unitary operator described by a parameterized circuit with variational parameters θ, denoted as U (θ), is applied to the initial state, producing a quantum state, |Ψ(θ)⟩.Subsequently, the expectation value of each term in the Hamiltonian is measured using the quantum computer.This measured data is accumulated to compute the total expectation value of the Hamiltonian, ⟨H⟩, on a classical computer.The variational parameters are updated on the classical computer to minimize ⟨H⟩, and the process is iterated until the parameters are stably minimized.Provided the ansatz has sufficiently high expressive power and the optimization is carried out well using an appropriate initial state, the variational quantum state |Ψ(θ * )⟩ with optimized variational parameters θ * approximates the ground state |Ψ G ⟩ accurately.The success of the VQE therefore relies on finding an appropriate representation of the quantum state in terms of a sufficiently compact parameteric quantum circuit that can be optimized classically.
Recursive VQE for spectral moments
We detail here an approach to extend the scope of VQE to optimize the dynamics of the single-particle excitation spectrum via a compact moment expansion.This expansion allows access to a causal imaginary-time Green's function directly at zero temperature and in a fashion that allows for efficient quantum computation via a modified VQE [42][43][44].In a recent paper, direct measurements of the moment expansion expectation values via VQE have been proposed to compute the Green's functions [22].However, the proposed approach required measuring an increasing number of Pauli terms at higherorder moments and as systems increase in size, which we aim to mitigate via a recursive VQE approach to avoid this issue, as we will detail below.
The key physical quantities we aim to compute on the quantum device are the spectral moments of the Green's function.This quantity, which is classified as either hole or particle type at zero temperature, is defined in each case at the order m as follows: where These can be related to the matrix-valued spectral function, A(ω) rs defined in Eq. (10), as: The spectral moments defined in Eqs. ( 14) and (15) correspond to the Taylor expansions of the imaginarytime Green's function at the discontinuity points τ = 0 − and τ = 0 + , respectively.By increasing the number of moments, the imaginary-time Green's function can be systematically approximated over longer times τ .
Once the spectral moments for the particle and hole sectors are determined up to a maximum order N mom , we can appeal to the block Lanczos algorithm [45] to constructively build an effective single-particle Hamiltonian from these moments.This single-particle Hamiltonian spans the physical system and couples to it an auxiliary system whose dimensionality grows linearly with the number of system degrees of freedom and N mom .This auxiliary system acts as a zero-temperature dynamical self-energy, allowing correlation-driven changes to the original spectrum.These changes result from the projection of the eigenstates of this effective Hamiltonian back into the physical system.This auxiliary space is built in such a way that the resulting spectrum is causal, obeys required sum rules, and exactly preserves the initially provided moments, according to Eqs. ( 16) and (17).The resulting Green's function can be obtained directly in the Lehmann representation from the diagonalization of this effective Hamiltonian, providing the residues and energies of all the poles and allowing the Green's function to be easily transformed into any domain, including imaginary time.For more details of this procedure, see Refs.42-44, while similar approaches has also recently been applied in classical perturbative electronic structure methods to expand the self-energy [46,47].
We describe the procedure for calculating the moments defined by Eqs. ( 14) and (15) using a hybrid quantumclassical optimization algorithm, similar to VQE approach for the ground state.We assume that approximated |Ψ G ⟩ and E G are already computed using VQE.To simplify the exposition, we describe the construction of the particle sector moments, with the hole moments computed analogously.
First, we prepare a variational quantum state for the single-particle excited state ĉ † s |Ψ G ⟩.Because the operator is not unitary, we represent the resultant state as the action of a unitary multiplied by a scalar as where d 0 is a coefficient and the parametrized quantum state ϕ EX (θ 0 EX ) is defined by We choose to construct this state by defining an initial state ϕ 0 EX with N + 1 electrons and ensure that our parameterization for U (θ 0 EX ) conserves the electron number of the state.
The variational parameters θ 0 EX and coefficient d 0 can be computed as follows: After transforming ĉ † s into the qubit representation, we measure the cost function on the quantum computer via a circuit similar to a Hadamard test [48,49] (see Appendix A).The variational parameters are optimized to minimize the cost function C until convergence is achieved.After this optimization, the scaling coefficient measured on the quantum device.Finally, the zeroth order moment can be computed via the sampling of M p,(0) rs We can then subsequently compute the higher order moments up to N mom with (1 ≤ m ≤ N mom ) via a recursive approach, avoiding the need to measure over increasingly large numbers of Pauli strings for higher-order moments, as considered in Ref. 22. Using ϕ The variational parameters θ m EX and constant coefficient d m are determined by minimizing the cost function, . By performing m VQE steps optimizing these states, we can calculate the moments of order m as Similar ideas of hybrid quantum-classical variational optimization of alternative functionals for computing other (e.g.dynamical) properties have also been considered in other works [17,48,[50][51][52][53] As the ansatz used in optimizing all m states |ϕ m EX (θ m EX )⟩ becomes complete, it should enable the computation of the exact moments up to order m using the described approach.However, this optimization is also subject to various types of noises, including finite sampling errors of expectation values in a physical device, as well as optimization bottlenecks.This can result in numerical errors, which would likely accumulate exponentially at high orders of m.Nevertheless, as the magnitude of the moment also increases exponentially with respect to its order, we find that the numerical relative error in these moments compared to their exact benchmarks remains almost constant (see Appendix B).Finally, we note that while this approach has been presented for the computation of single-site Green's functions and moments, off-diagonal elements corresponding to matrix-valued Green's functions are possible, analogously to the approaches in Refs.22 and 48.
UCCGSD
The UCCGSD is a generalization of a unitary coupled cluster (UCC) [56][57][58][59][60][61] written as the exponential of an antisymmetric sum of excitation operators.The UCCGSD is formulated as follows: where |Ψ init ⟩ represents a product state, while Tn (n = 1, 2) and their respective conjugates T † n are excitation operators.The excitation operators Tn are T2 = 1 4 pqrs,αβγζ where T1 is a single-particle excitation operator, and T2 is a two-particle excitation operator.The indices p, q, r, s represent spatial orbitals, and α, β, γ, ζ represent spin.The composite indices pα, qβ, rγ, sζ span all spin-orbitals N SO .In this study, we removed one-particle and twoparticle excitations that change total S z .The t αβ pq and t αβγζ pqrs are complex-number variational parameters.The number of variational parameters 4 ), where N imp represents the number of spin-orbitals of the impurity and N bath the number in the bath.
Computing ⟨Ψ UCCGSD | H |Ψ UCCGSD ⟩ is exponentially expensive on classical computers because it results in a non-truncating Baker-Campbell-Hausdorff expansion.In contrast, quantum computers can compute this expectation value directly.We use a Trotter decomposition to implement Eq. ( 22) on a quantum computer.Classical optimization of variational quantum algorithms can partially mitigate the Trotterization error [62,63], but does result in a dependence of the final state on the ordering of the individual excitation operators.As commonly done, we set the Trotter step to 1, resulting in where qβ ĉpα } |Ψ init ⟩, demonstrating that the UCCGSD ansatz incorporates single-particle basis rotations into its definition [64].
k -uCJ
Let us first define the unitary cluster Jastrow (uCJ) ansatz and then the k-uCJ ansatz [55].The uCJ ansatz is defined as follows: The matrix K is complex and anti-Hermitian.The matrix J is symmetric, and its elements are purely imaginary.The |Ψ orb ⟩ is the single-particle basis rotated state defined in Eq. ( 25).This ansatz preserves the particle's number and total S z .The scaling with ).The uCJ ansatz is motivated via a tensor decomposition process that compresses the generalized two-particle excitation operators in the coupled cluster method.This compression results in a set of operators with only two indices.Similar approaches based on tensor decomposition have been proposed in Refs.65-69.Equation ( 28) can be implemented without Trotterization, as it involves only commuting number operators.By performing the Jordan-Wigner transformation on the equation, this term ĉ † pα ĉpα ĉ † qβ ĉqβ can be simplified to 1 4 (1 − Z pα )(1 − Z qβ ).The k-uCJ ansatz differs from the uCJ ansatz in that the operators J and K are applied multiple times, resulting in the k-uCJ ansatz, where variational parameters for different i are independently optimized.In a quantum embedding calculation, a continuous hybridization can be discretized with a finite number of bath sites.In particular, for a star-like geometry, the bath sites are connected only through the impurity.The number of bath sites, N bath , required for an accurate discretization scales linearly with N imp , albeit with a significant prefactor (on the order of ten [26]).Given the significant number of variational parameters associated with the bath sites, reducing the number of these parameters is critical for efficient quantum simulation of impurity models.
We propose compact ansatzes for quantum impurity models with a star-like bath geometries.We assume that two-particle excitation operators associated with two-body coupling between bath sites are not critical in the description of the ground states and spectral moments, given that two-particle interaction terms in the Hamiltonian are localized to the impurity space, and no Hamiltonian terms directly couple the bath sites.The ansatzes incorporating this assumption are referred to as "sparse ansatzes".In the present study, we construct sparse ansatzes based on the UCCGSD and the k-uCJ.We call them sparse UCCGSD and sparse k-uCJ, denoted UCCGSD(S) and k-uCJ(S), respectively.
For the UCCGSD, we remove two-particle excitation operators that involve more than three bath orbitals.Examples of such operators that involve three or four bath orbitals are ĉ † for the sparse variant (refer to Table I).Although N bath is proportional to N imp [26], ensuring that the scaling with respect to impurity size remains the same, the significant computational savings still result since N bath ≫ N imp .
For the k-uCJ, we apply a similar motivation to remove the operators acting between different bath sites while keeping the two-particle excitation operators between the impurity and the bath.For example, ĉ † 1 ĉ1 ĉ † 2 ĉ2 is dropped, as illustrated in Figs.1(c) and (d).As summarized in Table I, the scaling of N P in the k-uCJ ansatz scales as O (N imp + N bath ) 2 , while N P in the corresponding k-uCJ(S) sparse ansatz scales as O N 2 imp .Again, the prefactor is substantially reduced when TABLE I: Number of variational parameters for the UCCGSD, UCCGSD(S), k-uCJ, and k-uCJ(S).
IV. STATE VECTOR SIMULATION
In this section, we benchmark the k-uCJ and the proposed sparse ansaztes for typical quantum impurity models.We consider both single-site and two-site impurity models with N bath = 3 and N bath = 6, respectively.All calculations in this section are based on state vector simulations of quantum circuits.
A. Numerical details
The calculations were performed using the following libraries: QCMaterialNew [70] was used as a quantum circuit simulator, which is a Julia wrapper of Qulacs [71].We used Openfermion [72] for the Jordan-Wigner transformation and to calculate the exact eigenvalues of Hamiltonians.We performed DMFT calculations using DCore [73] to generate the single-site impurity models.We used dyson [74] library, in order to compute the Green's functions poles and residues from the spectral moments, as well as benchmark exact spectral moments via exact diagonalization.
For optimizing the variational parameters, we used the BFGS algorithm.We initialized variational parameters with random numbers.We observed that setting the initial guess to zero could lead the optimization to converge to a metastable solution.For ground-state calculations with VQE using the k-uCJ, we increased the number of terms k in the ansatz one by one, reusing the optimized variational parameters.In practice, at the beginning of the VQE calculations with k terms, we randomized the variational parameters in K1 and Ĵ1 but set those in Ki and Ĵi (2 ≤ i ≤ k) to the optimized variational parameters obtained in the previous calculation with k−1 terms.This procedure ensures that the optimized energy decreases or remains nearly stable with an increasing number of terms in the k-uCJ.
It is worth noting that the initial parameters significantly influence the accuracy of the optimized ground state and spectral moments.For ground-state calculations, we conducted VQE multiple times, each with a different set of initial parameters, to find the best variational state for the ground state.We used this best variational state for computing spectral moments.
Simulations were executed using an MPI parallelized program on a workstation with an AMD EPYC 7702P 64-core processor.Solving the largest model with 16 qubits and about 750 variational parameters in the k-uCJ took about five days on 55 cores using VQE and the recursive approach.
B. Single-site impurity model
We consider the single-site impurity model with particle-hole symmetry and N bath = 3 illustrated in Fig. 2
(a). The Hamiltonian is given by
where d † 1σ (ĉ † kσ ) are the impurity (bath) degrees of freedom of the fermionic creation operator with σ =↑, ↓, and k is an index for bath sites.The U represents the onsite Coulomb repulsion, V k is the hybridization term, µ (= U/2) is the chemical potential, and ϵ k denotes the bath energy.
We obtained the bath parameters using self-consistent DMFT calculations on a square lattice at zero temperature for U = 4 (metallic phase) and U =
(insulating phase).
The nearest neighbor hopping parameter was set to 1.
For U = 4, we obtained V k = {−1.26264,0.07702, −1.26264} and ϵ k = {1.11919,0.0, −1.11919}.For U = 9, we obtained V k = {1.31098,0.07658, −1.38519} and ϵ k = {−3.26141,0.0, 3.26141}.For the k-uCJ and the k-uCJ(S), we varied k from 1 to 5 to check convergence.The markers represent the best results obtained by varying the initial parameters 50 times for each ansatz.The lightly shaded areas indicate the variation in converged results depending on the choice of initial parameters for each ansatz.In all four ansatzes, the best ground-state energies are well reproduced.We also confirmed that the k-uCJ reproduces the ground-state energy with smaller N P than the UCCGSD.Also, the results for the sparse ansatzes in Figs.3(a) and (b) show that reducing the variational parameters associated with bath sites does not compromise the accuracy of the ground-state energies.It should be noted that the sparse ansatzes are efficient even for the metal-like system (U = 4), where the electronic structure is very much delocalized across the bath sites.
The following summarizes the reduction in N P for each ansatz by using the sparseness.In the UCCGSD, N P is reduced from 334 to 104.In the k-uCJ, N P is reduced from {64, 96, 128, 160, 192} to {58, 84, 110, 136, 162} for k = 1, 2, • • • , 5. The k-uCJ(S) has a small reduction in the number of parameters for this system, but this reduction becomes more significant with increasing system size and complexity (see Sec. IV C 1).
Figures 4(a) and (b) show the reconstructed spectral functions using the moment expansion for U = 4 and U = 9, respectively.For the k-uCJ, we set k = 5.We computed the exact values of the moments using exact diagonalization (ED).As shown in Fig. 4(a), for U = 4, all the ansatzes can reproduce the peaks around ω = 0.However, the quality of reproduction drops for ω ≥ 2. These discrepancies primarily arise from numerical errors during the moment computations via recursive VQE due to the limited representational ability of ansatzes and the optimization issues.This fitting error in the recursive approach grows exponentially with N mom , which prevents systematic improvement of reconstructed spectral functions with increasing N mom .Indeed, we observed no improvement for N mom > 7, although knowledge of the exact moments up to order N mom = 5 is largely sufficient to converge the spectral function over all frequencies.
As shown in Fig. 4(b), for U = 9, by increasing N mom up to 7, all the ansatzes accurately reproduced the positions of peaks for ω ≲ 6.In general, an insulating system has fewer spectral peaks than metallic cases, allowing the moment expansion by the recursive approach to reproduce the peak positions more accurately.Still, there is some variation among the ansatzes, likely due to the fitting error, especially around the small peak near ω = 4.The spectral function shows a tiny peak near ω = 0 as shown in the inset of Fig. 4(b).This is due to the k = 3 bath site nearly decoupled from the impurity, being physically irrelevant.
Here, we aim to quantify the difference between the spectral functions reconstructed from the exact moments and those calculated via the recursive approach.To this end, we utilize the Wasserstein metric, quantifying a difference between two distributions [75,76].Figures 6(a) and (b) show the imaginary-time Green's functions computed from the reconstructed spectral function by the moment expansion for U = 4 and U = 9, respectively.We use the reconstructed spectral function from the exact moments for each N mom as reference.In the k-uCJ, we set k = 5.In computing the reference data, we filtered out peaks below ω ≤ 10 −2 that are physically irrelevant.
In Figs. 6, for both U = 4 and U = 9, the differences among the ansatzes become less pronounced in the imaginary-time Green's functions compared to the differences in the spectral function.In Fig. 6(a), for U = 4, imaginary-time Green's functions exhibit a power-law decay.This necessitates a higher N mom in the moment expansion.However, for τ > 5, we observed that increasing N mom did not improve the accuracy due to the exponential growth in the fitting error with N mom in the recursive approach.Only the UCCGSD(S) result seems to diverge from the rest.Nonetheless, its deviation starting at τ = 5 aligns with the trends observed in other ansatzes, displaying a comparable pattern.In Fig. 6(b), for U = 9, imaginary-time Green's functions exhibit an exponential decay.The Green's functions computed by the recursive approach, even at N mom = 5, match the reference data, suggesting a smaller N mom achieves convergence compared to the metallic case.
C. Two-site impurity model
We consider the two-site impurity model with particlehole symmetry and N bath = 6, shown in Fig. 2
(b). The Hamiltonian is given by
where t represents the hopping between the two impurities.For V = 0.5 and V = 0.1, we use common bath parameters: U = 4, µ = U/2, t = 1, and ϵ k = {1, 0, −1, 1, 0, −1}.The case of V = 0.5 is expected to be more metallic than V = 0.1.Figures 7(a) and (b) show |δE G | for V = 0.5 and V = 0.1, respectively.For the k-uCJ and the k-uCJ(S), k was varied from 1 to 5 to check convergence.We omitted the VQE calculation with the UCCGSD because of its prohibitively large number of variational parameters.As before, the markers represent the optimal results obtained from 20 variations of the initial variational parameters for each ansatz.The lightly shaded areas highlight the dependency of each ansatz on initial guesses.
In the three ansatzes, the ground-state energies are reproduced with comparable accuracy.
Considering N P , both k-uCJ and k-uCJ(S) are more efficient than UCCGSD(S).The results for the sparse ansatzes in Figs.7(a) and (b) show that we can reduce the number of the variational parameters associated with bath sites without sacrificing ground-state accuracy in the cluster impurity model.The sparse ansatzes are also applicable for the case of V = 0.5, which exhibits more metallic characteristics.For the k-uCJ, N P is reduced from [256,384,512,640,768] to [226,324,422,520,618]
Spectral functions
Figures 8(a) and (b) show the reconstructed spectral functions using the moment expansion for V = 0.5 and V = 0.1, respectively.We computed the reference data from the exact moments for each N mom using exact diagonalization.In the k-uCJ, we set k = 5.Similar to the previous subsubsection, the UCCGSD and UCCGSD(S) calculations were omitted due to the prohibitive number of variational parameters.
In Fig. 8(a), for V = 0.5, increasing N mom tends to enhance the representation of several spectral peaks.Yet, it remains challenging to comprehensively capture the entire structure, mainly due to the fitting error, observing no improvement beyond N mom = 7.In Fig. 8(b), for V = 0.1, by increasing N mom up to 5, all the ansatzes accurately reproduced the positions of several peaks for ω ≲ 4.This indicates that an insulating system with fewer spectral peaks offers the advantage of accurately determining peak positions.However, variations around the small peak near ω = 1 among the ansatzes likely result from the fitting error.The spectral function shows a small peak around ω = 0 as shown in Fig. 8(b).This originates from bath sites weakly coupled with the impurity, being physically insignificant.
Figures 9(a) and (b) show the computed Wasserstein FIG. 8: Computed A 1↑,1↑ (ω) for each N mom .Panels (a) and (b) show the results for V = 0.5 and V = 0.1, respectively.In the k-uCJ and the k-uCJ(S), we set k = 5.ED refers to the spectral functions constructed from exact moments using exact diagonalization.The spectrum for V = 0.1 has a tiny peak around ω = 0, as shown in the inset.
metrics between the spectral functions reconstructed from the exact moments at N mom = 7 and those computed using the ansatzes at each N mom for V = 0.5 and V = 0.1, respectively.Due to the influence of noise, the distances, especially for N mom ≧ 5, stay at higher values than those without shot noise.Still, the Wasserstein metric tends to decrease as N mom increases, which is consistent with the improved reproducibility of the spectral functions reconstructed by the moment expansion.
Figures 10(a) and (b) show the imaginary-time Green's functions computed from the moment expansion for V = 0.5 and V = 0.1, respectively, with the k-uCJ ansatz with k = 5.We computed the reference data from the reconstructed spectral function using exact moments for each N mom .In this computation, we removed the physically irrelevant peaks below ω = 10 −2 in the spectrum (see the inset of 8).In Fig. 10, for both V = 0.5 and V = 0.1, the differences among the ansatzes become less pronounced in imaginary-time Green's functions compared to the cases of spectral functions.In Fig. 10 power-law decay.We observed no improvement by increasing N P , likely due to the fitting error in computing spectral moments.In Fig. 10(b), for V = 0.1, imaginarytime Green's functions exhibit an exponential decay.The results with N mom = 5 agree with the reference data.
V. FINITE SHOT SIMULATIONS
This section investigates the effects of shot noise for the single-site impurity model with N bath = 3.We first optimize variational parameters for the ground state and the intermediate states in the computation of the spectral moments [Eqs.(18), (20)] using state vector simulations as detailed in Sec.IV.Then, we measure the expectation values of the Hamiltonian and the transition amplitude (21) for each order of the moment m with a finite number of measurements.It should be noted that the effect of the shot noise was not considered during the optimization steps.This noise affects the measured scalar values, the ground-state energy E G and coefficients d 0 , d 1 , • • • , d mom in the recursive approach [Eq.(21)].We set the number of measurements to 30,000.In all four ansatzes, statistical errors with a finite number of measurements reduce the overall accuracy compared to the results without shot noise (see Fig. 3).Still, the ground-state energies can be reproduced with comparative accuracy among ansatzes.The results for the sparse ansatzes in Figs.11(a) and (b) show that reducing the variational parameters associated with bath sites does not compromise the accuracy of E G .The accuracy of the k-uCJ (S) is lower than that of the k-uCJ for the metallic system (U = 4), which may be attributed to statistical error.show the reconstructed spectral functions using the spectral moment computed with shot noise for U = 4 and U = 9, respectively.We set k = 5 in the k-uCJ.In Fig. 12(a), for U = 4, none of the ansatzes reconstruct the spectral peaks.These discrepancies primarily stem from numerical errors in the moment calculations.It should be noted that reconstructing a spectral function from the moments is not a well-conditioned problem (although more robust than traditional numerical analytic continuation from imaginary time due to the analytic procedure).Specifically, in the shot noise simulation, such errors are attributed to statistical error, the limited representational capability of the ansatzes, and optimization issues.The effect of statistical noise is dominant when comparing the calculation results to the case without shot noise Fig. 4. In Fig. 12(b), for U = 9, the shot noise induces small shifts in the positions of several peaks for ω ≲ 6 compared to the results computed without the shot noise.There are some variations among the ansatzes, likely due to the fitting error, but generally, the agreement is much improved compared to the more metallic U = 4 results.
C. Imaginary-time Green's functions
We now compute the imaginary-time Green's functions from the reconstructed spectral functions by the moment expansion with the shot noise.Figures 13(a) and (b) show the results for U = 4 and 9, respectively.In the k-uCJ, we set k = 5.
For both U = 4 and U = 9, despite the large deviations in the spectral functions due to the fitting error, these variations are suppressed in the reconstructed imaginary-time Green's functions.The results from all the ansatzes are consistent up to τ ≈ 1; then they start to deviate.This is because the imaginary-time Green's function is insensitive to changes in the associated spectral function.In Fig. 13(a), for U = 4 with N mom = 7, due to the shot noise, the black vertical line at τ = 1 marks the earlier start of deviation, while the gray vertical line at τ = 5 indicates the start without shot noise (see Fig. 6).In Fig. 13(b), for U = 4 with N mom = 5, the results with shot noise are in good agreement with the reference data.These results indicate the moment expansion can successfully calculate the imaginary-time Green's functions under the influence of shot noise.The imaginary-time Green's function, as calculated in this way, is sufficient for performing self-consistent calculations of DMFT.After convergence, some quantities com- puted from the imaginary-time Green's function (e.g., electron occupancy) are expected to be less sensitive to noise than real-frequency spectral functions.
VI. SUMMARY AND DISCUSSION
In this paper, we proposed compact quantum circuits for quantum impurity models with a star-like bath geometry by sparsifying the UCCGSD and k-uCJ ansatz.These forms have a significant parameter scaling of N 4 SO and N 2 SO respectively, which are reduced by removing insignificant variational parameters associated with two-body coupling between bath sites.This results in a reduced number of variational parameters scaling as O(N 4 imp ) and O(N 2 imp ) for the UCCGSD(S) and k-uCJ(S) ansatz respectively.We numerically demonstrated that the compact ansatzes can accurately reproduce the ground-state energies for typical quantum impurity models, with and without shot noise.In the moment calculations for dynamic quantities, to avoid measuring more Pauli-operator terms at higher orders, we proposed a recursive method similar to VQE.We also demonstrated that, when combined with the suggested ansatzes, the moment expansion effectively computes the imaginary-time Green's function, even in the presence of shot noise.
Before concluding this paper, we consider the proposed ansatzes and the spectral moments in the context of other approaches.A previous study utilized an adaptive variational quantum eigensolver (ADAPT-VQE) for impurity models [33,77,78].While ADAPT-VQE can provide near-exact solutions with a deep circuit, it demands more measurements for gradient computation than traditional VQE.Also, its success depends on the selected operator pool, which makes it hard to compare it to other approaches.In addition, it is instructive to compare the moment expansion to alternative approaches such as the VQS approach [35], with which the method bears many similarities.The moment expansion preserves the causal nature of the spectral functions; however, it encounters growing fitting errors in the recursive approach, most significantly in metallic systems.The VQS method might handle these systems more effectively via time evolution over a longer time span.Still, it could be costly since it requires computing all variational parameters at every time step.A more detailed comparison is left in future studies.
Finally, we discuss the potential future research directions.The initial parameter selection plays a crucial role in the accuracy of ground-state energies and moments.Specifically, the accuracy of the moment is closely tied to that of the ground state.The selection of optimal initial parameters to avoid local minima should be a critical area of future studies.Moreover, minimizing the number of measurements in VQE and the recursive approach is essential in the context of utilizing near-term quantum devices.One viable solution is the efficient grouping of observables for simultaneous measurement [79].It is also important to investigate how the noise in the measured Green's function and discretization errors of the bath propagate during self-consistent calculations in DMFT and affect quantities of interest, e.g., momentum-resolved spectrum.Developing methods for suppressing such errors is crucial.When optimizing in VQE under the presence of noise, it is crucial to employ optimization methods resilient to noise [80,81].At the same time, noise mitigation techniques [82,83] are essential.Furthermore, the potential applicability of sparse ansatzes to other impurity models with star-like geometry, such as multi-orbital systems, requires further investigation.Lastly, incorporating the concept of sparsity into classical variational algorithmic approaches, such as machine learning wave functions [84,85], may improve computational efficiency.In Fig. 15(a), for U = 4, no significant difference in relative error between ansatzes was observed due to the shot noise.Still, the relative error for each ansatz remains nearly constant.
FIG. 1: Schematic illustrations for the construction of sparse ansatzes.Panels (a) and (b) show the eliminated operators that involve more than three bath orbitals when constructing the UCCGSD(S) from the UCCGSD.Panels (c) and (d) show the eliminated operators acting between the different bath sites when constructing the k-uCJ(S) from the k-uCJ.
1 ĉ † 1 N 2 bath≃ O N 4 imp
ĉ2 ĉ2 and d † ĉ † 3 ĉ3 ĉ4 , where d † (ĉ † ) are fermionic creation operators for the impurity (bath) degrees of freedom, respectively.We illustrate these operators in Figs.1(a) and (b).For the UCCGSD, this reduces the number of variational parameters N P from O (N imp + N bath ) 4 to O N 4 imp +
FIG. 2 :
FIG. 2: Two quantum impurity models used in this study.Panels (a) and (b) show the single-site impurity model with N bath = 3, and the two-site impurity model with N bath = 6, respectively.
= 9 FIG. 3 :
FIG. 3: Computed |δE G | for the single-site impurity model.Panels (a) and (b) show the results for U = 4 and U = 9, respectively.In the k-uCJ and the k-uCJ(S), k increases from 1 to 5. The markers represent the smallest errors when the initial parameters are changed 50 times.The lightly shaded areas in the figure illustrate the dependency of the absolute errors on the initial parameters.
Figures 3 (
Figures 3(a) and (b) show the absolute errors in ground-state energies (|δE G |) for U = 4 and U = 9, respectively, compared to exact diagonalization results.For the k-uCJ and the k-uCJ(S), we varied k from 1 to 5 to check convergence.The markers represent the best results obtained by varying the initial parameters 50 times for each ansatz.The lightly shaded areas indicate the variation in converged results depending on the choice of initial parameters for each ansatz.In all four ansatzes, the best ground-state energies are well reproduced.We also confirmed that the k-uCJ reproduces the ground-state energy with smaller N P than the UCCGSD.Also, the results for the sparse ansatzes in Figs.3(a) and (b) show that reducing the variational parameters associated with bath sites does not compromise the accuracy of the ground-state energies.It should be noted that the Figures4(a) and (b) show the reconstructed spectral functions using the moment expansion for U = 4 and U = 9, respectively.For the k-uCJ, we set k = 5.We computed the exact values of the moments using exact diagonalization (ED).As shown in Fig.4(a), for U = 4, all the ansatzes can reproduce the peaks around ω = 0.However, the quality of reproduction drops for ω ≥ 2. These discrepancies primarily arise from numerical errors during the moment computations via recursive VQE due to the limited representational ability of ansatzes and the optimization issues.This fitting error in the recursive approach grows exponentially with N mom , which prevents systematic improvement of reconstructed spectral functions with increasing N mom .Indeed, we observed no improvement for N mom > 7, although knowledge of the exact moments up to order N mom = 5 is largely sufficient to converge the spectral function over all frequencies.As shown in Fig.4(b), for U = 9, by increasing N mom up to 7, all the ansatzes accurately reproduced the positions of peaks for ω ≲ 6.In general, an insulating system has fewer spectral peaks than metallic cases, allowing the moment expansion by the recursive approach to reproduce the peak positions more accurately.Still, there is some variation among the ansatzes, likely due to the fitting error, especially around the small peak near ω = 4.The spectral function shows a tiny peak near ω = 0 as shown in the inset of Fig.4(b).This is due to the k = 3 bath site nearly decoupled from the impurity, being physically irrelevant.Here, we aim to quantify the difference between the spectral functions reconstructed from the exact moments and those calculated via the recursive approach.To this end, we utilize the Wasserstein metric, quantifying a difference between two distributions[75,76]. Figures 5(a) and (b)show the computed Wasserstein metric between the spectral functions from the exact moments at N mom = 7 and those using ansatzes at each N mom for U = 4 and U = 9, respectively.As N mom increases, the distance between the two distributions decreases, consistently with the enhanced reproducibility of the spectrum at large N mom .
7 FIG. 4 := 9 FIG. 5 :
FIG.4: Computed A 1↑,1↑ (ω) for each N mom .Panels (a) and (b) show the results for U = 4 and U = 9, respectively.In the k-uCJ and the k-uCJ(S), we set k = 5.ED refers to the spectral functions constructed from exact moments using exact diagonalization.The spectrum of V = 0.1 around ω = 0 has a tiny magnitude of 10 −2 , as shown in the inset.
7 FIG. 6 :
FIG.6: Computed G (τ ) for each N mom .Panels (a) and (b) show the results for U = 4 and U = 9, respectively.In the k-uCJ and the k-uCJ(S), we set k = 5.ED refers to the spectral functions constructed from exact moments using exact diagonalization.The black vertical lines in panel (a) for N mom = 7 show where the deviation of the reconstructed spectral functions from the reference data starts.
1 FIG. 7 :
FIG. 7: Computed |δE G | for the two-site impurity model.The markers represent the smallest errors when the initial parameters are changed 20 times.In the k-uCJ and the k-uCJ(S), k increases from 1 to 5. The lightly shaded areas in the figure illustrate the dependency of the absolute errors on the initial parameters.Panels (a) and (b) show the results for U = 4 and U = 9, respectively.
1 FIG. 9 :
Figures10(a) and (b) show the imaginary-time Green's functions computed from the moment expansion for V = 0.5 and V = 0.1, respectively, with the k-uCJ ansatz with k = 5.We computed the reference data from the reconstructed spectral function using exact moments for each N mom .In this computation, we removed the physically irrelevant peaks below ω = 10 −2 in the spectrum (see the inset of 8).In Fig.10, for both V = 0.5 and V = 0.1, the differences among the ansatzes become less pronounced in imaginary-time Green's functions compared to the cases of spectral functions.In Fig.10(a), for V = 0.5, the imaginary-time Green's functions exhibit a
7 FIG. 10 :
FIG. 10: Computed G 1↑,1↑ (τ ) for each N mom .Panels (a) and (b) show the results for V = 0.5 and V = 0.1, respectively.In the k-uCJ and the k-uCJ(S), we set k = 5.ED refers to the spectral functions constructed from exact moments computed by exact diagonalization.The black vertical lines in panel (a) for N mom = 7 show where the deviation of the reconstructed spectral functions from the reference data starts.
= 9 FIG. 11 :
FIG. 11: Computed |δE G | with 30000 measurements for the single-site impurity model.In the k-uCJ and the k-uCJ(S), k was varied from 1 to 5. The markers represent the best result obtained by varying the initial parameters 50 times.The lightly shaded areas in the figure illustrate the dependency of the absolute errors on the initial parameters.Panels (a) and (b) show the results for U = 4 and U = 9, respectively.
7 FIG. 12 :
FIG. 12: Computed A 1↑,1↑ (ω) with measurements for each N mom .In the k-uCJ and the k-uCJ(S), we set k = 5.ED refers to the spectral functions constructed from exact moments using exact diagonalization.Panels (a) and (b) show the results for U = 4 and U = 9, respectively.
7 FIG. 13 :
FIG.13: Computed 1↑,1↑ (τ ) 30000 measurementsfor each N mom .Panels (a) and (b) show the results for U = 4 and U = 9, respectively.In the k-uCJ and the k-uCJ(S), we set k = 5.ED refers to the spectral functions constructed from exact moments using exact diagonalization.The black vertical lines in panel (a) for N mom = 7 indicate where the reconstructed spectral functions with shot noise begin to differ from those derived from exact moments.The gray line indicates the case without shot noise (see Fig.6).
Figures 16 (
Figures 16(a) and (b) show the relative errors of the spectral moments |δM p rs |/|M p rs | with a finite number of measurements, 30000 for U = 4 and U = 9, respectively.The markers in the figure denote the mean, and the lightly shaded areas indicate the standard deviation derived from the calculation repeated ten times with shot noise for each ansatz.The sparse ansatz is generally less accurate than the original ansatz due to the shot noise.In Fig.15(a), for U = 4, no significant difference in relative error between ansatzes was observed due to the shot noise.Still, the relative error for each ansatz remains nearly constant. qβ,rγ,sζ rγ ĉqβ ĉpα } |Ψ orb ⟩ , | 11,597.6 | 2023-12-07T00:00:00.000 | [
"Physics",
"Computer Science"
] |
Descriptive Histopathological and Ultrastructural Study of Hepatocellular Alterations Induced by Aflatoxin B1 in Rats
Simple Summary Aflatoxins can affect hepatocytes, which results in a series of histological and ultrastructural changes to the cells. We investigated the hepatocellular alterations induced by aflatoxin B1 in rats. Interestingly, we observed several histopathological and ultrastructural alterations in hepatocytes, including necrotic changes and massive vacuolar degeneration. Ultrastructural examinations of treated groups revealed damage to the sinusoidal endothelium, as well as aggregations of hyperactive Kupffer cells in the space of Disse and damaged telocytes. Our findings provide novel insights into the induction of a series of irreversible adverse effects on hepatocytes by aflatoxin B1. Based on our results, we suggest future investigations for the exploration of mechanistic pathways related to these induced hepatocellular alterations. Abstract Liver sinusoids are lined by fenestrated endothelial cells surrounded by perisinusoidal cells, Kupffer cells, and pit cells, as well as large granular lymphocytes. The functional ability of the liver cells can be substantially modified by exposure to toxins. In the current work, we assessed the histopathological and ultrastructural effects of a time-course exposure to aflatoxin B1 (AFB1) on the hepatic structures of rats. A total of 30 adult female Wistar rats were randomly divided into three groups: a control group, a group orally administered 250 µg/kg body weight/day of AFB1 for 5 days/week over 4 weeks, and a group that received the same AFB1 treatment but over 8 weeks. Histopathological and ultrastructural examinations of hepatocytes revealed massive vacuolar degeneration and signs of necrosis. Furthermore, the rat liver of the treated group exhibited damage to the sinusoidal endothelium, invasion of the space of Disse with hyperactive Kupffer cells, and some immune cells, as well as Ito cells overloaded with lipids. In addition, damaged telocytes were observed. Taken together, our results indicate that AFB1 induces irreversible adverse effects on the livers of rats.
Introduction
Mycotoxins are secondary metabolites of toxigenic fungi. Aflatoxins are a family of mycotoxins produced by Aspergillus spp. [1,2]. According to the Food and Agriculture Organization, about a quarter of the crops in the world are affected by mycotoxins [3,4]. These health-harming toxins have been detected as pollutants during various agronomic processes in several regions that have warm and moist weather [2,5,6]. Aflatoxins are extremely toxic and can cause serious pollution to dietary sources. Worryingly, contamination
Chemicals
AFB1 (Sigma-A6636), a white to light yellow odorless powder, was dissolved in olive oil (used as a vehicle).
Animals and Experimental Design
All experimental and euthanasia procedures were performed in accordance with a protocol approved by the research ethics committee of the Faculty of Veterinary Medicine, Sohag University, Egypt. All procedures used in this study were approved by the Research Ethics Committee of the Faculty of Veterinary Medicine, Sohag University, Egypt. A total of 30 adult female Wister rats weighing 150-250 g were allowed to acclimatize for 7 days at the Faculty of Veterinary Medicine, Division of Laboratory Animal Health Housing Facility. The animals were maintained in a 12:12 h light/dark cycle and an ambient temperature of 20-23 • C; they were provided with food and water ad libitum. After the acclimation period, the rats were randomly divided into three groups. Group I, the control group, was further subdivided into two subgroups, each consisting of 10 rats, 5 of which were sacrificed after 4 weeks. The remaining five were sacrificed after 8 weeks (consistent with groups II and III below, respectively). In subgroup IA, the animals (n = 10) were provided with water ad libitum, fed a standard diet, and maintained without any treatment. In subgroup IB, the rats (n = 10) received the olive oil vehicle (0.2 mL/animal/day) orally through a gastric tube. In group II, the rats (n = 5) were orally administered 250 µg/kg body weight/day of AFB1 [18,19], which was dissolved in olive oil as a vehicle [19,20], through a gastric tube 5 days/week for 4 weeks. In group III, the rats (n = 5) received the same AFB1 treatment as in group II but for 8 weeks [18,21,22].
Specimen Processing and Staining
At the end of the respective experimental periods, liver specimens were obtained after whole-body perfusion of experimental rats with 4% paraformaldehyde (catalog no. 19200; lot no. 090820; Electron Microscopy Sciences (JEOL, Tokyo, Japan)). The samples were dissected and immediately fixed in 10% formalin for 24 h, after which they were dehydrated in a graded alcohol series, cleared in xylene, and finally embedded in paraffin. The tissue was cut into 3 µm thick sections and then stained with hematoxylin and eosin [23]. Histopathological observations were performed using an Olympus CX 41 RF light microscope (Olympus Corporation, Tokyo, Japan).
Ordinal Method for Validating Histopathologic Scoring
Each animal was assigned a score based on tissue histopathological examination [24]. The samples were scored quantitatively and semiquantitatively, with assessment based on the visual field inspection of a minimum of 10 sections from each group. Photographs were taken at a magnification of 40×, and the cell numbers of hepatocyte alterations (vacuolar degeneration, binucleated hepatocytes, and megalocytes) were counted in 10 randomized areas (each 1 mm 2 ) [16].
Semi-Thin Section Preparation and Transmission Electron Microscopy
Small liver specimens were fixed in 2.5% paraformaldehyde and glutaraldehyde in 0.1 M Na-cacodylate buffer (pH 7.2) for 24 h at 4 • C [24]. These samples were then washed in the same buffer before being postfixed in 1% osmic acid in 0.1 M Na-acodylate buffer for 2 h at room temperature. Subsequently, the samples were dehydrated in ascending grades of ethanol and embedded in an Araldite-Epon mixture. Semi-thin sections were cut at a thickness of 1 µm before being stained with 1% Toluidine blue; Suvarana et al. [25] described all the staining methods from Bancroft's theory as well as the histological techniques. The stained sections were first examined using a Leitz Dialux 20 microscope 35578 Wetzlar, Germany, and photographs were taken using a Canon digital camera (Canon PowerShot A95, China). For transmission electron microscopy (TEM), ultrathin sections were stained with uranyl acetate and lead citrate and then photographed under a JEOL 100 II transmission electron microscope (JEOL, Tokyo, Japan) at the Electron Microscopy Unit of Assiut University.
Digital Colorization of TEM Images
To increase the visual contrast between several structures on the same electron micrograph, we digitally colored specific elements to increase their visibility. All elements of interest were carefully hand-colored using Adobe Photoshop (version 6).
Statistical Analysis
Data were expressed as means ± standard deviations. Data from experimental groups were statistically analyzed using one-way ANOVA with Tukey's post hoc multiple comparisons tests using the GraphPad Prism software version 5 (San Diego, CA, USA). p < 0.05 was used to define statistically significant differences between the groups [26].
Results
Pathological changes in the liver tissue were observed in all experimental groups except for the control group, which exhibited an intact hepatic architecture in hepatic lobules with normal portal areas ( Figure 1A,B, and Figure 4A). Histological changes were not observed in the hepatic tissue of rats from control subgroup IB when compared with that of control subgroup IA across experimental durations.
Livers from group II rats (4-week AFB1 treatment) demonstrated central vein dilatation and congestion (Figure 2A,B) and enormous hepatic vacuolar degeneration across the entirety of the hepatic lobules ( Figure 2C and Figure 4B). Some cells exhibited mitotic abnormalities in the form of tripolar mitosis ( Figure 2B). Focal hepatocellular necrosis and Kupffer cell proliferation were observed ( Figure 4C). The interlobular vein was distended and congested with blood. Furthermore, interlobular bile duct hyperplasia with periportal fibrosis was observed ( Figure 2D). The rats in group III (8-week AFB1 treatment) exhibited severe vein congestion and thrombosis ( Figure 3A), as well as enormous hepatic vacuolar degeneration ( Figures 3B and 4D). The diameter of some hepatocytes was less than that of the other neighboring cells, and hypereosinophilic cytoplasm was observed due to the accumulation of pyknotic nuclei within the cytoplasm (Figures 3B and 4F,G). Some hepatocytes demonstrated pale or absent nuclei ( Figures 3B and 4D,F,G). Notably, the rats from group III exhibited megalocytes (hypertrophic hepatocytes) ( Figure 3C,D and Figure 4F). These hypertrophic cells cause hepatic cord disruption. Several binucleated hepatic cells were observed ( Figure 3E), as were some cells exhibiting a high rate of mitotic abnormalities in the form of tripolar mitosis ( Figure 3C). Spotty focal areas of necrosis, minute clusters of hepatocytes, the absence of adjacent hepatocytes/their replacement with lymphocytes, and proliferated Kupffer cells were all observed ( Figure 3E). Moreover, massive periportal fibrosis, bile duct hyperplasia, and excessive portal vein congestion with inflammatory cell infiltration were noted in all portal areas ( Figures 3F, 4H and 5D). In some areas, fibrosis was observed around the portal veins, which extended as tracts inside the hepatic lobules ( Figures 4I and 5D). A significantly high number of megalocytes, binucleated hepatocytes, and different patterns of mitotic abnormalities were apparent in group III compared with the indicated malignancies in the other groups. Vacuolar degeneration in group III was significantly higher than that in group II (p < 0.05), both relative to the control group ( Figure 5A). The number of binucleated cells in group III was significantly greater than that in group II (p < 0.05; Figure 5B), both relative to the control group. Finally, the number of megalocytes was significantly higher in group III than in group II (p < 0.05; Figure 5C).
Transmission Electron Microscopy
TEM was employed to further investigate the effects of AFB1 administration for 8 weeks on hepatic cells and sinusoids (Figures 6-9). In the control tissue, hepatocytes exhibited a quadrilateral shape with a rounded euchromatic nucleus. They had numerous endoplasmic reticula, mitochondria, and lysosomes ( Figure 6A). After 8 weeks of treatment with AFB1, the hepatocyte exhibited signs of vacuolation ( Figure 6B) and became largely necrosed, displaying ruptures of the plasma membrane, vacuolation, karyolysis, and the release of cellular contents ( Figure 6C). Ito cells had processes that contained lipid droplets and extended between the hepatocytes ( Figure 6A,B) or around blood sinusoids ( Figure 7B). In the treated group, these cells were overloaded with lipids ( Figures 6B and 7B) and exhibited collagen fibers ( Figure 6B). Under control conditions, the blood sinusoids exhibited an integral endothelial lining that was perforated by small pores ( Figure 7A). However, in the treated group, just a few fenestrae could be detected as most of the pores were disrupted, which had led to the formation of large gaps ( Figure 7B). Telocytes were observed around the blood sinusoid ( Figure 7A). They have a spindle-cell body, with an elongated euchromatic nucleus and two cytoplasmic processes known as telopodes (Tps). crosis and Kupffer cell proliferation were observed ( Figure 4C). The interlobular vein was distended and congested with blood. Furthermore, interlobular bile duct hyperplasia with periportal fibrosis was observed ( Figure 2D). The rats in group III (8-week AFB1 treatment) exhibited severe vein congestion and thrombosis ( Figure 3A), as well as enormous hepatic vacuolar degeneration ( Figures 3B and 4D). The diameter of some hepatocytes was less than that of the other neighboring cells, and hypereosinophilic cytoplasm was observed due to the accumulation of pyknotic nuclei within the cytoplasm ( 3D, and 4F). These hypertrophic cells cause hepatic cord disruption. Several binucleated hepatic cells were observed ( Figure 3E), as were some cells exhibiting a high rate of mitotic abnormalities in the form of tripolar mitosis ( Figure 3C). Spotty focal areas of necrosis, minute clusters of hepatocytes, the absence of adjacent hepatocytes/their replacement with lymphocytes, and proliferated Kupffer cells were all observed ( Figure 3E). Moreover, massive periportal fibrosis, bile duct hyperplasia, and excessive portal vein congestion with inflammatory cell infiltration were noted in all portal areas ( Figures 3F, 4H, and 5D). In some areas, fibrosis was observed around the portal veins, which extended as tracts inside the hepatic lobules ( Figures 4I and 5D). A significantly high number of megalocytes, binucleated hepatocytes, and different patterns of mitotic abnormalities were apparent in group III compared with the indicated malignancies in the other groups. Vacuolar degeneration in group III was significantly higher than that in group II (p < 0.05), both relative to the control group (Figure 5A). The number of binucleated cells in group III was significantly greater than that in group II (p < 0.05; Figure 5B), both relative to the control group. Finally, the number of megalocytes was significantly higher in group III than in group II (p < 0.05; Figure 5C). Hepatic megalocytes (red arrowheads), abnormalities in mitosis with tripolar mitosis (C: squares and black arrows, respectively), and Kupffer cell proliferation (D: black arrows). (E) Degenerated binucleated hepatocytes (red arrowheads); a focal area of necrosis; several adjacent hepatocytes are absent and replaced by inflammatory cells (double arrows). (F) Marked dilatation in the portal vein (star) with periportal fibrosis (arrow).
Transmission Electron Microscopy
TEM was employed to further investigate the effects of AFB1 administration for 8 weeks on hepatic cells and sinusoids (Figures 6-9). In the control tissue, hepatocytes exhibited a quadrilateral shape with a rounded euchromatic nucleus. They had numerous endoplasmic reticula, mitochondria, and lysosomes ( Figure 6A). After 8 weeks of treatment with AFB1, the hepatocyte exhibited signs of vacuolation ( Figure 6B) and became largely necrosed, displaying ruptures of the plasma membrane, vacuolation, karyolysis, and the release of cellular contents ( Figure 6C). Ito cells had processes that contained lipid droplets and extended between the hepatocytes ( Figure 6A and 6B) or around blood sinusoids ( Figure 7B). In the treated group, these cells were overloaded with lipids (Figures 6B and 7B) and exhibited collagen fibers ( Figure 6B). Under control conditions, the blood sinusoids exhibited an integral endothelial lining that was perforated by small pores ( Figure 7A). However, in the treated group, just a few fenestrae could be detected as most of the pores were disrupted, which had led to the formation of large gaps (Figure 7B). Telocytes were observed around the blood sinusoid ( Figure 7A). They have a spindle-cell body, with an elongated euchromatic nucleus and two cytoplasmic processes known as telopodes (Tps). The space of Disse, i.e., the space located between the hepatocytes and sinusoids, was infiltrated by some cells in the treated group. Kupffer cells were located in hepatic sinusoids ( Figures 6B, 7A, and 8A) and projected into the sinusoidal lumen. These cells have irregular surfaces and indented nuclei. They significantly differ in their diameter, density, and shape. In the treated groups, hyperactive Kupffer cells were observed in the space of Disse, which was characterized by large processes and contained lysosomes and phagosomes in addition to phagocytic materials ( Figure 8A). Mast cells, plasma cells, and dendritic cells (DCs) had also infiltrated the space of Disse (Figure 8). DCs had an irregular shape with a heterochromatic nucleus and multiple dendrites, such as cytoplasmic processes. These processes came into contact with lymphocytes ( Figure 8B). Pit cells with characteristic granules were observed around the blood sinusoid ( Figure 7B) and the space of Disse ( Figure 8C) in the treated groups. Telocytes were observed around the hepatocyte and blood sinusoids with characteristic cell bodies and cell processes (telopodes). Telocytes demonstrated some morphological changes in the treated groups, including the dissolution of the plasma membrane, which surrounded the cell bodies and contained scant perinuclear cytoplasm. In addition, their cytoplasm showed vacuoles and the dissociation of the telopodes ( Figure 8D and 8E).
The interlobular bile duct, which was lined by pyramidal cells with basally located nuclei, was resting on the basal lamina and was surrounded by a fibrous sheath that increased in thickness in the treated groups ( Figure 9). The space of Disse, i.e., the space located between the hepatocytes and sinusoids, was infiltrated by some cells in the treated group. Kupffer cells were located in hepatic sinusoids ( Figure 6B, Figure 7A, and Figure 8A) and projected into the sinusoidal lumen. These cells have irregular surfaces and indented nuclei. They significantly differ in their diameter, density, and shape. In the treated groups, hyperactive Kupffer cells were observed in the space of Disse, which was characterized by large processes and contained lysosomes and phagosomes in addition to phagocytic materials ( Figure 8A). Mast cells, plasma cells, and dendritic cells (DCs) had also infiltrated the space of Disse ( Figure 8). DCs had an irregular shape with a heterochromatic nucleus and multiple dendrites, such as cytoplasmic processes. These processes came into contact with lymphocytes ( Figure 8B). Pit cells with characteristic granules were observed around the blood sinusoid ( Figure 7B) and the space of Disse ( Figure 8C) in the treated groups. Telocytes were observed around the hepatocyte and blood sinusoids with characteristic cell bodies and cell processes (telopodes). Telocytes demonstrated some morphological changes in the treated groups, including the dissolution of the plasma membrane, which surrounded the cell bodies and contained scant perinuclear cytoplasm. In addition, their cytoplasm showed vacuoles and the dissociation of the telopodes (Figure 8D,E).
The interlobular bile duct, which was lined by pyramidal cells with basally located nuclei, was resting on the basal lamina and was surrounded by a fibrous sheath that increased in thickness in the treated groups ( Figure 9).
Discussion
In the current study, we identified the ultrastructural damage in hepatic parenchymal and nonparenchymal cells, as well as sinusoidal and biliary damage, after exposure to AFB1 in rats. The liver function depends on the interactions between nonparenchymal cells, hepatocytes, and the extracellular matrix they secrete. Thus, hepatocyte damage would not be detected without minimal sinusoidal and perisinusoidal lesions [27]. AFB1 is an extremely hepatotoxic agent that triggers numerous pathological changes in the liver. Moreover, hyperplasia of the bile duct, as well as fibrosis around the portal area, has also been observed with AFB1 exposure [5,[28][29][30].
The present study revealed that vacuolar degeneration and necrosis occurred in hepatocytes after oral administration of AFB1 for 4 or 8 weeks. Vacuolar hepatocellular degeneration was significantly high in the 8-week treatment group. This result is consistent with our ultrastructural observations of the hepatocytes, which were largely necrosed and demonstrated rupture of the plasma membrane, vacuolation, karyolysis, and release of cellular contents. Necrosis has been described as an unregulated type of cell death, with various cellular actions that inhibit the swelling of the cell and the rupture of the plasma membrane [31]. Necrosis is a mode of death that occurs due to extreme ATP exhaustion, for example, during toxic injury and oxidative stress with ROS formation [32]. It results in changes to cell membrane integrity that lead to ion pump damage, which is the initial process in vacuolar degeneration and cell swelling [33].
Substantial megalocytosis and binucleation of hepatocytes were observed in AFB1-treated groups; the number of megalocytes and binucleated cells was significantly higher in the 8-week treatment AFB1 group compared with the other groups. These results are in agreement with those of Kalengayi and Desmet [34], who reported that AFB1 induces tumor formation, in which cells demonstrate abundant eosinophilic cytoplasm, enlarged nuclei with prominent nucleoli, and abnormal mitosis. The regeneration of
Discussion
In the current study, we identified the ultrastructural damage in hepatic parenchymal and nonparenchymal cells, as well as sinusoidal and biliary damage, after exposure to AFB1 in rats. The liver function depends on the interactions between nonparenchymal cells, hepatocytes, and the extracellular matrix they secrete. Thus, hepatocyte damage would not be detected without minimal sinusoidal and perisinusoidal lesions [27]. AFB1 is an extremely hepatotoxic agent that triggers numerous pathological changes in the liver. Moreover, hyperplasia of the bile duct, as well as fibrosis around the portal area, has also been observed with AFB1 exposure [5,[28][29][30].
The present study revealed that vacuolar degeneration and necrosis occurred in hepatocytes after oral administration of AFB1 for 4 or 8 weeks. Vacuolar hepatocellular degeneration was significantly high in the 8-week treatment group. This result is consistent with our ultrastructural observations of the hepatocytes, which were largely necrosed and demonstrated rupture of the plasma membrane, vacuolation, karyolysis, and release of cellular contents. Necrosis has been described as an unregulated type of cell death, with various cellular actions that inhibit the swelling of the cell and the rupture of the plasma membrane [31]. Necrosis is a mode of death that occurs due to extreme ATP exhaustion, for example, during toxic injury and oxidative stress with ROS formation [32]. It results in changes to cell membrane integrity that lead to ion pump damage, which is the initial process in vacuolar degeneration and cell swelling [33].
Substantial megalocytosis and binucleation of hepatocytes were observed in AFB1treated groups; the number of megalocytes and binucleated cells was significantly higher in the 8-week treatment AFB1 group compared with the other groups. These results are in agreement with those of Kalengayi and Desmet [34], who reported that AFB1 induces tumor formation, in which cells demonstrate abundant eosinophilic cytoplasm, enlarged nuclei with prominent nucleoli, and abnormal mitosis. The regeneration of AFB1-damaged hepatocytes by natural proliferation is narrowed, particularly during prolonged aflatoxicosis [35,36]. Megalocytosis occurs as a consequence of the nuclear and cellular enlargement of cells, which exhibit dynamic DNA and protein biosynthesis [37].
It has previously been reported that AFB1 toxicity induces DNA damage [14,38]. In our study, mitotic abnormalities were documented in cells. In a previous study, the proportion of total abnormalities was relatively high and increased as the duration of AFB1 exposure was extended [39].
In the present study, AFB1 treatment caused abnormalities in the sinusoidal endothelium and in the sinusoidal and perisinusoidal cells. According to our ultrastructural observations, most of the endothelial fenestrae were disrupted, and large gaps were formed. This damage occurs in the endothelial lining, which leads to the disruption of the endothelial barrier; it has been previously reported as a consequence of pathological conditions, such as exposure to Kavian [40]. The liver endothelial filter is considered to be a critical factor in the distribution of chylomicron fragments, which in turn may lead to a fatty liver [41]. In the present study, we reported perisinusoidal fibrosis with AFB1 treatment; this has been previously observed with numerous pathological conditions, including alcoholic fibrosis and hepatocellular carcinoma [42].
Within and surrounding the blood sinusoids, we observed cells other than hepatocytes, e.g., Kupffer cells, fat-storing cells (Ito cells), and pit cells. Each cell type has a characteristic fine structure [41]. Kupffer cells are resident macrophages; they are located within the blood sinusoids and connect with the endothelium through their cytoplasmic processes [43]. In the current study, Kupffer cells infiltrated the space of Disse in AFB1-treated groups, which may have been caused by the destruction that occurred in the sinusoidal barrier. In other specific pathological conditions, Kupffer cells partially or completely infiltrate the space of Disse [42]. Conversely, the activated Kupffer cells were observed to contain many lysosomes and phagosomes. It has been well established that Kupffer cells act as both defenders against, and mediators of, hepatic damage. For instance, the dysfunction or exhaustion of Kupffer cells protects the liver against injury that can be caused by the alkylating agent melphalan [44]. In addition, the activation of Kupffer cells by toxic agents influences the release of certain inflammatory mediators, growth factors, and ROS. Such an activation helps in controlling the acute and chronic liver responses involved in hepatic cancer [45]. During cellular degeneration and necrosis, we found that Kupffer cells proliferated after AFB1 treatment, with proliferation increasing as the duration of toxicity increased. This observation is in agreement with a previous study [46]. The activated Kupffer cells in turn activate fat-storing cells to release their product, which will have already occurred during tissue damage [40]. Fat-storing cells, i.e., perisinusoidal cells or Ito cells, contain fat droplets; under pathological conditions, they become overloaded with these droplets [43]. This finding is contrary to our ultrastructural results in AFB1-treated groups. Nevertheless, Ito cells with overloaded lipids have been observed in patients with hypervitaminosis A and hepatocellular carcinoma [42]. In addition, the space of Disse in our AFB1-treated groups was infiltrated by immune cells such as pit cells, DCs, mast cells, and plasma cells. Pit cells are natural defense cells that show the morphology of granular lymphocytes containing granules; they have cytotoxic activity against immigrating tumor cells [47]. DCs are antigen-presenting cells that are a factor in the induction and regulation of immune responses [48]. The presence of mast and plasma cells was previously reported in alcoholic hepatitis and chronic hepatitis [42]. Taken together with our findings, these results indicate that aflatoxin hepatotoxicity may be immune-mediated.
Telocytes are an interstitial cell type found in various organs; they are involved in several tissue functions in addition to playing pathophysiological roles in several disorders [49]. Hepatic telocytes play a role in the function of adjacent hepatic stellate cells. Thus, the loss of telocyte function leads to stellate cell dysregulation [50]. In our study, telocytes were identified in the space of Disse in the liver, and exhibited morphological changes in AFB1-treated rats. A recent study demonstrated the role of telocytes in hepatic fibrosis; the disappearance of telocytes may influence liver hemostasis and its regeneration [51]. Different degrees of periportal fibrosis and bile duct hyperplasia were observed in AFB1-treated groups [52]. This reaction is assumed to restore damaged hepatocytes in the vicinity during liver injury [53]. Thus, these changes can be attributed to AFB1-induced hepatic injury [34,54].
Overall, histopathologic hepatocellular injury was severe in AFB1-treated rats administered 250 µg/kg body weight/day for 8 weeks. The liver is a complex organ; its function is regulated by complex interactions between the hepatocytes and nonparenchymal cells. When hepatocytes are damaged, the other cells in the liver will be affected; they may even proliferate, which leads to the formation of an excessive amount of connective tissue. In conclusion, AFB1 interferes with the homeostasis and cellular milieu of the liver, leading to severe liver damage.
Data Availability Statement:
The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions. | 6,050.8 | 2021-02-01T00:00:00.000 | [
"Biology"
] |
Qualitative and quantitative differences between taste buds of the rat and mouse
Background Numerous electrophysiological, ultrastructural, and immunocytochemical studies on rodent taste buds have been carried out on rat taste buds. In recent years, however, the mouse has become the species of choice for molecular and other studies on sensory transduction in taste buds. Do rat and mouse taste buds have the same cell types, sensory transduction markers and synaptic proteins? In the present study we have used antisera directed against PLCβ2, α-gustducin, serotonin (5-HT), PGP 9.5 and synaptobrevin-2 to determine the percentages of taste cells expressing these markers in taste buds in both rodent species. We also determined the numbers of taste cells in the taste buds as well as taste bud volume. Results There are significant differences (p < 0.05) between mouse and rat taste buds in the percentages of taste cells displaying immunoreactivity for all five markers. Rat taste buds display significantly more immunoreactivity than mice for PLCβ2 (31.8% vs 19.6%), α-gustducin (18% vs 14.6%), and synaptobrevin-2 (31.2% vs 26.3%). Mice, however, have more cells that display immunoreactivity to 5-HT (15.9% vs 13.7%) and PGP 9.5 (14.3% vs 9.4%). Mouse taste buds contain an average of 85.8 taste cells vs 68.4 taste cells in rat taste buds. The average volume of a mouse taste bud (42,000 μm3) is smaller than a rat taste bud (64,200 μm3). The numerical density of taste cells in mouse circumvallate taste buds (2.1 cells/1000 μm3) is significantly higher than that in the rat (1.2 cells/1000 μm3). Conclusion These results suggest that rats and mice differ significantly in the percentages of taste cells expressing signaling molecules. We speculate that these observed dissimilarities may reflect differences in their gustatory processing.
Background
Mammalian taste buds are onion-shaped structures specialized for the detection of aqueous stimuli. Based on morphological criteria, rodent taste cells have been classi-fied into types I, II, III, peripheral and basal cells [1][2][3][4][5][6][7][8][9][10][11][12]. Type I cells in rodents are slender and possess an electrondense cytoplasm and several long, apical microvilli extending into the oral cavity. A distinguishing feature of a type I cell is the presence of many 100-400 nm dense granules in the apical cytoplasm. Type II cells are characterized by the presence of an electron-lucent cytoplasm and large circular or ovoid nuclei. Type II cells possess several short microvilli of uniform length extending into the taste pore. Type III cells are slender and exhibit morphology and cytoplasmic electron density intermediate between type I and type II cells. The nuclei of type III cells are slender and possess prominent invaginations. Two distinguishing features of type III cells are the single blunt microvillus that extends into the taste pore and the presence of synapses onto nerve processes [11,13,14].
Only recently are the functional differences of the cell types becoming understood. Still, it is not clear which taste cell types are the receptors. Based on the presence of synaptic foci, it was believed that type III cells were the only taste bud receptor cells [15][16][17][18]. Evidence that type II cells are associated with transduction molecules, however, suggested a sensory for this cell type. For example, some type II taste cells express the taste signaling molecules αgustducin, PLCβ2, and the type III IP 3 receptor (IP 3 R3) in rat circumvallate taste buds [19][20][21][22]. It is significant, however, that type II taste cells apparently lack classical synapses. Likewise, some type III taste cells display immunoreactivity to serotonin (5-HT) in rat and mouse circumvallate taste buds [23], neural cell adhesion molecule (NCAM) [24], and synaptosome-associated protein of 25 kDa (SNAP-25) in rat circumvallate taste buds [13]. Immunoreactivity to ubiquitin carboxyl terminase (protein gene product 9.5, [PGP 9.5]) [11] and the synaptobrevin-2 (vesicle associated membrane protein-2, VAMP-2) [14] are both found in type II and III taste cells in rat circumvallate taste buds. A small percentage (3.5%) of PLCβ2 or IP 3 R3 immunoreactive cells also display 5-HT-LIR. It is believed that PLCβ2 or IP 3 R3 is also present in a small subset of type III cells in rat circumvallate taste buds [21]. Quantitation studies have demonstrated that approximately 24% of the taste cells in rat circumvallate papillae display α-gustducin-LIR [25], whereas another study showed that α-gustducin is present in 33% of taste cells in mouse circumvallate papillae [26]. PGP 9.5 is present approximately in 14.6% of the taste cells in rat circumvallate taste buds [25] and 23% of taste cells in mouse circumvallate taste buds [26]. Based on these preliminary data, it is likely that there are differences in cell type labeling between rats and mice.
Many of the electrophysiological, ultrastructural, and immunocytochemical studies on rodent taste buds have been carried out on rat taste buds. In recent years, however, the mouse has become the species of choice for molecular and other studies on sensory transduction in taste buds. Do rat and mouse taste buds have the same cell types, sensory transduction markers and synaptic pro-teins? Recent research indicates that there are differences in electrophysiological properties, expression of markers and innervation between rat and mouse taste buds [27][28][29][30]. The acid-sensing ion channel-2 (ASIC-2) is widely believed to be a receptor for acid taste in rat taste cells, however, ASIC-2 is not expressed in mouse taste cells and ASIC-2 knock-out mice exhibited normal physiological responses to acid taste stimuli [28]. ASIC-2 is an acid taste receptor in rat taste cells, but not in mouse taste cells. Rat and mouse taste buds are innervated differently by peripheral taste neurons [29,30]. Three to five ganglion cells innervate a single bud in mice while there is a more divergent innervation of buds in the rat [29,30] In the present study we have used antisera directed against PLCβ2, α-gustducin, 5-HT, PGP 9.5 and synaptobrevin-2 to determine the percentages of taste cells expressing these markers in circumvallate taste buds of both rodent species. In addition we have determined the numerical density of taste cells and taste bud volume between rat and mouse circumvallate taste buds using serial transverse sections.
Serotonin (5-HT)
Serotonin-LIR is present in a small subset of taste cells in rodent taste buds. The animal is injected with the immediate precursor, 5-HTP, according to the method of Kim and Roper [23]. Previous studies have demonstrated that serotonin is present in a subset of type III taste cells in rat and mouse circumvallate taste buds [11,23]. Our results show that a small subset of slender taste cells display serotonin-like immunoreactivity (LIR) in both rat and mouse circumvallate taste buds.
Immunoreactivity is present in both the cytoplasm and nuclei (Fig. 1). A single taste bud profile contains approximately 2.5 taste cells in rat and 2.8 taste cells in mouse displaying serotonin immunoreactivity (Table 1). We examined 141 taste buds from 5 rats and 221 taste buds from 10 mice. A total of 353 immunoreactive cells were found in the rat taste buds and 621 immunoreactive cells in the mouse taste buds were counted. There is a significant difference between rat (13.7%) and mouse circumvallate taste buds (15.9%) in the percentage of taste cells displaying serotonin-LIR (p < 0.05) (Fig. 2).
PGP 9.5
Subsets of taste cells and nerve processes in both rat and mouse circumvallate taste buds display PGP 9.5-LIR (Fig. 3). Three subsets of PGP 9.5-LIR nerve processes are present: intragemmal, perigemmal and extragemmal. Intense immunoreactivity is associated with the nerve plexus located at the base of the taste bud. Some PGP 9.5-LIR taste cells are slender, spindle-shaped cells with irreg-ular nuclei, while others have large ovoid to round nuclei. Whereas each taste bud profile in the rat contains approximately 1.7 PGP 9.5-LIR taste cells, approximately 3 taste cells per taste bud profile are immunoreactive for PGP 9.5 in the mouse (Table 1). There is a significant difference (p < 0.001) in the percentages of PGP 9.5 immunoreactive taste cells between rat and mouse circumvallate taste buds. Approximately 14.3% of the taste cells in the mouse α-gustducin α-gustducin is a G protein believed to be involved in the transduction pathways for bitter and sweet taste [31][32][33][34]. α-gustducin may also play a role in umami taste [35,36]. α-gustducin is present in a subset of type II cells [19]. Our results show that a subset of taste cells express α-gustducin-LIR in both mouse and rat circumvallate taste buds. The α-gustducin-LIR taste cells are spindle-shaped with large, round nuclei. Immunoreactivity is cytoplasmic; no immunoreactivity is associated with the nuclei.
α-gustducin immunoreactive cells extend from the basal lamina to the taste pore (Fig. 4). We analyzed 197 taste buds from five rats and 181 taste buds from ten mice. Cells were scored as immunoreactive only if the cellular profile contained a nuclear profile. We observed 635 immunoreactive taste cells in the rat and 482 immunoreactive taste cells in mouse taste buds (Table 1). Approximately 18% of the taste cells in rat taste buds and 14.6% of taste cells in mouse taste buds displayed α-gustducin-LIR. The numbers of α-gustducin-LIR immunoreactive taste cells in the rat were significantly different from those in the mouse (p < 0.01) (Fig. 2).
PLCβ2
Phospholipase Cβ2 (PLCβ2) is thought to be essential for the transduction of bitter, sweet, and umami stimuli [37]. A large subset of taste cells in both rat and mouse circumvallate taste buds display PLCβ2-LIR. The immunoreactive cells are spindle-shaped with round nuclei resembling type II taste cells (Fig. 5). We counted 935 PLCβ2-LIR cells from 152 rat taste buds and 666 PLCβ2-LIR cells from 163 mouse taste buds. Whereas 31.8% of rat circumvallate taste cells display PLCβ2-LIR, only 19.6% of the mouse circumvallate taste cells display PLCβ2-LIR. Thus, rat taste buds contain higher percentages of PLCβ2-LIR cells than mouse taste buds (p < 0.001) (Fig. 2).
Synaptobrevin-2
Synaptobrevin-2 (VAMP-2) is a synaptic vesicle membrane protein that plays an important role in the exocytosis of neurotransmitter release at the synapse [38][39][40]. Previous studies have shown that synaptobrevin-2-LIR is present subsets of both type II and type III taste cells in rat taste buds [14]. Synaptobrevin-2 is present in a large subset of taste cells and nerve processes in both rat and mouse circumvallate taste buds (Fig. 6). Approximately 35% of the cells in taste buds from rat circumvallate papillae display synaptobrevin-2-LIR [14]. Most of the immunoreactive taste cells are spindle shaped with circular to ovoid nuclei. A smaller subset of synaptobrevin-2-LIR taste cells possessed cells that are slender in shape. We examined a total 152 taste buds from five rats and 241 taste buds from Quantitation of taste cells displaying different immunoreac-tivity to markers in rat and mouse circumvallate taste buds ten mice. We found 870 taste cells displaying synaptobrevin-2-LIR in rat circumvallate taste buds, and 1290 taste cells displaying synaptobrevin-2-LIR in mouse circumval-late taste buds (Table 1). There is a significantly higher percentage of taste cells displaying immunoreactivity to synaptobrevin-2 in rat circumvallate taste buds versus mouse taste buds (31.2% vs 26.3%) (Fig. 2).
Numerical density of taste cells
Forty-one taste buds from 3 mice and 42 taste buds from 3 rats were analyzed (Table 2). Mouse taste buds contain (Table 2).
Discussion
In the present study we have demonstrated that significant differences exist between rats and mice with regard to the presence of signaling molecules and taste bud cell markers. Using unbiased systematic sampling and immunocytochemistry we have quantified the presence of signaling molecules/taste cell markers including serotonin, PGP 9.5, α-gustducin, phospholipase C β2 (PLCβ2) and synaptobrevin-2. Our results indicate that there are significant differences (p < 0.05) between mouse and rat taste buds in the percentages of taste cells displaying immunoreactivity (IR) for all five markers. Higher percentages of rat taste bud cells exhibit immunoreactivity to α-gustducin, PLCβ2 and synaptobrevin-2 compared with the mouse. Mouse taste buds however, contain higher percentages of taste cells displaying serotonin-and PGP 9.5-LIR.
Serotonin
Serotonin is a putative neurotransmitter or neuromodulator candidate in the taste bud [41,42]. Previous studies have suggested that serotonin is present in type III taste cells in rat, rabbit, and mouse taste buds [23,43,44]. Yee et al. [11] proposed that the type III cells in rat circumvallate taste buds are two of varieties: those immunoreactive for serotonin and those immunoreactive for PGP 9.5. Taste bud synapses in rat circumvallate taste buds are only associated with the type III cells [11,13,14]. Our quantitation results indicate there is a significant difference (p < 0.05) in the percentages of taste cells displaying serotonin-LIR between mouse and rat circumvallate taste buds: 15.9% of mouse taste cells contain serotonin compared with 13.7% of rat taste bud cells. Based on previous work from our laboratory, we believe that serotonin-LIR colocalizes with SNAP-25-LIR in taste cells of rat taste buds [45].
PGP 9.5
PGP (protein gene product) 9.5 is a neuronal marker that has also been found in certain types of paraneurons [46,47]. PGP 9.5-LIR has been identified in taste buds of the rat [48,49]. Previously we found PGP 9.5-LIR in subsets of both type II and type III cells in circumvallate taste buds of the rat [11]. We also observed synapses onto nerve processes from PGP 9.5-LIR type III taste cells. Whereas one subset of type III cells in the rat accumulate serotonin but do not express PGP 9.5, the remainder of the type III cells express PGP 9.5 but do not accumulate serotonin. Similarly, two subsets of type II cells exist: those immunoreactive for PGP 9.5 and those immunoreactive for α-gustducin. Our results indicate that 14.3% of taste cells express PGP 9.5 in mouse, while 9.4% display PGP 9.5-LIR in rat. Thus, the PGP 9.5-LIR subsets of type II and type III cells may constitute small percentages of those cell types. It would be of benefit for future studies to elucidate the percentages of these subsets of type II and type III cells.
Synaptobrevin-2
Synaptobrevin-2 is a vesicle-associated membrane protein. Previous results from our laboratory indicate that synaptobrevin-2 is present in a subset of type II and type III cells. Our data suggest that taste cells with synapses express synaptobrevin-2 [14]. In rat circumvallate taste buds, a large subset of synaptobrevin-2-LIR cells (73%) also express IP 3 R3 [14]. Most all IP 3 R3 immunoreactive cells have been shown to be type II cells [21]. In the present study we have found that a greater percentage of rat taste cells display immunoreactivity for synaptobrevin-2 versus the mouse (31.2% vs 26.3%). Likewise, rats have a larger percentage of taste cells expressing α-gustducin and PLCβ2. These findings suggest that proportionally there are more type II cells in rat circumvallate papillae taste buds when compared with mouse. Although type II taste cells lack classical synapses, we do find that the type II taste cells contain some vesicles in the cytoplasm. The function of synaptobrevin-2 in type II taste cells is unclear, however, it suggests that synaptobrevin-2 may play a role in vesicle protein transportation, perhaps in the Golgi apparatus.
Several investigators have used different immunohistochemical methods to quantify taste cells displaying αgustducin or PGP 9.5 in rodent animals. Ueda et al. [25] used the avidin-biotin-horseradish peroxidase (ABC) method and concluded that approximately 24.2% of rat circumvallate papillae taste bud cells display α-gustducin-LIR and 14.6% display PGP 9.5-LIR. The results in that study were based on 320 taste cells in 20 taste buds. This contrasts with our results from the rat (α-gustducin, 18%; PGP 9.5, 9.4%). This disparity may be due to: 1) The number of taste buds we sampled (α-gustducin: 197 taste buds; PGP 9.5: 144 taste buds in the present study versus approximately 20 taste buds by Ueda et al. [25]); 2) Our use of unbiased sampling; 3) Specimen preparation techniques, e.g., the use of different fixatives; 4) Immunocytochemical imaging methods e.g., ABC method vs immunofluorescence. Smith et al. [50] reported that rat circumvallate taste buds have a mean of 8.37 α-gustducin-LIR cells per taste bud. Takeda et al. [26] found α-gustducin-LIR in 33% and PGP 9.5-LIR in 23% of mouse circumvallate taste bud cells. We account for the difference in our results for the following reasons: 1) We used unbiased systematic sampling in our study; 2) We analyzed over 140 taste buds for each antibody; 3) In our study, taste cells were counted as immunoreactive only when a nuclear profile was present; 4) We counted immunoreactive taste cells using transverse sections versus longitudinal sections. In the transverse sections, there is no overlapping in taste cells, the immunoreactive taste cell profiles are obvious, and nuclei are easier to count. Takeda et al. [26] used polyclonal PGP 9.5 antibody in their study while we used a monoclonal PGP 9.5 antibody. However, our experience with polyclonal PGP 9.5 (Code No. 7863-0507, Biogenesis) is that it completely colocalizes in taste cells and nerve processes with monoclonal PGP 9.5 antibody (Code No. 7863-1004, Biogenesis). Finally, we conclude that a higher percentage of rat taste cells express α-gustducin (18%) than in the mouse (14.6%); while a smaller percentage of rat taste cells express PGP 9.5 (9.4%) versus the mouse (14.3%).
Numerical density and size of taste buds
It is generally accepted that a rodent taste bud contains 50 -150 taste cells. We were curious to determine if there are differences in the numbers of cells in circumvallate taste buds between the rat and mouse. Our results clearly demonstrate that mouse taste buds are smaller in volume, but contain a larger number of smaller taste cells when compared with rat.
Conclusion
We have provided evidence that the rat and mouse differ in the percentages of taste cells expressing each of five taste signaling molecules: serotonin, PGP 9.5, α-gustducin, PLCβ2 and synaptobrevin-2. These results, taken together with the differences taste cell size and numbers, suggest that rats and mice may possess different sensitivities to gustatory stimuli.
Methods
Adult Sprague-Dawley male rats (250-350 g, 45 days) and CF-1 male mice (25-30 g, 49 days) purchased from Charles River were used for these studies. Animals were cared for and housed in facilities approved by the Institutional Animal Care and Use Committee of the University of Denver. For studies involving serotonin, animals were injected with 5-hydroxytryptophan (5-HTP, 80 mg/kg, i.p.) one hour before sacrifice. All animals were anesthetized with ketamine HCI about 270 mg/kg body weight for rats and 370 mg/kg body weight (i.p.) for mice. Animals were perfused for 10 seconds through the left ventricle with 0.1% sodium nitrite, 0.9% sodium chloride and 100 units sodium heparin in 100 ml 0.1 M phosphate buffer (pH 7.3). This was followed by perfusion with 4% paraformaldehyde in 0.1 M phosphate buffer for 10 minutes [51]. All perfusates were warmed to 42°C before use. After perfusion the excised circumvallate papillae were fixed in fresh fixative for 3 hours at 4°C. The tissues were cryoprotected with 30% sucrose in 0.1 M phosphate buffer overnight at 4°C.
Unbiased systematic sampling method
Five adult Sprague-Dawley male rats and ten CF-1 male mice were perfused as for immunohistochemistry. Serial transverse sections (20 μm thickness) were cut from the tissues containing circumvallate taste buds using a cryostat (HM 505E, MICRON, Laborgeräte GmbH, Germany). In order to obtain a systematic sample without bias throughout the papilla, each papilla was exhaustively sectioned. The serial sections were placed sequentially into individual wells in a 36-well culture dish. Every fifth section was saved starting with section 1, 2, 3, 4, or 5. The beginning section number was determined using a new random number for each rat (e.g., sections 3, 8, 13, 18, and 23). Assuming that a taste bud is 80-100 μm in length, sampling every fifth section will assure that no two sections will be from the same taste bud. Each group of sections contains 25-30 sections from five rat circumvallate papillae. For the sections from the mouse circumvallate papilla, every third section was saved using the sampling method described above.
Immunofluorescence and nuclear staining
Cryostat sections were blocked in 5% normal goat serum and 0.3% Triton X-100 in 0.1 M phosphate buffered saline (PBS) (pH 7.3) for one hour at room temperature, followed by incubation in a primary antibody (
Controls
Primary antibodies were excluded from the processing to check for cross-reactivity. No immunoreactivity was observed under these conditions.
Quantification of immunoreactive taste cells
Confocal images were collected using a Zeiss Axioplan II with an Apotome attachment (Carl Zeiss Advanced Imaging Microscopy, Germany). Approximately 140-200 rat taste buds and 150-240 mouse taste buds per group were analyzed. Cells were scored as immunoreactive only if a nuclear profile was present in the cell. The total number of cells in the slice was determined by counting the number of Sytox stained nuclei for each taste bud. Finally, the percentage of immunoreactive taste cells was calculated by dividing the number of immunoreactive taste cells by the total number of the taste cells in each taste bud.
Determination of numerical density of taste cells in rat and mouse taste buds
After perfusion, the excised circumvallate papillae were fixed with fresh fixative for 3 hours at 4°C. The tissues were then postfixed and stained for two hours in 1% osmium tetroxide (OsO 4 ) in 0.1 M PO 4 buffer followed by a rinse in 0.05 M sodium maleate buffer (pH 5.2). The blocks were then stained en bloc in 1% uranyl acetate in 0.025 M sodium maleate buffer (pH 6.0) overnight at 4°C, followed by dehydration and embedding in Eponate 12. The blocks were the re-embedded using the technique of Crowley and Kinnamon [52].
Serial thin sections (1 μm) were cut with a Diatome Histo-Jumbo Knife using a Leica Ultracut UCT Ultramicrotome.
Typically a ribbon of about 20 sections was collected onto a glass slide. After drying on a hot plate the sections were stained with toluidine blue for 5 minutes. Images of taste buds were recorded using a Zeiss Axioplan II with an Apotome attachment. The images of taste buds were collected from every other section. Using Adobe Photoshop we compared every two adjacent images and identified the number of newly occurring taste cell nuclei. The number of taste cells in a taste bud was the sum of newly occurring taste cell nuclei that appeared in every other image in the series.
The volume of a taste bud was calculated according to following formula: Volume (μm 3 ) = Σ 1-n (37.2 × 2 × C n ) (C: number of crosses on taste bud image; n: image number; 2: the thickness is 2 μm between two adjacent images). We superimposed an image of grids (20 × 20 grids, 1 cm/grid) over the image of a taste bud profile and counted the number of crosses within a taste bud profile. Each cross represents an area of 37.2 μm 2 . Every taste bud area was multiplied by the thickness between two adjacent sections and summed to determine the volume of the section. The volumes of all of the sections were summed to obtain the volume of the taste bud.
Numerical density of taste cells in a taste bud was calculated by dividing the number of taste cells by the volume of the taste bud.
Statistical analysis
Statistical analysis for the percentages of immunoreactive taste cells in Figure 2 and the numerical density of taste cells in Table 2 were performed using the Student t-test.
Publish with Bio Med Central and every scientist can read your work free of charge | 5,411 | 2007-01-05T00:00:00.000 | [
"Biology"
] |
Scale dependence and collinear subtraction terms for Higgs production in gluon fusion at N3LO
The full, explicit, scale dependence of the inclusive N3LO cross section for single Higgs hadroproduction is obtained by calculating the convolutions of collinear splitting kernels with lower-order partonic cross sections. We provide results for all convolutions of splitting kernels and lower-order partonic cross sections to the order in epsilon needed for the full N3LO computation, as well as their expansions around the soft limit. We also discuss the size of the total scale uncertainty at N3LO that can be anticipated with existing information.
Introduction
During the past year both multi-purpose experiments at CERN's Large hardon collider (LHC), CMS [1] and Atlas [2], have observed a new boson with a mass of about 125 GeV, which is strongly believed to be the long-sought Higgs boson. The couplings of the new boson to Standard Model (SM) particles are currently compatible to the SM predictions with a minimal Higgs sector. Nevertheless, effects from physics beyond the Standard Model (BSM) may reside in small deviations of the couplings from the SM values, effects that will be, to a certain extent, accessible with the increased statistics and energy reach of the LHC high energy run starting in 2015.
The dominant production mode for the Higgs boson at the LHC is gluon fusion, accounting for about 90% of the total production cross section at the observed mass of about 125 GeV. Indeed, the Higgs boson has, up to now, been observed in channels in which its production is gluon induced. Next-to-leading order (NLO) QCD corrections for gluon fusion, in the five-flavour heavy-quark effective theory (HQET) were computed at the beginning of the 1990s [3,4]. Since then NLO corrections in the full theory including top-mass, and top-bottom interference effects were calculated by [5][6][7][8][9], and next-to-next-to-leading order (NNLO) corrections in HQET by [10][11][12]. Electroweak corrections are also available at the NLO level [13][14][15][16], and so are mixed QCD-EW corrections [17], and EW corrections to higgs plus jet including top and bottom quark contributions [18]. Recently the NNLO cross section for gluon induced higgs production in association to a jet was calculated, in a way that also allows for differential distributions to be produced [19]. All available fixed order contributions to Higgs production via gluon fusion were recently included in the program iHixs [20], which, moreover, allows for the incorporation of BSM effects through modified Wilson coefficients within the effective theory approach. The latter has been explicitly shown to be an excellent approximation for Higgs masses below the top-antitop threshold [21][22][23], and even more so for the Higgs boson at 125 GeV. Despite these advances and due to the slow perturbative convergence of the gluon fusion cross section, the remaining uncertainty due to variation in renormalisation and factorisation scale still amounts to about ∼ 9% for a 125 GeV Higgs boson at the LHC with 8 TeV centre-of-mass energy.
Beyond fixed order, threshold resummation has been performed to NNLL accuracy by traditional resummation methods [24] leading to a ∼ 7.5% uncertainty [25], and within the SCET framework [26][27][28] leading to a ∼ 4% scale uncertainty. The latter is generally considered too optimistic.
Information from the LHC high energy and high luminosity data set is projected to allow the determination of the Higgs couplings with precision of ∼ 10% or better [29][30][31]. This uncertainty includes experimental systematics and statistics, but also errors from the determination of parton distribution functions and of the strong coupling, as well as theory systematics, the latter being the limiting factor in several cases. It is evident that a prerequisite to this goal is the reduction of the theory scale uncertainty to the ∼ 5% level or lower. The question arises then, whether computing the cross section to the next order in perturbation theory, N 3 LO, within the EFT approach, an admittedly formidable task, would achieve this goal.
Information about certain N 3 LO contributions has been available for several years. The three-loop, virtual contributions have been calculated and were part of the full N 3 LO Higgs decay to gluons in [32]. However, disentangling the pure virtual contributions from this computation is not possible. The quark and gluon form-factors are known up to threeloop order [33][34][35][36]. In [37] the soft 'plus'-contributions to the N 3 LO cross section were derived using mass factorisation constraints. This allowed the authors of [37] to derive a soft approximation of the N 3 LO cross section whose renormalisation scale dependence is rather mild, resulting in 4% renormalisation scale uncertainty (keeping the factorisation scale equal to the Higgs mass). Recently further attempts to modify the resummation procedure such that its prediction at fixed order better matches the threshold and high energy limits of the known fixed order results, were made [38], resulting in another soft approximant with a scale uncertainty of 7%. It still remains true that without the full N 3 LO expression, it is difficult to judge which of these prescriptions is closer to reality.
Recently, some new ingredients of the full N 3 LO cross section have appeared. In [39,40], the real-virtual and double-real master integrals of the NNLO cross section have been calculated to higher orders in ǫ. In [41], the convolutions of collinear splitting kernels with lower-order partonic cross sections have been computed, which is also an ingredient for our result and has been re-derived in this work. Very recently, the soft limits of all master integrals appearing in triple-real radiation corrections (i.e. the emission of three additional partons) have been worked out [42].
In this paper, we compute the full dependence of the N 3 LO cross section on the factorisation and renormalisation scales, which can be obtained from lower-order results. Furthermore, we provide the soft limits of all convolutions that we calculated, which may become useful when expanding the full N 3 LO corrections around threshold. In section 2 we review how the dependence on factorisation and renormalisation scales enters higher-order calculations. In sections 3 and 4 we list the splitting kernels and partonic cross sections needed for our results and present the method used to compute their convolutions, respectively. In section 5 we give results for the estimated scale uncertainty of the N 3 LO gluon fusion cross section and conclude in section 6.
Sources of explicit scale dependence
Predictions for observable quantities in quantum field theory are independent of arbitrary scales, when calculated at all perturbative orders. The scale dependence of all predictions is an artefact of the truncation of the perturbative series, and is usually considered a measure of the effect of missing higher orders in any given computation. This dependence occurs explicitly, through terms in the final result that depend on logarithms involving the scale, and implicitly, through the running of α s and the evolution of the parton distribution functions. In this section we describe the occurrence of the explicit scale dependence.
Let us, for the moment, introduce only one scale, In dimensional regularisation the scale µ appears during renormalisation, when the bare coupling is replaced by the renormalised one, where we have chosen the MS-scheme. Z α is the renormalisation constant of the strong coupling, and the factor of µ 2ǫ ensures that the coupling and thus the action remain dimensionless in D = 4 − 2ǫ dimensions as well. We define a(µ) ≡ αs(µ) π throughout the paper. Divergences, of UV or IR nature, manifest themselves as poles in the regularisation parameter ǫ. The leading divergences, ǫ −2n , . . . , ǫ −n−1 for the n-th order correction, vanish, among real and virtual contributions, after renormalisation counter terms are included. The remaining poles of the UV-renormalized partonic cross section, starting from ǫ −n , vanish only after subtraction of collinear counterterms.
Specifically, let us denote byσ ij the partonic cross section after renormalisation 1 (which still contains divergences of infrared (IR) origin), spin,col The expansion ofσ ij can be written aŝ where we have written explicitly the pole coefficients at every order in a and the associated logarithms L f ≡ log(µ 2 /s). The relation ofσ ij to the total, inclusive cross section is given by convolution with the parton distribution functions, f i (x), and the collinear counter terms, Γ ij (x), by where summation of repeated indices is implied and τ = m 2 h /S with S the total centre-ofmass energy of the collision.
The convolution is defined by 1 Note the 1/z factor in the definition ofσij that is necessary to make eq. 2.6 work. and the collinear counter term reads The P
(n)
ij are the Altarelli-Parisi splitting kernels which govern the emission of collinear partons (see section 3.2).
Within the renormalized N n LO partonic cross section,σ ij , the logarithmic dependence on the scale µ arises when residual poles of order up to ǫ −n are multiplied with the expansion of the factor µ 2ǫ /s ǫ (the s −ǫ originating from the d-dimensional phase-space measure), These poles are required to cancel against the poles from the collinear counter terms convoluted with lower order partonic cross sections, see eq. (2.6) and (2.8). This requirement fixes the coefficientsσ (n,r) ij for −n − 1 < r < 0 which are also the coefficients of the logarithmic terms. In summary, all contributions to the N n LO cross section that are proportional to a power of log(µ 2 /s) can be obtained from calculating the convolutions of splitting kernels and lower-order, N m<n LO partonic cross sections.
The computation of all combinations of splitting kernels and partonic cross sections relevant for the N 3 LO corrections to the gluon fusion process is the main work of this publication. We calculate also all pieces of higher orders in ǫ that will add to the finite part of the N 3 LO corrections (but not necessarily to the scale dependent parts).
With the pole cancelation achieved, let us now define the finite, mass-renormalised partonic cross section withσ km now explicitly dependent on log(µ 2 /s). Alternatively, the relation above can be inverted,σ and solved for the highest order of σ one is interested in. For example, at NLO, this yields (using Γ This step-by-step procedure provides an additional test on the lower-order cross section, since a mistake in their mass-renormalisation will result in uncancelled poles at a higher order. We provide results in this framework, i.e. our convolutions involve splitting kernels and the finite partonic cross section σ ij . We can, then, set µ = µ f and use the renormalisation group equation for the strong coupling constant, to change the scale at which α s is evaluated in σ ij (µ r , µ f , z).
The third-order expansion of a(µ f ) in terms of a(µ r ) reads with L ≡ log(µ 2 r /µ 2 f ). The explicit logarithm L essentially counters the running of the coupling constant up to the order considered, such that the effect of varying the unphysical scales is weakened for higher orders in perturbation theory, i.e. the perturbative series is converging to its all-order value, as can be seen by taking the total derivative of the partonic cross section, If there are more scale dependent quantities entering the cross section, such as running MS-masses, their scale translations have to be included as well. The full dependence on the scale µ will, by construction, be of the next order in a once again.
Ingredients
From the previous section, we conclude that we need the following ingredients to obtain all convolutions required for the N 3 LO gluon fusion cross section: • The LO partonic cross section through O(ǫ 3 ).
• The LO splitting kernels P qq .
• The NLO splitting kernels P • The NNLO splitting kernels P (2) gg and P (2) gq (owing to the fact that at LO, only the gg-channel is nonzero).
Partonic cross section
We work in the effective five-flavour theory with the top quark integrated out. This approximation has been shown to be very good (less than 5%) for light Higgs masses, as can be seen by comparing the NLO results in effective and full six-flavour theory and by studying the importance of 1/m t corrections of the effective NNLO cross section [22,23]. We expect this behaviour to persist at N 3 LO.
The effective Lagrangian describing the interaction between gluons and the Higgs boson is given by where G a µν denotes the gluonic field-strength tensor. The Wilson coefficient C 1 which starts at O(a) has been computed perturbatively to four-loop accuracy [43,44] in the SM as well as to three-loop accuracy for some BSM models [45][46][47]. Through O(a 4 ), the Wilson coefficient in the SM reads with L t = log(µ 2 /m 2 t ). N F denotes the number of light flavours set to 5. The above expression denotes the renormalised Wilson coefficient, which is related to the bare one through the renormalisation constant, where we have suppressed the scale dependence of the strong coupling constant. The partonic cross section for the production of a Higgs boson through gluon fusion can then be cast in the form where we kept the squared Wilson coefficient factorised and pulled out all dimensionful prefactors, such that the ǫ 0 -piece of the leading order cross section becomes just All convolutions calculated in this work are done using theσ (n,m) ij and from here on, the term "cross section" will refer to these objects.
The sole dependence of the LO cross section on ǫ is an overall factor of (1 − ǫ) −1 = ∞ n=0 ǫ n from averaging over the D-dimensional polarisations of the initial gluons. Thus, the LO partonic cross section through O(ǫ 3 ) is trivially found to bẽ for all m = 0, . . . , 3.
At NLO, the dependence on ǫ is still fairly simple. There are only two master integrals and they are easily computed to all orders in ǫ.
The NNLO cross section through O(ǫ) necessitated the knowledge of the 29 master integrals to sufficiently high order in ǫ. The double-virtual master integrals can be found in work on the two-loop gluon form factor [48][49][50]. The real-virtual and double-real master integrals were computed by two groups independently during the last year [39,40]. The expression for the bare NNLO cross section in terms of master integrals was kindly provided to us by an author of [11].
In general, the partonic cross sections consist of three types of terms, delta-, plus-and regular terms.σ is defined via its action on a test function f (x) with a finite value at x = 1, The full expressions for the partonic cross sections through NNLO can be found in the ancillary files accompanying this arXiv publication. They agree with the ones given in [41] after compensating for the factor of 1/z that was not included in that publication.
Splitting kernels
The splitting kernel describes the probability of a parton j emitting a collinear parton i carrying a fraction x of the momentum of the initial parton. The splitting kernels are known up to three loops (P ij ) and may all be found in [51,52]. Note some different conventions that we use, though. Since we chose to expand all our results in a = αs π as opposed to αs 4π as in [51,52], our kernels P Also, since by P (n) qg we mean the emission of a single quark of a given flavour, we differ from the expression in [51,52] which parametrises the emission of any quark, by a factor of 1 2N F . Furthermore, there is also a conventional difference to the splitting kernels used in [41]. The authors of that publication use the quark-quark splitting kernel as defined in eq. (2.4) of [51]. This kernel, which we shall denote byP qq is used in the DGLAP evolution of pdfs. To compute all contributions to the N 3 LO gluon fusion cross section, we have to distinguish different initial-state channels such as qq (quark-antiquark), qq (identical quarks) and qQ (quarks of different flavour) which are convoluted with different combinations of pdfs. Thus, for channel-by-channel collinear factorisation, we require the three distinct, flavourdependent quark-quark kernels P qq , P qq , P qQ , (3.11) which describe the emission of an identical quark, the emission of an antiquark of the same flavour, and the emission of a quark or antiquark of a different flavour, respectively. The latter two kernels vanish at the one-loop order, P qQ but are nonzero for higher orders. In the notation of [52], this corresponds to the kernels P q i q j and P q iqj . The relation betweenP qq and our kernels is given bỹ We are not aware of results in [41] that involve the flavour-dependent quark-quark kernels. We close this section by giving the expressions for the four LO splitting kernels. For the lengthy higher-order kernels, we again refer to the machine-readable files accompanying this publication. The two NNLO kernels were taken from [53] in Form format and then translated to Maple input. Their regular parts were tested against the Fortran routine in [53], and their δ(1 − z) and D n (1 − z) parts were checked against [51]. (3.14)
Computation of the convolutions
In this section we will describe the method we used to compute the convolutions of splitting kernels and partonic cross sections that are needed to cancel collinear divergences at N 3 LO. Let us remark that our method is different from the technique used in [41], where the convolutions were calculated in Mellin space (where convolutions turn into ordinary multiplications) and the problem was essentially the calculation of the inverse Mellin-transform.
In the following, we restrict ourselves to a single convolution. Since the convolution product is associative, any multiple convolution appearing in the N 3 LO cross section can be obtained by repeating the steps using the result of the first convolution and the next convolutant. As already mentioned, both the splitting kernels and the partonic cross sections consist of three types of terms, delta-, plus-and regular terms.
where the regular pieces c where Li 1 (z) = − log(1 − z). HPLs can be defined recursively via the integral H(a 1 , a 2 , . . . , a n ; z) = z 0 dtf a 1 (t) H(a 2 , . . . , a n ; t) , a i ∈ {−1, 0, 1} , and in the special case where all a i = 0, the HPL is defined as H( 0 n ; z) = 1 n! log n (z) . For more comprehensive information about harmonic polylogarithms, we refer to [54][55][56][57][58][59][60]. Any convolution involving a delta function trivially returns the other convolutant (whether it be another delta function, a plus distribution or a regular function), (4.7) Convolutions involving two plus-distributions are more involved, yet no integral actually has to be solved. We comment on their calculation and list results for the required plus-plus convolutions in appendix A. For the remaining two types of convolutions, we end up with an actual integral that we need to compute, Specifically, a MPL may be a function of multiple variables that appear anywhere in the index vector (x 1 , . . . , x n ). The relation to HPLs reads H(a 1 , . . . , a n ; x) = (−1) k G(a 1 , . . . , a n ; x) , where k is the number of +1 indices in (a 1 , . . . , a n ). This sign difference is due to the fact that HPLs historically use 1 1−t as the weight function when adding a +1 to the index vector.
For more detailed information on multiple polylogarithms, see references [61][62][63] and references therein. Note that the order of the MPLs indices is often reversed. We follow the convention of [63]. The subsequent steps to solve the integrals are as follows: 1. We first remap the integral by x → 1 − x, such that the integration region becomes (0, 1 − z).
2.
HPLs with argument 1 − x and z 1−x have to be written as a combination of MPLs with the integration variable x as their argument, or no x-dependence at all. For example, where we've used that G(a; b) = log 1 − b a for a = 0. For MPLs of higher weights, one can find these translations by using the recursive definition of MPLs and changing variables in the integration. This becomes very tedious, though, so it proved to be more practical to use the symbol formalism developed in recent years [59,[63][64][65] and to follow the method presented in appendix D of [42]. For technical details, we refer the reader to said appendix.
At this stage, all integrations have been performed. The result still contains MPLs
where the variable z appears multiple times in the argument vector. Using the techniques from appendix D of [42] again, we can rewrite all the expressions in HPLs. 5. The final numerical check on the result consists of the comparison of the original integral using Mathematica's numerical integration (using the package HPL [57,58] to evaluate the HPLs numerically) and our final expression, using Ginsh, the interactive frontend of the computer algebra system GiNaC [56], for a random value of z.
The full set of convolutions can be found in machine-readable form (both Maple and Mathematica) in the ancillary files accompanying this arXiv publication. They were all compared analytically in Mathematica to the expressions given in [66], and complete agreement was found for all convolutions. For convolutions involving the two-loop quarkquark splitting kernels P qq and P (1) qQ , the results had to be combined according to eq. (3.12) to find equality.
Soft expansion of the convolutions
While the full N 3 LO corrections to the gluon fusion cross section may still be out of reach for the time being, a description in the soft limit could be feasible already in the close future. Note that this was the sucession at NNLO, as well, where the expansion of the cross section up to O((1−z) 16 ) [10,67] was published before the full computation [11,12]. The numerical agreement between the two computations proved to be excellent, so, anticipating the same behaviour at N 3 LO, the soft expansion of the N 3 LO corrections would be a very important result to obtain. The first pure N 3 LO piece of the third order soft expansion, the soft triple-real emission contribution, has recently been published [42].
In the limit z → 1, the partonic cross section (and all convolutions contributing to it) can be cast in the following form (suppressing partonic indices) We thus need to expand the regular part as a polynomial in (1 − z), times log(1 − z) terms. We proceed as follows: 1. We define z ′ ≡ 1 − z. Thus, our expressions now consist of HPLs with argument 1 − z ′ times powers of z ′ . The desired limit is z ′ → 0.
2. We want to rewrite the HPLs with argument 1 − z ′ as MPLs with argument z ′ , which results in changing the array of indices from {−1, 0, 1} to {0, 1, 2}, as can be easily seen by taking the integral definition eq. (4.10) for x 1 ∈ {−1, 0, 1} and changing variables t → 1 − t. The rewriting is achieved once again with the techniques from appendix D of [42].
3. The expansion of any MPL in its argument is straightforward, since there is a connection between MPLs and multiple nested sums [61], where the translation from MPLs to nested sums is given by The specific form of the MPL on the left-hand side of the equation above can be obtained via the scaling property, G(x 1 , . . . , x n ; z) = G(λx 1 , . . . , λx n , λz), where λ = 0 = x n . MPLs with a rightmost index of 0 must be rewritten using the shuffle product, e.g.
until all rightmost zeroes have been turned into explicit logarithms. The remaining MPLs can then be safely translated to nested sums.
The crucial point is that the variable x k only appears in the outermost sum in eq. (4.14), while the inner nested sums only depend on the x i<k , which in our case are the indices a i ∈ {0, 1, 2}. We thus easily obtain the desired expansion when we just truncate the sum over n k in eq. (4.14) at the highest power of z ′ we are interested in.
4. The validity of the soft expansions of the convolutions was checked numerically for some small values of z ′ . All soft expansions of the convolutions up to O (1 − z) 12 can be found in the ancillary files accompanying this arXiv publication.
Numerical results for the gluon fusion scale variation at N 3 LO
The total cross section for Higgs production through gluon fusion at N 3 LO depends on the factorisation and renormalisation scales explicitly, through logarithmic terms that have been derived in this work, and implicitly through the µ r dependence of α s and the µ f dependence of the parton distribution functions. In principle N 3 LO parton distribution functions should be used, but in practice, not only are they not available (nor will they be in the near future), but also their deviation with respect to the NNLO pdfs available is expected to be very small. On the other hand, the full, implicit, µ r dependence through α s , can only be estimated once the N 3 LO matrix elements are known, and in particular the coefficients 2 a are known, from mass factorisation constraints [37]. The question, then, arising is whether we can anticipate the scale uncertainty at N 3 LO with the information currently available.
To this end we parametrize the unknown delta and regular coefficients by a scaling factor K times the corresponding NNLO coefficients: There is no a priori reason why the scaling factor for the delta and the regular terms should be the same. However, it turns out that the numerical impact of the delta coefficient a is negligible (for scaling coefficients that do not break by orders of magnitude the pattern observed from lower orders), in contrast with the coefficient of the regular part, so we adopt here a common scaling factor to keep the parametrisation simple. For the same reason we use the same K scaling coefficient for all initial state channels. A loose argument about the size of K can be derived if one assumes a good perturbative behaviour at µ r = µ f = m H where all other terms of order a 5 vanish. Since a(m H ) ∼ 1/30 one expects K not to be much larger than 30. For comparison, the corresponding rescaling factors between NNLO and NLO are for m h = 125 GeV and µ f = µ r = m h . In what follows we study the inclusive cross section as a function of the scales, in the HQET approach, rescaled with the exact leading order cross section. We use the framework of the iHixs program [20] a Fortran code which contains the complete NNLO cross section for gluon fusion in HQET. The coupling α s was run to four-loop order according to eq. (2.13), while for the parton distributions, the MSTW08 NNLO set was used. Furthermore, to cross-check our results, a second implementation was programmed in C++, where the convolutions of splitting kernels and partonic cross sections were performed numerically. For both codes, the numerical evaluation of HPLs was performed using the library Chaplin [60]. The two implementations agreed for all parameter configurations that were tested.
In figure 1, the different orders of the hadronic gluon fusion cross section for the 8 TeV LHC and a Higgs mass of 125 GeV, along with several N 3 LO approximants for various numerical values of K are plotted as a function of the renormalisation scale µ r , while the factorisation scale is fixed to µ f = m h . Note that the convolutions of splitting kernels and partonic cross sections do not enter in this plot, since they are proportional to log(µ 2 f /m 2 h ). The µ r scale variation for LHC with 14 TeV centre-of-mass energy is shown in fig. 3. The µ f scale dependence, shown in figure 5 for 8 TeV centre-of-mass energy, is, as expected, extremely mild, in accordance with what is observed at NNLO. Figures 2 and 4 display the overall scale dependence, with both scales set to be equal and varied simultaneously. We note that the curves for the approximate N 3 LO cross section with various Ks spread widely in the low scale region, i.e. for µ < 30 GeV. This is not unreasonable, though, as in this regime, the unknown N 3 LO contributions that are neglected become much more important due to the running of α s . Indeed, at the lowest renormalisation scale considered, µ = m h /16 ≈ 7 GeV, the coupling becomes as big as which is supposed to cancel the implicit logarithms in the running of α s , and which becomes large and negative, thus pulls the curve down for small scales, and is canceled by the currently unknown contributions whose magnitude is small at µ r = m h but is greatly enhanced due to α s at small µ r . It can hardly be overemphasised that the above prescription does not represent a proper calculation of the N 3 LO matrix elements, but just a way of parametrising their unknown numerical importance. Once the height of the N 3 LO curve at (µ r , µ f ) = (1, 1) is set, the shape of the full curve only depends on lower order cross sections (which we know exactly), the running of α s and the parton distribution functions, respectively. As mentioned above, the unknown, numerically important coefficient functions c (3,0) gg (z) contain logarithmic contributions that are singular at threshold, log(1 − z), contributions that are regular and contributions that are singular at the opposite, high energy limit log(z). The leading and several, but not all, subleading threshold contributions are associated with multiple soft emissions and can be recovered by resummation techniques. The Indeed, by comparing our results for the µ r -dependence of the N 3 LO cross section for the dominant gluon gluon initial state, with the numbers obtained via the recently released numerical program gghiggs [38], we find agreement between the two curves when setting K to 25, as is displayed in figure 6.
While it is plausible that the leading logarithmic contributions, being threshold enhanced, capture the bulk of the cross section, it is unclear whether the unknown subleading contributions, as well as the non-logarithmic terms, are really negligible. Their importance certainly rises for the LHC at 14 TeV, as the luminosity function suppresses the region away from threshold less, resulting in more phase space for real radiation. One might, therefore, want to be conservative about their magnitude, and hence on the size of the scale uncertainty to be anticipated before the full N 3 LO result is available. Table 1 shows the estimates for various values of the rescaling factor K, covering the range from relatively mild to extremely strong N 3 LO corrections, resulting in scale uncertainties varying from 2% to as large as 8% or more. The scale uncertainties cited here are evaluated by varying the scales in the interval [m h /4, m h ]. µ f is fixed to m h and only µ r is varied. K is varied from 0 to 30. Only the gg channel is plotted, and compared to the results obtained with [38].
The choice of the central scale around which the variation is performed has been an issue of debate lately, since different choices result in slightly different scale uncertainty estimates but also in different central values for the cross section. The choice is largely arbitrary, but various indications (like improved perturbative convergence, typical transverse momentum scales for radiated gluons, average Higgs transverse momentum etc.) point to a central scale choice that is lower than the traditional one at m h , closer to m h /2. An alternative indication comes from the considerations of [68], where it is argued, looking at examples from jet physics, that a reasonable indication would be the position of the saddle point in a contour plot of the cross section as a function of µ r and µ f . In figs. 7 and 8 we show such contour plots for Higgs production at LO, NLO, NNLO and N 3 LO (for three values of the parameter K). In the cases where a saddle point exists, its position points indeed to lower scale choices, and in the cases without a saddle point the plateau region is also located in lower scales. Given the extremely mild factorisation scale dependence, the saddle point or plateau region is largely determined by the µ r plateau in all previous figures. The value on the contours is the cross section in picobarns. The x-axis is log 2 (µ f /m h ), the y-axis log 2 (µ r /m h ). Our preferred central scale choice is located at (-1,-1).
Conclusions
In this work we have presented all convolutions of lower-order partonic cross sections and splitting kernels that contribute to order a 5 to Higgs production in gluon fusion. The results agree with the ones previously published in [41]. Apart from the full expressions, we also provide all convolutions expanded around threshold, as the full N 3 LO corrections in this limit seem to be feasible in the near future.
We have also anticipated the scale dependence of the N 3 LO gluon fusion cross section, The value on the contours is the cross section in picobarns. The x-axis is log 2 (µ f /m h ), the y-axis log 2 (µ r /m h ). Our preferred central scale choice is located at (-1,-1).
into which the calculated convolutions enter. As is the case at NNLO, the factorisation scale dependence is extremely mild, at the per mille level or below. The overall scale uncertainty is driven by the renormalisation scale variation. The definite uncertainties depend on the size of the missing pure N 3 LO contributions. Scanning over a reasonable range for these contributions, we find that in the residual scale uncertainty can vary from 2% − 8% depending on the magnitude of the hard real corrections, whose computation is, to our view, a prerequisite for a solid estimate of the N 3 LO scale uncertainty. to thank Franz Herzog for pointing out the way of obtaining plus-plus convolutions by expanding a hypergeometric function. This work was supported by the Swiss National Foundation under contract SNF 200020-126632.
A. Convolutions of two plus-distributions
In the convolutions needed for collinear counterterms we face the problem of convolutions involving one or more plus-distributions. Here, we demonstrate how to obtain all convolutions containing two plus-distributions. In the second to last step, we mapped λ → 1 − λ and in the last step, the Euler definition of the hypergeometric function was used, where B(x, y) denotes the Euler Beta-function. On the other hand, we may also directly expand the integrands in I ab in terms of a delta function and a tower of plus-distributions, The above expressions agree with the ones given in [41] (eq. 22) and [70] (eq. C.28 -C.31). For the cases D 0 ⊗ D n , the combination of harmonic polylogarithms given in the references collapses to the single term − log(z) log n (1 − z)/(1 − z). | 8,418.2 | 2013-06-10T00:00:00.000 | [
"Geology"
] |
Distinguishing f(R) theories from general relativity by gravitational lensing effect
The post-Newtonian formulation of a general class of f(R) theories is set up in a third-order approximation. It turns out that the information of a specific form of f(R) gravity is encoded in the Yukawa potential, which is contained in the perturbative expansion of the metric components. Although the Yukawa potential is canceled in the second-order expression of the effective refraction index of light, detailed analysis shows that the difference of the lensing effect between the f(R) gravity and general relativity does appear at the third order when f′′(0)/f′(0)\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\sqrt{f''(0)/f'(0)}$$\end{document} is larger than the distance d0\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$d_0$$\end{document} to the gravitational source. However, the difference between these two kinds of theories will disappear in the axially symmetric spacetime region. Therefore only in very rare case the f(R) theories are distinguishable from general relativity by gravitational lensing effect in a third-order post-Newtonian approximation.
Introduction
Recently modified gravity theories have received increasingly attention in issues related to "dark energy" [1][2][3], "dark matter" [4][5][6][7], as well as non-trivial tests on gravity beyond general relativity (GR) [8]. Historically, Einstein's GR is the simplest relativistic theory of gravity with correct Newtonian limit. To pursue new physics, Weyl and Eddington even began to consider modifying GR just after it was established [9,10]. From the viewpoints of perturbutive quantum gravity, GR is non-renormalizable [11][12][13][14], while higherorder gravity theories might alleviate the problem. From the phenomenological viewpoints, there are many ways to modify GR, and some empirical approaches seem to have promising prospect, such as Dvali-Gabadadze-Porrati gravity [15], tensor-vector-scalar theory [16] and Einstein-Aether a e-mail<EMAIL_ADDRESS>b e-mail<EMAIL_ADDRESS>theory [17]. Among such extended theories, particular attention has been devoted to the so-called f (R)-gravity. This kind of theories is based on a generalization of the Einstein Hilbert Lagrangian to nonlinear functions f (R) of the Ricci scalar [18]. f (R)-gravity covers a lot of characteristics of higher-order gravity and is convenient to be operated. Hence, f (R) theories provide an ideal tool to study the possible extension of GR. f (R) theories of gravity can also be nonperturbatively quantized by loop quantum gravity approach [19,20].
To confront f (R)-gravity with observations in Solar System, one can get constraints on the theories from different measurements, such as the EötWash experiment [21], the geodesic precession of gyroscopes measured by Gravity Probe B [22] and the precession of the binary pulsars PSR J0737-3039 [23]. At cosmological scales one would expect to employ f (R) theories to account for the problems of "dark energy" [24][25][26] and "dark matter" [27][28][29] needed in GR. If f (R) gravity could account for dark matter, besides matching the rotation curves of galaxy clusters, it should also match the measurements on gravitational lensing effect [30]. However, it is shown in [31] that, in a second-order post-Newtonian approximation, a rather general class of f (R) theories is indistinguishable from GR in gravitational lensing effect. Nevertheless, we will show in this paper that a class of f (R) theories is indeed distinguishable from GR in gravitational lensing effect in a third-order post-Newtonian approximation. However, the possibility to account for the dark matter problem with f (R) theory in lensing effect is highly suppressed due to this tiny third-order difference.
This paper is organized as follows. In Sect. 2, we briefly review the field equations of metric f (R)-gravity. In Sect. 3 the post-Newtonian approximation of a class of f (R) theories is formulated to the desired order. In Sect. 4 we introduce the gravitational lensing effect in metric theories of gravity and show how the f (R) gravity can be distinguishable from GR at third-order post-Newtonian approximation. The dif-ference of the lensing reflection indices is discussed in an example. Finally, conclusions and remarks are given in Sect. 5. Throughout the paper, the metric tensor g μν takes the signature (−, +, +, +).
Field equations of f (R) theory
In metric f (R) theories of gravity, the action of gravity coupled to matter fields is given by where g is the determinant of the metric tensor g μν , χ = 8π G/c 4 with G and c being the Newtonian gravitational constant and the vacuum speed of light, respectively, R = g μν R μν is the Ricci scalar, f (R) is a nonlinear function and S M is the standard matter action. The variation of action (2.1) with respect to the metric g μν yields the Euler-Lagrange equations where ∇ μ is the covariant derivative for g μν , g := ∇ μ ∇ μ , and T μν = (−2c/ √ −g)(δS M /δg μν ) is the energy momentum tensor of matter. Taking the trace of Eq. (2.2) we can get where T is the trace of T μν . Using Eq. (2.3), we can rewrite Eq. (2.2) as
Post-Newtonian expansion
The matter constituents in the universe are usually well approximated by a perfect fluid with mass density ρ and pressure p [32]. Hence we assume that the Newtonian potential U of the mass distribution, the typical velocities v and the pressure of the fluid obey such approximation respectively.
In the post-Newtonian approximation, we can further expand the dynamical variables in the field equations perturbatively in powers of 1/c, since we have the following order relation [33][34][35]: where is the ratio of the energy density to the rest-mass density.
We consider the case that the gravitational field is weak and assume that in absence of a gravitational field the background space-time is flat [30]. We also assume that f (0) = 0, which neglects the contribution of a possible cosmological constant and excludes some form of f (R) theories, e.g, f (R) = 1/R. Note that actually the contribution of a possible cosmological constant can be equivalently substituted by the corresponding contribution of an energy-momentum tensor. Moreover, the f (R) form which is unable to get weak field solution is useless here. In a weak field regime the metric tensor can be expanded about the Minkowski metric η μν in its Lorentzian coordinate system as where |h μν | 1. Up to third order the components of the metric tensor can be written as [31,33,36]: where the left upper index (n) means the order O(n). Using Eq. (3.3) we can get the components of the Ricci tensor: Assuming f (R) to be analytic at R = 0, to the second order the Ricci scalar and thus f (R) and f (R) read, respectively, To the leading order the components of the energy-momentum tensor of matter fields read 3) yields at the second order (2) R = −χ (2) T 00 / f (0), which is consistent with the equation of GR at the same order. Thus in this approach, GR is nothing else but the first term of the Taylor expansion of a more general f (R) theory. As one can see from the action (2.1), we ask f (R) to carry the same dimension as R's. Thus both f (R) and the term f (0) (2) R in Eq. (3.5) are dimensionless. Since the term f (0) (2) R is required to be of order O(2), the expansion of Eq. (3.5) will break down if (2) To derive neat equations in the post-Newtonian approximation, we impose the gauge conditions [31,36] With the gauge conditions, we get from Eqs. (2.3) and (2.4) For the sake of physics and simplicity, we consider the case of f (0) > 0 and f (0) > 0 and define Note that in this case the constant 1/ f (0) can be absorbed into the gravitational constant G if necessary. Then from Eq. (3.9) we can get [34,36] ( with the Yukawa potential (3.14) Note that the information of a specific form of f (R) gravity is encoded in the parameter α in the potential V , and we only consider the solution with α > 0. It should be noticed that, for the other solution with α < 0, the potential V would tend to be divergent at infinity.
It is easy to show by using Eq. (3.9) that Eq. (3.10) can be written as Thus we get Using Eq. (3.13), the solution of Eq. (3.16) can be given by where the Newtonian potential U reads It is obvious that the Newtonian potential U remains unchanged for different forms of f (R) gravity. Similarly, the solution of Eq. (3.11) reads From Eqs. (3.10) and (3.11), we have Using Eqs. (3.17) and (3.19), it is easy to see that Thus from Eq. (3.13) we have Then Eq. (3.17) can be written as where the potential ψ is defined as such that ∇ 2 ψ = −2U . Hence Eq. (3.12) can be written as The solution of Eq. (3.25) reads (3.27) So up to third-order post-Newtonian approximation, the final form of the metric components reads (3.28) In contrast, the metric components to the same order approximation in GR reads [33] g 00 = −1 + 2 c 2 U, (3.29) Hence the difference between the f (R) gravity and GR comes from the Yukawa-like potential V and Z ,0i . In the limit f (0) → 0, we get α → ∞ and V → 0. Then the solution (3.28) of f (R) gravity goes back to (3.29) of GR. On the other hand, it is straightforward to see that, in the limit f (0) → ∞, we have α → 0 and hence get the most obvious departure of f (R) gravity from GR.
Gravitational lensing
A gravitational lens refers to a distribution of plates (such as a cluster of galaxies) between a distant source (a background galaxy) and an observer that is capable of bending the light from the source, as it travels towards the observer. The lensing effect can magnify and distort the image of the background source [37]. According to Fermat's principle, the world line of a light should extremize its arrival time T with respect to an observer under the variation of γ . In metric theories of gravity, this principle implies that the world line of a light coincides with a null geodesic in the spacetime.
In the Lorentzian coordinate system of the flat background spacetime, let dl 2 = δ i j dx i dx j be the spatial Euclidean line element. Up to a constant, the travel time of light on a null geodesic γ is given by where we defined the effective refraction index of light as Then Eq. (4.1) takes the form similar to that of the propagation of a light through a medium in Newtonian space and time.
Second-order expansion
At the second-order post-Newtonian approximation, the only nonzero perturbative metric components are h 00 and h i j in Eqs. (3.17) and (3.19). For a null geodesic, we have [31,38,39] Hence we can get where h is defined such that Eq. (3.19) can be written as (2) h i j = (2) hδ i j . Using Eq. (4.2), we can obtain the effective index of refraction as which at the second order reads which illustrates that at second order, the effective refraction index n is only determined by the Newtonian potential U . As shown in Eqs. (3.28) and (3.29), the difference between f (R) gravity and GR comes from the potential V rather than U . Hence one cannot distinguish f (R) theories from GR at second-order approximation by the gravitational lensing effect [31].
Third-order expansion
We now consider the third-order post-Newtonian approximation which is needed in dealing with light rays in spacetime [33]. At the third-order expansion, the line element of the metric can be written as For a null geodesic, by using Eq. (4.2) we have Thus we obtain where In the third-order approximation, Eq. (4.11) can be expressed as F( (2) h 00 , (2) h, (3) where (4) E( (2) h 00 , (2) h) represents the expansion terms at fourth order. It is obvious that the − sign in front of the function F in Eq. (4.10) should be neglected, since otherwise the refraction index n would become negative. Hence Eq.
(4.10) becomes It is easy to see that Eq. (4.13) can be solved: where u i = dx i /dt is the components of the coordinate speed of light, and n 2 := 1 + 2U (x, t)/c 2 is the refraction index at second order. Therefore, in a third-order post-Newtonian approximation, the effective refraction index of light is obviously dependent on the third-order metric components h 0i . From Eqs. (3.27), (3.28) and (3.29) one can see that, in contrast to the case of GR, in f (R) gravity h 0i is effected also by the Yukawa potential V . Hence, f (R) theories are in principle distinguishable from GR by gravitational lensing effect in a third-order post-Newtonian approximation.
Differences: an example
Although the difference of lensing effect between f (R) gravity and GR is encoded in the third-order terms, it is still unclear whether the difference can actually be detected at this order and in which case the departure become most obvious.
To answer these questions, we first recall from Eq. (4.14) that the difference at third-order effect is contained in the difference of the metric components h 0 j between f (R) gravity and GR, which reads A straightforward calculation leads to are two monotone decreasing functions. Hence the lensing refraction indices of f (R) gravity and GR will have the biggest departure in the limit of α → 0, which reads One may notice that this is nothing else but the potential ψ ,0i appearing in the third-order post-Newtonian approximation of GR. Thus, in the case of the highest departure, the difference of the lensing refraction indices is at the same third order of GR. By noticing that the functions g and h satisfy the relation where d = |x − x |, Eq. (4.16) can be further simplified. In terms of cylindrical coordinates {r, θ, z}, Eq. (4.16) can be written as where we used the identity d θ |x − x | = d θ r 2 + r 2 − 2rr cos(θ ) + (z − z ) 2 = rr sin(θ )/|x − x | and θ = 0.
In an axially symmetric spacetime, it is reasonable to consider the case that the velocities v of the gravitational sources are all tangent to the r -θ plane. Then from Eq. (4.20) one gets Therefore, in an axially symmetric spacetime region, which coincides in most cases with those of galaxies and compact objects, one cannot distinguish f (R) theories from GR by the lensing correction to the third-order term. This result suggests that there are few opportunities to distinguish f (R) theory from GR even in a third-order post-Newtonian approximation. However, the difference n will not vanish in a nonaxially symmetrical spacetime region, thus one could detect the difference in principle. Since the potential appearing in Eq. (4.20) without function g are third-order post-Newtonian terms, the order of n is determined by the order of the functions g(αd) (or g(αd) and h(αd), with the same order as shown in Fig. 1. It is shown that, for α ≤ 1 d , n will be around 10 −1 times the 3rd GR terms, and thus keeps the same third order. However, for α ≥ 10 d , n will be less than 10 −2 times the third-order terms, and hence is indistinguishable with the fourth-order term. This estimation would approach the exact result for the spacetime region far away from the matter center. Then the functions g and h could be approximated by the values g(αd 0 ) and h(αd 0 ), where d 0 is the distance of the position to the matter center. Thus one could write the first-order approximation of (4.16) as Fig. 1 The evaluation of the functions g(αd) and h(αd) respect to αd in which the whole integration is approximated by the integration of the region around the center where most part of the matter locate. Therefore, in highly non-axially symmetric spacetime region, it is possible to distinguish the lensing of GR from those of the f (R) theories satisfying α ≤ 1 d 0 . It also requires that the measurement can approach the 10 −1 precision of the third-order effect.
Concluding remarks
In this paper, the post-Newtonian approximation of a general class of f (R) theories is formulated up to third order. In the third-order expansion, the metric components contain not only the Newtonian potential U but also the Yukawa potential V together with the third order potentials. Note that f (R) theories can be transformed into generalized Brans-Dicke theories by suitable conformal transformations. Since the post-Newtonian formulation of Brans-Dicke gravity has been well studied [33], one can check the consistency of the post-Newtonian formulations between the two kinds of theories. It turns out that, in the limit of α → 0, our result (3.28) of f (R) gravity coincides with the result of Brans-Dicke gravity given in [33]. The proof will be presented in Appendix A.
In our post-Newtonian formulation, the information of a specific form of f (R) theories is contained in the Yukawa potential. While the Yukawa potential does not show in the second-order expression of the effective refraction index n of light, it does appear in the third-order expression of n. Therefore in principle we could distinguish f (R) gravity and GR. Moreover, detailed analysis shows that a series of f (R) forms, more specific, whose parameter 1/α ∼ f (0)/ f (0) is larger than the distance to the massive center, are distinguishable from GR by the gravitational lensing effect in a third-order post-Newtonian approximation. It should be noted that the conclusion that f (R) theories can lead to the gravitational lensing effect different from that of GR can also be obtained by the approach of Minkowski functionals [40]. Moreover, it should be pointed out that the third-order perturbations are also distinguishing f (R) theories from GR in view of the Birkhoff theorem [41]. However, it is shown in this paper that, in the axially symmetrical spacetime region, the gap term between these two kinds of theories vanishes and hence they are indistinguishable at third order.
One of the motivations for developing modified gravity theories is to account for the observed mass profiles in galaxies as well as clusters of galaxies without the inclusion of dark matter. The existence of dark matter in GR is confirmed by the observational data not only from the dynamical analysis, such as rotation curves in spiral galaxies [42] and velocity dispersions in early-type systems [43,44], but also from gravitational lensing observations [45,46]. Observations indicate that we need to take into account almost the same large amount of dark matter to explain the gravitational lensing effect as that for the dynamical data like the velocity dispersion or the temperature profile of the X-ray emitting intracluster medium [47,48] in galaxy clusters or spiral galaxies. Up to now, certain f (R) theories are tested by the dynamical data in galaxy clusters and spiral galaxies [27][28][29][49][50][51]. However, concerning the gravitational lensing observations, our results here show a disfavor of the attempts in this direction. For any f (R) form which could be weakly expanded, the lensing effect correction due to the f (R) will be at most of third order, which is at most 10 −2 times of the leading order, i.e., the second-order post-Newtonian effect. Moreover, the fact that in axially symmetrical spacetime region there is no difference in lensing effect between these two kinds of theories strongly indicates that most of the lensing observations will not show the difference even at third order. Thus it is impossible to explain the lensing observations in the pure f (R) theories that we are considering without any dark matter involved. Actually there is already some evidence implying that f (R) theories without dark matter behave badly for galaxy clusters [52].
It is still possible to determine the parameter α 2 := f (0)/ (3 f (0)) though the precise observational results in non-axially symmetric system. Thus in near future, precise observations of lensing effect would be useful to distinguish certain f (R) theories from GR. It should be remarked that our result is only valid for the f (R) forms which could be weakly expanded. It is interesting to further study whether the dark matter content can be replaced by other unexpandable f (R) theories or other kinds of modified gravity.
ons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Funded by SCOAP 3 . | 4,862.2 | 2017-10-30T00:00:00.000 | [
"Physics"
] |
Pressure Induced Changes in Grain Boundary Conditions of Lithium Conducting Ceramics Characterized by Impedance Spectroscopy
Solid state batteries, particularly for lithium ion based architecture have been the focus of development for over 20 years and are receiving even more attention today. Utilizing impedance spectroscopy (IS) measurements we investigate the response of conductivity versus incremental pressure increase by a piston-cylinder-type high pressure cell up to 1 GPa for some lithium conducting ceramics: LATP (Li 1.3 Al 0.3 Ti 1.7 (PO 4 ) 3 ), LLTO (Li 5 La 3 Ta 2 O 12 ), LLT (Li 0.33 La 0.55 TiO 3 ), LAGP (Li 1.5 Al 0.5 Ge 1.5 P 3 O 12 ) and LLZO (Li 7 La 3 Zr 2 O 12 ) for non-annealed and annealed samples. Isothermal, incremental pressure increase of powders allows for an in situ observation of the transition state conditions of poorly consolidated ceramic powders and the effects on grain boundary conditions prior to sintering. Specific conductance ( 𝜎 𝑏 ) increased by several orders of magnitude in some samples, approaching 10 -3 S ∙ cm -1 , yet decreased in other samples. The affect of grain boundaries and affects of bulk capacitance as the sample dimensions are altered due to pressure, are attributed to some of this behavior and will be discussed. The understanding of some of these fundamental processes may be valuable in facilitating these and similar ceramics for use in commercial solid state battery systems.
Introduction
Lithium ion batteries have been in widespread production and use in personal electronics since the early 1990s. Since then advances have occurred incrementally, with improved performance, higher energy density, and smaller battery size. Batteries that once were quite large are now very condensed, while maintaining the same power and energy storage as much larger cells. This is desirable for lightweight portable electronics, but with the side effect of improved performance these much smaller batteries are doing the same amount of work as their larger forerunners and heat evolution is an unavoidable consequence. Battery ignition, fires, and explosions of lithium cells has resulted in many changes in policy, regulations, and law, notably that free lithium batteries are not permitted in carry on bags with most airlines.
Other considerations are the affects of pressure and the affects on battery degradation under load which would be important with the development of commercial solid state battery systems.
Several solutions have resulted from this problem such as increased interest in the areas of inflammable electrolytes, better battery design, and solid state lithium batteries among others. It is widely stated in publications over the last several years that solid state batteries (SSB) are one solution for problems lithium ion batteries (LIB) such as dendrite formation, over heating, etc.
Generally speaking, solid state batteries are as not widely available, and only recently emerging on the commercial market most notably with electric vehicles.
Here, we look at five commercially available lithium containing ceramics as candidates for the solid electrolyte for SSB [1][2] . As a solid state electrolyte, ceramics are better suited for higher temperatures 3,4 which can simultaneously address the issue of safety as well as performance.
In an effort to reconcile grain boundary conditions and resulting affects on ion conductivity (σb), each powder was analyzed by impedance spectroscopy in situ to observe changes in grain boundary conditions. The grain boundary phase 12 may be detected separately 8,12 , or inclusively 13 , but it was not found that in situ studies of grain boundaries have been characterized in this way. We should first define the grain boundary as not only the area of grain-grain contact, but also the adjacent regions and voids where grains are in very close proximity without touching. This definition of grain boundaries has been inferred previously in other publications [14][15] . For simplicity this interpretation is used here as these grain boundaries are variable and expected to change (along with sample thickness) under increased pressures.
Identified in 1968 16 , LATP belongs to space group R3C [17][18] . Considered a super-ionic conductor 19 and air stable 20 , the crystal structure most similar to NASICON ( Na1+xZr2SixP3−xO12, 0 < x < 3 ) 21 which has been studied intensively since the 1980s for use in sodium or lithium sulfur batteries 18,22 . LATP has also been described as rhombehedral 23 . Ion migration pathways in LATP have been modeled for both sodium 24 and lithium 25,14 so much is already known about the bulk ceramic and single crystal form. LAGP also has a NASICON-like structure 26 , sometimes referred to as LISICON 25 and is less studied than LATP. LATP and LAGP are thus promising due to the super ionic conduction and resistance to lithium metal 2,20 . Conversely LATP Lithium reactions have also been reported 5,20,27 leaving the door open to additional characterization. The presence of trivalent metal ions in the structure can improve bulk ion conductance (σb) 28-31 but this is not expected to be the case here.
LLZO as received has a garnet structure. When calcined at 1150°C or by other methods [32][33] LLZO recrystalizes to a cubic phase (c-LLZO) 20,[34][35] . The cubic phase is considered a super ion conductor 7 , but is less stable than the garnet phase and was not investigated here. Doping of LLZO with aluminum is possible and has been shown to improve conduction 29,36 , but was avoided here with the use of quartz sample boats for heating.
First mentioned in literature in 1993 37 , LLTO is considered an exceptional ion conductor 5 that is temperature independent 38 . Because of high grain boundary impedance (ZGB), some say that LLTO is a better insulator than it is an ion conductor 39 . Contrarily, Sakamaki et al report that partial grain boundaries have higher current flow, and the high grain boundary resistance can be overcome 40 .
Perovskite LLT has been well characterized as a bulk material 11 and having high σb, but also dominated by grain boundary resistance even under pressure [41][42][43][44][45] . This reasoning is supported by Inaguma et al whom describe elimination of the grain boundary resulting in improved conductance for LLT 46 . On the other hand, LLT is also reported as reactive with lithium metal 46 .
If lithium metal could be used for anode material, the energy densities of 3860 mAhg −1 and oxidation potential of -3.04V vs. SHE would be a significant step in LIB development, but this has so far been a poor candidate due to high reactivity, and dendrite formation 47 . This includes solid state electrolytes 48 and the solid electrolyte interphase (SEI) formed when using lithium metal anodes.
We hypothesize that by characterizing lithium ion conducting ceramics in situ that we can learn more about the grain boundary interactions and perhaps lay the groundwork for future modeling of these and similar systems. By using commercially available materials, we expect there to be more reproducibility and consistency than experimental or synthesized powders which would add more variables to the study. While used as received, powders were also annealed to relieve internal stressors, and cracks, for a more uniform and optimized powder free from such physical defects as much as possible for additional tests for comparison, while heating below temperatures that would effect recrystallization or sintering.
Materials
Ceramics were used as received from Toshima Ltd. (Japan). For sample testing, a KBr pellet press (international crystal laboratories) with vacuum capability was used without modification.
Non-porous alumina tubing and E52100 alloy steel were purchased from McMaster Carr.
Impedance spectroscopy was taken with a Solartron 1296 interface and a Solartron 1260 gain phase analyzer. Impedance data was analyzed using Zview®, and any images were processed in ImageJ MBraun glovebox with typical O2 levels below 5ppm.
Methods
Annealing of powders was done in a tube furnace in alumina boats (except LLZO to avoid Al doping), under argon at specific temperatures for each ceramic. Annealing time was not less than 4 hours for any sample. Reduction of LATP occurred, resulting in the characteristic blue color which was reversible upon subsequent heating in air. The function of annealing reduces average particle size, as explained by Jackman et al 49 : γf is fracture surface energy, and Δαmax is the difference in maximum and minimum principal axis of a unit cell, or grain. Annealing was used in an attempt to procure more pristine grains, and perhaps narrower grain size distribution.
Each sample was subjected to pressure in custom test cell for in situ impedance testing. It was convenient to use a pellet press, originally made for FTIR salt pellets, with a removable alumina tube cut to size was used as an insulator inside the sample holder. Positioning the sample holder and steel shaft reduced incident of alumina tube breakage, but likewise under such pressures cracking of the alumina tube did not compromise the sample, as powders are not fluid and pelletize when subjected to pressure. The polished anvil, part of the pellet press, served as one electrode surface, which was in contact with the metal pellet press, and The hardened steel shaft served as the other electrode. The sample holder was set on a blank electronic"bread board" which does not compress, to insulate from the hydraulic press surface.
About 0.2 grams of powder is introduced into the tube. The steel shaft is inserted into the tube, and O-rings were used on both shaft and tube to facilitate vacuum. The hydraulic press was used to apply pressure such that the pressure gauge registered, and was referred to here as 0.00GPa pressure.
The sample powder was tested under a range of pressure from 0.00 to 1.00 GPa, at incremental steps. The pressure forces closer fitting of grains, streamlining the ion migration 50 by closing the grain-grain voids, and forcing more grain-grain surface contact. Alternatively, this conductance may improve simply by increasing the sample density 21
Impedance Spectroscopy (IS)
Bulk resistance is derived using equation 2, and the ion conductance by equation 3. It is important to indicate that ion conductivity is an intrinsic quantity characterizing the entire (bulk) sample. For this particular work the sample thickness, and volume, change with pressure. Bulk resistance (Rb) is described by equation (2) and bulk conductance (σ or σb) by equation (3): In equation 2, ω = angular frequency and = bulk capacitance. Equation 3 has two parts, the convention for conductance at constant conditions, and the change in conductance roughly calculated dependent on a change of sample thickness (∂l), and resistance, ∂Rb. A is electrode surface area in contact with sample, and l is sample thickness.
Determination of sample thickness in situ was determined by the difference of the height of the hydraulic press from the top of the sample holder after settling at each pressure increment, and the height of an empty cell with some pressure applied. A feeler gauges and micrometer were used exhaustively in determination of sample thickness.
Impedance spectra were typically taken from 10MHz to 0.075 Hz for each sample, at pressure increments from 0 to 1.00GPA pressure (0.75 GPa was used in some samples, higher pressures were not always sustainable).
Electric modulus is useful for materials characterization 15,[51][52][53][54][55] . Likewise tan δ, and permittivity ε * ( * =´− ´´, = √−1) may also be used in this case to characterize the changes in grain boundary conditions and were useful in determining the correct equivalent circuit. Bulk changes in the ceramic powder grains were not observed nor expected below 1 GPa.
In part due to scaling, Nyquist plots can be inconvenient to discern phases (figure 2) as the semi circles, or time constants may differ in size by orders of magnitude. Imaginary modulus (M") was used as it can discern the grain boundary phase 15 as a local maxima (Figure 3), defined by Also M"(ω) = ´+´´ (5) In eq. 4 and 5, e0 = 8,854pF/m -permittivity in vacuum , C0= capacitance, and ε´/ ε´´ are real and imaginary permittivity (ε´is also known as dielectric constant, and ε´´ as dielectric loss). Due to the three dimensional nature of the materials, it was necessary to construct an equivalent circuit using parameters beyond this work, such as loss tangent, admittance, permittivity and so on with an exact match for this system found using the model in figure 4, with grain and grain boundaries labeled, the electrical contacts (E1 is the polished anvil/ceramic interface, and E2 is the non-polished steel shaft/ceramic interface) were also detectable using IS.
It is described by Behera et al 55 However literature to this point does not discern between and . The difference can be explained by changes in mean proximity of the grains or the reduced inter-grain hopping distance, which in turn would increase both ion hopping frequency and ratio of complete hops from one site to another resulting from this reduction in activation energy while other conditions remain constant.
X-ray diffraction studies
Initially samples were examined for crystal structure which matched well to the product diffractograms provided from the manufacturer (Figure 3a). Subsequent diffractograms were taken for treated heat and/or pressure treated samples for comparison, to ensure crystal changes were not occurring for samples of interest. With the exception of twice annealed LATP and LLT, no crystal changes were observed. These are discussed further in the supplemental information.
Annealing
Annealing temperatures were determined by literature, and by experimentation. The aim was to relieve surface stress and optimize the ceramics. Annealing was conducted in an argon tube furnace, and in air for comparison. Some samples were reversibly reduced in argon indicated by a change in color, which was reversible by subsequent annealing in air. LATP when twice annealed in this fashion showed great improvement in conductivity. However, small crystal changes were observed, and were not investigated in this work. This is referred to hereafter as twice annealed LATP.
Subsequent annealing in air resulted in LATP returning to a normal white color.
IS pressure studies
In General, conductance increased for LAGP, LATP, and LLZO samples with increased pressure. We think that the intra-grain structure of the pressed sample collapses at some threshold pressure from mechanical failure, changing the system and thus the conductivity values. Unlike the other ceramics, LLTO samples decreased in conductance probably by affects described elsewhere 40 and LLT did not have a reproducible response.
Using IS, grain boundary impedance (ZGB) is typically much larger by several orders of magnitude than the bulk impedance of the ceramic crystal (ZB). In these results the signal for bulk impedance (grains) are overwhelmed by the grain boundary impedance signal. In the high frequency range, the boundary is short circuited and capacitively conducting and negligible in magnitude, while the lower frequency 2 nd semicircle is in the time domain, and more closely resembles direct current in natures, mostly grain-to-grain conduction that varies with grain proximity 42 .
As the pressure increases within the confined space grains are forced closer, and rearrangement for stability until some upper mechanical limit is reached resulting in fracture and collapse of grain arrangement. Samples respond differently, many reaching a threshold where conductance abruptly increases, while others decrease.
This mechanical breakage resulting in smaller grains, and changing surface area (SA) the distance between grains (l) is dynamic in this pressurized system. For disordered systems which consist mostly of capacitive grain boundaries characterized by Sakamaki et al 40 , it is reasonable to modify the equation for capacitance to reflect changing conditions in this system (equation 6).
C =capacitance, A surface area, l (grain boundary) length and P pressure.
9
The brick layer model is conceptually sound even in simple form, but has been updated [41][42][43][44][45] in literature to account for grain size, geometry, percent of grain contact and so on, leaving the door open to perhaps model such systems better in the future.
With a grain boundary conductance that is much lower than the bulk as it is here, admittance equations may characterize a system in part (equation 11) when σb>>σgb or 12 where σgb>>σg . Ψt is total admittance, x is volume fraction, g is grain, gb grain boundary, for systems with known volumes of grain and grain boundaries. Pore space was measured using the Archimedes method, and it could possibly be used, in part, to determine the aforementioned volumes.
Sample pellets formed at maximum pressure (0.75 GPa typically) had a measured porosity between 18-30%, without the use of binders. While not elastic, some relaxation of pelletized ceramics was expected and observed when released from the sample holder, therefore voids and pore space are larger in a formed pellet than when subjected to pressure.
While it is easy to make the assumption that the grains get closer together and more compact, and that conduction may improve, the underlying mechanisms are quite complex and interesting.
Application of pressure up to 1.0 GPa is not alone sufficient to cause a crystallographic change in the samples, so we can say that the changes in electrical measurements are a result of mechanical forces.
Twice annealed LATP conductance increased abruptly between 0.45 and 0.60 GPa ( figure 7F). it is speculated that the inter-grain structure in the sample cylinder collapses above 0.
Scanning electron microscopy (SEM)
Micrographs were taken at each experimental step when possible. Upon annealing, grains appear better defined. Calculations of average diameter indicate smaller average sizes for pressed, annealed, and annealed then pressed powders which were expected, likely due to mechanical stress and exploitation of surface cracks and imperfections by the heating process and/or pressure.
Consequently this changes the overall grain size distribution, therefore a resulting change in grain and grain boundary surface areas. Most notably for LATP, an abrupt change in grain boundary impedance was noted probably due to collapse in intra-grain structure which occurred at about 0.45GPa.
SEM images of samples illustrate what would be expected, that the grains are closer together with consolidation of grain boundaries which range are quite large, with voids for unconsolidated samples to closely packed grains for samples subjected to pressure to form pellets. Moreover some physical changes can be observed for the grains themselves for LAGP, LLZO and LATP. Visually the consistency of the ceramic powders differs from each treatment.
TGA Analysis
Powders as received were pressed into pellets to 0.75GPa prior to TGA analysis. All powders showed loss of weight due to increase of temperature, possibly due to retained water 60-62 , trapped gases 25 , decomposition 63 , or removal of synthesis residuals 62 . This alone could change the outcome for powders that were annealed then pressed compared to powders that were pressed, and then annealed because these contaminants could be reintroduced when pelletizing annealed powder.
Samples were ramped 10°C / min to annealing temperatures, held isothermally 60 minutes under nitrogen then cooled to 50°C (figure 9). At this step the flowing gas was switched (nitrogen, argon, or air) and held isothermally for 48 hours to observe weight changes summarized in Table 2.
Improvement in conductance by gas replacement in the pores has been speculated elsewhere, and should not be ruled out. Breathing quality air A was used in place of oxygen. Samples were allowed to stand 48 hours after cooling to observe weight change. The decrease in weight suggests possible consolidation of grains and decreased pore space of pellets, as well as an indication of pore space availability in each pressed ceramic pellet.
Discussion
In all ceramic systems described, grain boundary impedance dominates each system. This results in a rather low ionic conductivity overall for a disordered powder system. This is mitigated somewhat by the process of compressing the powders into a pellet, and by annealing the powders.
Other possible methods would be ball milling which would be a consideration when using commercially available powders if a more uniform system is not available for purchase.
Without sintering, changes were observed at the grain boundary regions utilizing a novel application of IS such as modulus which has been in use but not a great deal over the years Modulus allows the observation of the grain boundary capacitance shift due to relaxation processes at the grain boundary 64 Here we have characterized and compared the changes in conductivity and other electrical properties of five commercially available lithium conducting ceramics, the interactions at the grain boundaries, and attempting to resolve these changes under different pressurized conditions without effecting a structural change in the bulk. A better understanding of the mechanisms involved is a step forward in realizing the production of solid state batteries, their practical and reliable use. Figure 1 The steel shaft is inserted into the tube, and O-rings were used on both shaft and tube to facilitate vacuum Figure 2 In part due to scaling, Nyquist plots can be inconvenient to discern phases as the semi circles Tables.pdf | 4,839.8 | 2021-06-25T00:00:00.000 | [
"Materials Science"
] |
Performance evaluation of Indonesia's large and medium-sized industries using Data Envelopment Analysis method
ABSTRACT
INTRODUCTION
The industrial sector is essential to Indonesia's economic growth, as it is the backbone and engine of the nation's economy.Foreign exchange profits may arise from the industrial sector's potential to absorb labor and from export operations.Several abilities of the industrial sector are as follows: (i) taking in labor (from businesses that require a lot of labor, capital, knowledge, and technology); (ii) comparatively high output levels; and (iii) its ability to provide links and supplies to other sectors [1].A business sector known as the processing industry is involved in the mechanical, chemical, or manual conversion of raw materials into finished or semi-finished items, or in producing higher-value goods from low-value raw materials with characteristics more akin to those of the end user.These activities include assembly work and industrial services.The industrial processing sector is divided into four categories: home industries (1-4 employees), small industries (5-19 employees), medium industries (20-99 employees), and large industries (≥ 100 employees) [2].
The processing industry sector's performance was predicted to improve and recover by 3.4% in 2021.This industry contributes to the 3.7% rise in Indonesia's GDP (Gross Domestic Product).One of the pillars of industry's expansion and competitiveness is the expanding ecosystem of industrial activity that fosters the development of industry.Industry boosts employment quality and productivity on a national level.Adequate rules, commercial prospects, resource accessibility, a favorable investment and business environment, and the availability of industrial human resources are all necessary for industrial optimization.Industry has a significant multiplier effect and adds value to the economy.All sectors in Indonesia can benefit from the distinctive outcomes of the industrial sector, which has both forward and backward connections [3].
As the sector with the highest contribution to the national GDP on a consistent basis, the processing industrial sector is important for developing the economy.In addition, export and investment values are achieved, which is a testament to the industrial sector's outstanding performance.The chemical, food and beverage, apparel, electronics, pharmaceutical, and medical equipment industries are included in the seven priority industrial sectors.As this industry contributes over 60% of the country's GDP, the ultimate goal is for Indonesia to rank among the top 10 economies in the world by 2030.The government is paying attention to the industrial sector in order to carry out initiatives and enhance performance to boost the industrial sector's competitiveness and hence spur national economic growth through a variety of strategic initiatives [4].
An important aspect of starting and growing a business is measuring performance.Businesses constantly assess their work performance in light of their advantages and disadvantages.The organization needs to monitor performance for the following reasons: (i) develop the economy and its operation efficiency in a sustainable way; and (ii) supply data for decision-making [5].Performance evaluation has an important role in the development of a company, including: (i) determining the efficiency and economics of sustainable operations; (ii) providing information as a basis for company decision-making; and (iii) improving the company's operational processes.Its role becomes very important if standards or benchmarks are not presented for evaluation.One technique for evaluating performance is data envelopment analysis, or DEA.Decision-making units (DMUs) are compared with each other using the DEA approach.These DMUs can include business units, decision-making units, companies, organizations, projects, or individuals [6,7].
The DEA approach is applied to a homogeneous group of DMUs with different inputs and outputs in order to determine their relative efficiency.This concept is a non-parametric linear programming (LP) technique.When evaluating DMUs and allocating resources to support organizational strategy and objectives, the DEA is a useful tool for businesses and organizations.Thus, DEA is a tool for decision support that may be used for planning, controlling, and monitoring management.The efficacy of DEA as a method for benchmarking and performance evaluation to improve organizational operations has been established.In order to compare a unit with its equivalent peers, DEA is used as a benchmarking technique to produce a performance score that shows how far away the unit is from best practices [8].
Businesses in large and medium-sized industries (LMIs) encounter numerous challenges as they grow.The following are several issues: the use of outdated technology in the production process, the dearth of manufacturing facilities, the low quality of raw materials, the low level of sales of products that do not meet the aim, the quality and availability of human resources with inadequate training or education, restricted network of distribution, a lack of advertising, low financial administration, capital resource limitations, restrictions on the acquisition of raw materials, high production prices, and a slow rate of product innovation, weaker IDR compared to USD exchange rates, high inflation, the nation's economy in decline, government initiatives to cut back on public subsidies, unpredictable internal political conditions, a large number of new competitors, fierce competition, quick product innovation and aggressive competitor marketing, a wide range of options for consumers purchasing the same product, low pricing demands from customers, customer complaints, growing raw material prices, the need for high-quality products at prices that are competitive, and a reduction in the supply of raw materials.Consequently, it is imperative to consistently evaluate the performance of large and medium-sized industries (LMIs).This will allow LMI to understand the company's strengths and weaknesses.They'll be able to recognize the opportunities in the industry's business.As a result, they will be better equipped to manage their business and will also be more capable of competing in the global marketplace [1].
The purpose of this research is to measure the performance of large and medium-sized industries (LMIs) in Indonesia.LMIs have a strategic role as the main engine and driver of the economy.Measuring LMI performance is very necessary so that LMI can grow and develop sustainably.The method used in this research is data envelopment analysis (DEA).Several reasons underlying the choice of the DEA method in this research are as follows: (i) DEA is a method for measuring performance; (ii) DEA is the non-parametric linear programming technique; (iii) DEA is used to determine comparisons between DMUs with multiple inputs and outputs; and (iv) DEA is a tool instrument applied to measure the relative effectiveness of the same DMU type.Therefore, DEA serves as a classification and ranking tool.
Performance Evaluation
Performance evaluation is essential to a company's ability to operate successfully in the face of a dynamic commercial environment.For the business to survive, it is therefore a necessary function.The definition of performance evaluation is the essential procedure for gauging an action's effectiveness and qualification.Efficiency, namely the efficiency determined by the needs and preferences of the client (customer satisfaction), is employed in the framework of performance evaluation.The goal of performance evaluation is to provide information for the company to make decisions while continuously monitoring the economy and efficiency of the business's operations.Performance evaluation is a commonly employed technique to enhance organizational procedures.In the event that criteria or benchmarks are not provided for assessment, this approach becomes crucial [7,9].Production efficiency is a key indicator of productivity.Reduced productivity can cause excessive inflation, an unfavorable balance of payments, and sluggish economic growth at the national level.Reduced productivity inside the company may lead to higher production costs and a decline in the company's ability to compete [10].
Efficiency management is becoming more and more crucial to enhancing the sustainability of the chain.The objective of an organization's performance efficiency management strategy is to optimize output using the fewest resources possible or the fewest inputs possible to produce a given quantity of output.It implies that while measuring efficiency, numerous inputs and various outputs would be taken into account [11].For a firm to grow and flourish, it is essential to evaluate its business performance.Internally assessing a company's existing operations and comparing them to similar organizations and best practices are the two main goals of performance evaluation.In addition to helping a firm better satisfy consumer expectations and requirements, this will also help it: (i) identify its strengths and weaknesses; (ii) better manage its business; and (iii) determine potential business opportunities to improve operations and activities, such as developing new products, services, and processes [12].
Small, Medium, and Large Sized Industries
A national strategy aims to establish small and medium-sized industries (SMIs).SMIs are crucial for promoting economic expansion through workforce-intensive operations, corporate expansion, and revenue generation.Building SMIs requires strengthening the industries that make up the value chain.The core, allied, and supporting industries make up this group.SMIs with advantageous locations have the ability to transform a comparative advantage into a competitive advantage.This is being accomplished through a number of initiatives, such as (i) strengthening the connections between SMI clusters across industries and (ii) encouraging partnerships between SMIs and large companies.Consequently, it will establish a network structure that fosters cooperation between related, auxiliary, and primary businesses.The term "micro, small, and medium-sized industry" (MSMI) refers to a trading business in which individuals or corporate entities run it.This also includes small-or micro-scale business requirements.Law No. 20, 2008, lists the MSMI regulations.A company with a monthly net worth of less than IDR 50,000,000 is considered to be in the microindustry.This computation does not account for the value of buildings or commercial space.A firm with a net worth of less than IDR 300,000,000 annually that is run independently, without the assistance of a corporate organization, is considered tiny [13].
Industries classified as large require substantial sums of capital to operate.The kind of goods produced determines this capital.High-tech products require progressively more expensive prices to operate.Additionally, this industry provides goods that other industry types-such as small-or medium-sized industries-need desperately.A major industry is defined under Law No. 3 of 2014 concerning industry as one that employs more than 100 people or has an investment worth of more than IDR 10 billion (excluding land and buildings).Multinational corporations in large industries typically attract investors from different nations.Big businesses work together with associated parties that produce similar goods to these massive industrial products.Finishing touches are typically provided in large industries.Any company will take part in the corporation.There are many different types of partnerships in large industries.The industry's advancement greatly benefits from this relationship [14].
Data Envelopment Analysis Method
Charnes, Copper, and Rhodes introduced the data envelopment analysis (DEA) method.This method developed the efficiency estimation method invented by Farrell, which involves comparing each production unit to the efficient production frontier.It is not necessary to provide a functional link between the inputs and outputs in order to use this idea [15].DEA is a benchmarking tool that can be used to evaluate performance.As a result, less effective production techniques are "enveloped" by the best-practice production frontier.Because DEA makes no assumptions regarding the production function's functional structure, it is less likely to lead to misspecifications [16].
Decision-making units (DMUs) that use numerous inputs to produce several outputs might use data envelopment analysis (DEA).This method is a mathematical programming technique to assess the relative efficiency of their operations.In terms of benchmarking and performance evaluation, the DEA approach's viability has been demonstrated.The DEA model under discussion is solved to obtain the efficiency score and benchmarking data for each DMU.The efficiency score is the optimal value of the objective function, and the projection point that the optimal solution yields is in line with the benchmarking data [17].DEA represents a method for nonparametric linear programming.The goal of DEA is to assess a set of similar organizations or decision-making units (DMUs) in terms of their relative efficiency.The technique known as DEA uses a variety of inputs and outputs to calculate the efficiency score.An efficiency frontier is created using a set of effective DMUs that serve as best practices, based on the efficiency index.Measurement of the distance from the efficiency frontier allows one to determine the efficiency level of inefficient DMUs.A production process can serve as an appropriate representation of the DEA approach [18,19].
The DEA method is used to compare the technical efficiency (TE) of various decision-making units (DMUs).TE is a term used to describe the optimal use of resources during the production process, much like physical productivity.A certain set of inputs yields the maximum output.Physical indications are the main focus.Constant returns to scale (CRS) and variable returns to scale (VRS) are two alternative hypotheses that allow for the non-parametric development of a DEA production frontier.Furthermore, an input-oriented model is applied when DMUs have greater control over inputs.The aim is to minimize resource utilization while fulfilling a particular productivity level.On the other hand, when DMUs concentrate on optimizing output from a fixed level of inputs, they apply an output-oriented approach [20][21][22].
Equations ( 1) through ( 4) of the linear programming formula are present in the DEA model.The model's output criteria are specified at the current level and are designed to minimize input.θ* = min θ .subjected to the following restrictions: (1) Equation (1) represents the objective function that maintains existing output levels while minimizing inputs.Equation (2) represents the input constraint, which contains several restrictions for every input.Equation (3) represents the output constraint, which contains several restrictions for each output.The unknown weights (λj) are shown in equation ( 4).
Among the n mentioned DMUs is DMU0.Xi0 and Yr0, respectively, represent the r-input and r-output of DMU0.λj represents the unknown weight, where j = 1,..., n.The solution variable, with the notation θ, represents the efficacy value.If θ is equal to 1, then the solution will be feasible.θ* ≤ 1 at its optimal value.If θ* = 1, then DMU0 is situated at the optimal criteria limit, indicating that a proportionate reduction in the current input level is not possible.The DMU0 is situated at the edge.If θ* is less than 1, then the solution is not feasible.Therefore, the same proportion of θ* can reduce the input.To achieve the same amount of output, less input is required [23].Notations of the DEA model are presented in Table 1.The DEA model arises from three constraints.The model can be expressed in equations ( 5) through (8).The objective function is represented in equation ( 1).This function minimizes inputs while maintaining current output levels.These exactly have m+s+1 constraints.The first constraint consists of m-different constraints for each other's input (Eq.6).The second constraint consists of s-different constraints for each other's output (Eq.7).There is just one constraint remaining (Eq.8), which is the quantity of unknown weights (λj).Table 2 provides a description of the notation used in equations ( 5) through (8).θ* = min θ Subject to (5) 9) through ( 16) provide a more detailed description of the DEA model.Equation ( 9) represents the objective function that maintains existing output levels while minimizing inputs.Equations ( 10) and (11) represent the first and second input constraints.The last input constraint is presented in equation (12).Equations ( 13) and ( 14) represent the first and second output constraints.The last output constraint is presented in equation (15).The quantity of unknown weights (λj) is presented in equation (16).Table 3 provides a description of the notation used in equations ( 9) through (16).
Research Methodology
The following are the steps that this research uses to solve problems: phases 1 (research design and definition), phases 2 (preparation, data gathering, and data assessment), phases 3 (data processing), phases 4 (result analysis), and phases 5 (conclusion).Phases of preparation, gathering, and assessing data, namely: (i) categorizing input and output data for large and medium-sized industries (LMIs); and (ii) figuring out LMI input, output, and DMU (decision-making unit) data.The steps involved in data processing are as follows: (i) standardization of input and output data; (ii) constraints; and (iii) efficiency.The DMU and DMU under evaluation data are entered into Microsoft Excel spreadsheets.Microsoft Excel is a solver for DMU efficiency calculations.The analysis of results includes: (i) efficient DMU; (ii) DMU is inefficient; (iii) DMU classification; (iv) factors causing increases and decreases in LMIs performance; and (v) LMIs development strategy.Figure 1 displays the research method flowchart.
Input and Output Variables
The data of Indonesia's large and medium-sized industries (LMIs) in 2021 was applied in this research [24].These data include: added value/cost of production factors, indirect taxes, number of workers, input costs, number of companies, proportion of workers in the manufacturing industrial sector, added value/market prices, and production index.DEA is a linear programming technique that deals with many efficiency parameters within an integrated model.Multiple efficiency measurements are associated with input and output variables.The variables that are typically minimized are called input variables.These include things like expenses, labor, materials consumed, etc.The variables that are typically maximized are called output variables.Examples of these are profit, revenue, and products.Prior to applying the DEA approach, input and output parameters are categorized and selected [13].Based on these regulations, six input variables (X1 to X6) and two output variables (Y1 and Y2) can be determined, as shown in Table 4.
Industrial Classification Based on KBLI and DMUs
The Standard Classification of Indonesian Business Fields (Klasifikasi Baku Lapangan Usaha Indonesia, or KBLI) is one of the standard classifications published by the Central Statistics Agency (BPS) for economic activities.A strategy of grouping utilized in statistical procedures and economic communication is called classification.When data is classified, it is arranged into classes that are as similar to one another as feasible based on predetermined guidelines or standards.KBLI offers an extensive collection of frameworks for classifying economic activities in Indonesia, making it usable for conducting statistics, basic planning, policy evaluation, and licensing [25].ISIC (International Standard Industrial Classification of All Economic Activities) serves as the foundation for the industrial classification utilized in the processing industry survey.Under the name Standard Classification of Indonesian Business Fields (Klasifikasi Baku Lapangan Usaha Indonesia, KBLI), this classification has been adjusted to better suit Indonesia's demands.The standard business field code of an industrial company is determined by its primary production, or the type of commodity produced with the highest value.If an industrial company produces two or more types of commodities with the same value, then the main production is the commodity produced in the largest quantity [24].The DEA method applies a decision-making unit (DMU) to perform each process, unity, and business activity in its calculation [23].In this research, KBLI industries are DMUs.Furthermore, the identity of each DMU is adjusted to the KBLI industry code.The DMU identity for the code 10 food industry is KI-10, the DMU identity for the code 11 beverage industry is KI-11, and so on.Table 5 presents 24 industry classifications based on KBLI and Decision-Making Units (DMUs).
Input and Output Data
An overview of all the data utilized in this research is given in Table 6.There are two output variables (Y1 and Y2), six input variables (X1 to X6), and 24 DMUs in this set of data.
Standardization of Input and Output Data
Data standardization is carried out to standardize data values whose format is inconsistent when input using a certain format, until all data becomes standard.The standardization of the data is presented in Table 7.
Utilizing Microsoft Excel Spreadsheets for Data Processing
A Microsoft Excel spreadsheet is used to organize research data and results, as follows: DMUs, input and output data, unknown weights (λ), constraints, reference set, and DMU under evaluation.These components are presented in Table 8 and Table 9.The input-oriented DEA Envelopment Model is used to calculate efficiency scores.Next, a score for each DMU's efficiency was obtained by using the MS Excel Solver function.
Analysis of Efficient and Inefficient DMUs
The efficient and inefficient status of the DMUs can be determined based on the efficiency score results.An efficient DMU has an efficiency score equal to one, and an inefficient DMU has a score of less than one.The factors that cause DMUs to have efficient and inefficient statuses are explained as follows: An efficient DMU always generates more outputs with equal input consumption or produces a given quantity of outputs with lower input consumption [23].In contrast, an inefficient DMU consumes more input to produce a given amount of output.Table 10 presents the analysis of these DMU statuses.There are 12 efficient DMUs, namely: KI-10, KI-11, KI-12, KI-18, KI-19, KI-20, KI-21, KI-22, KI-24, KI-28, KI-29, and KI-33.Inefficient DMUs also consist of 12 DMUs, namely: KI-13, KI-14, KI-15, KI-16, KI-17, KI-23, KI-25, KI-26, KI-27, KI-30, KI-31, and KI-32.
DMU Classification
The relative effectiveness of the same type of DMU is measured using a tool called Data Envelopment Analysis (DEA).The concept of the approach is to specify the relative effectiveness of the production frontier by maintaining the DMU inputs or outputs constant.This process applies a mathematical model and statistical data.The DEA model is used to project each DMU onto the DEA production frontier.The relative effectiveness of each DMU is then calculated by comparing its divergence from the DEA effective frontier [26].By comparing DMU outcomes, DEA serves as a classification and ranking tool.The consistency of the results demonstrates the validity of the DEA as a classification and ranking tool.DEA is therefore validated as a method of ranking and classification [27].
Figure 2 presents the efficiency score (ES) value for each decision-making unit (DMU).The x axis shows the type of DMUs, and the y axis shows the ES value for each DMU.The efficient and inefficient status of the DMUs can be determined based on the efficiency score results.An efficient DMU has an ES equal to one (high score), and an inefficient DMU has an ES of less than one (low score).The factors that cause DMUs to have high and low efficiency scores are explained as follows: A DMU with high efficiency scores always generates more outputs with equal input consumption or produces a given quantity of outputs with lower input consumption [23].In contrast, a DMU with low efficiency scores consumes more input to produce a given amount of output.This research indicated that there are 12 DMUs with high efficiency scores and 12 DMUs with low efficiency scores.
Large and Medium-Sized Industry Classification Categories
Based on the DMU classification, the Large and Medium-Sized Industry (LMI) classification categories can be determined as presented in Table 12.The types of industries included in category 1 are as follows: (i) food; (ii) beverages; (iii) tobacco processing; (iv) coal and petroleum refining products; (v) chemicals and products derived from them; (vi) pharmaceuticals, chemical medicinal products, and traditional medicines; (vii) plastic, rubber, and rubber-based products; (viii) primary metal; (ix) machinery and equipment ytdl; (x) automobiles, semi-trailers, and trailers; (xi) services for installing and repairing machinery and equipment; (xii) printing and duplicating recorded media.The types of industries included in category 2 are as follows: (i) textiles, (ii) clothes, (iii) leather, leather goods, and footwear; (iv) wood, furniture constructed of wood and cork, and objects woven from bamboo, rattan, and similar materials; (v) paper products; (vi) minerals without metals; (vii) items made of metal, not machinery and equipment; (viii) electronics, optics, and computers; and (ix) electrical equipment.The types of industries included in category 3 are as follows: (i) another mode of transportation; (ii) furniture; and (iii) another processing.
Analysis of Improvements and Decreases in LMI Performance
According to the analysis's findings (applying a cause-and-effect matrix), the factors causing the increase and decrease in the performance of large and medium-sized industries (LMIs) can be identified.The following factors contributed to LMI's improved performance: price reductions, variable product prices (bargaining), availability of newly available technologies, strategic location, high-quality products, responsiveness to market demands, focused marketing skills, benchmarking to evaluate the state of the market, accessibility of human resources, as well as their knowledge, abilities, and experience, accessibility of raw supplies, machines, and manufacturing sites that comply with requirements, accessibility of working capital, accessibility of bank credit, the role of non-governmental organizations, the function of the local government and relevant institutions, the presence of institutions for research and development, education, and training; the possibility of exporting goods abroad; purchasing power; excellent relationships with suppliers; assistance in selecting raw material suppliers; the entry of competitors that promotes an increase in both quantity and quality; and having excellent relationships with customers.
The factors that cause a decline in LMI performance are as follows: poor marketing strategy, the quantity of goods sold that don't meet the goal, the standard and accessibility of poorly educated and trained human resources, the lack of manufacturing facilities, the use of outdated technology in the production process, poor raw material quality, restrictions in obtaining raw resources, limitations on capital resources, poor financial management, high production costs, and sluggish product innovation, insufficient network of distribution, the deficiency in promotions, reduced rate of the IDR relative to the USD, a high rate of inflation, a deteriorating national economy, government initiatives to cut back on public subsidies, an unpredictable domestic political environment, competitive pressure, the entry of numerous new competitors, Customers' demands for low prices, competitors' aggressive marketing campaigns, quick product innovation, and a large range of options for the same product, the need for high-quality goods at prices that are becoming more competitive, consumer complaints, increasing costs for basic resources, and a decrease in their supply [1].
Large and Medium-Sized Industrial Business Development
Regarding the classification categories of Large and Medium-Sized Industry (LMI), industries in category 1 have effective performance.Meanwhile, industries in categories 2 and 3 have ineffective performance.Therefore, in order to be effective, efforts must be made to develop the businesses of industries in these two categories.In general, two important factors are required for developing LMI businesses, namely, internal and external factors.The company's strengths and weaknesses are determined by internal factors.The opportunities and threats facing the companies are determined by external factors.Identification of internal factors of LMI is presented in Table 13 f.Facilities for production are still lacking.g.Outdated technology used in the production process; restrictions on obtaining raw materials; and limitations on financial resources.h.There is still a deficiency in the distribution network in some regional areas.i.Production expenses are high.j.Product innovation is sluggish in comparison to competitors.
The identification of external factors in LMI is presented in Table 14
CONCLUSION
An efficient DMU has an efficiency score equal to one, and an inefficient DMU has a score less than one.There are 12 efficient DMUs and 12 inefficient DMUs.Both types of DMU have the same percentage, namely, 50% each.There are three DMU classification categories based on efficiency score (SE), namely: Category 1 (SE = 1), Category 2 (SE = 0.9986-0.9998),and Category 3 (SE = 0.9971-0.9974).The percentages for each category are 50%, 37.5%, and 12.5%.Various factors are needed to develop large and medium-sized industries (LMI).In general, there are two important factors in developing an LMI business, namely: (a) internal factors that determine the strengths and weaknesses of an LMI business; and (b) external factors that determine LMI business opportunities and threats.
Research
Design and Definition Preparation, Data Gathering, and Data Assessment: Categorizing input and output data for LMIs; and (ii) Figuring out LMI input, output, and DMU (decision-making unit) data.Data Processing Data settings in Microsoft Excel spreadsheets consist of columns: (i) Data of DMU and DMU under evaluation; (ii) input and output data; (iii) constraints; and (iv) efficiency.DMU efficiency calculations using DEA Method Conclusion Result Analysis: (i) efficient DMU; (ii) DMU is inefficient; (iii) DMU classification; (iv) factors causing increases and decreases in LMIs performance; and (v) LMIs development strategy.
Figure 2 .
Figure 2. Efficiency score (SE) for each DMU . Strengths of internal factors consist of (a) the market's needs; (b) brainstorming; (c) targeted marketing; (d) human resources (HRs); (e) HR expertise, skills, and experience; (f) the production process; (g) engines and production facilities; and (h) capital, credit, outcome, location, and pricing.Weaknesses of internal factors consist of (a) marketing strategy; (b) sales target; (c) both human resources and quality; (d) personnel with education and training; (e) raw material quality; (f) production facilities; (g) technology of the production process; restrictions on raw materials; and financial resources; (h) distribution network; (i) production expenses; and (j) product innovation.
. The opportunities of external factors consist of (a) the role of local government, related agencies, and non-governmental organizations (NGOs); (b) institutions of research, development, education, and training; (c) management information systems; (d) new technologies; (e) population growth; (f) export opportunities; (g) relationships with suppliers and customers; (h) the emergence of competitors; and (i) the number of regular, new, and non-fixed customers.Threats from external factors consist of (a) the inflation rate; (b) the country's economy; (c) new competitors; (d) business competition; (e) goods invention; (f) promotion; (g) product price; (h) customers' desire; (i) superior product;(j) customer complaints; (k) raw material price; and (l) raw material[28].
Table 1 .
Descriptions for DEA symbols
Table 4 .
Input and output variables
Table 5 .
Industrial classification (IC) based on KBLI and DMUs
Table 6 .
Input and output data
Table 7 .
Standardization of data
Table 8 .
Data preparation in microsoft excel spreadsheet
Table 9 .
Constraints, reference set, DMU under evaluation, and efficiency
Table 10 .
Analysis of efficient and inefficient DMUs
Table 10 .
Large and medium-sized industry classification categories
Table 11 .
Internal factors Working capital accessibility, bank credit accessibility, product quality, local and export product scales, strategic location, product pricing flexibility (bargaining), and the availability of price breaks.a. Poor marketing strategy b.Low sales of non-target products c.Low quality of trained and educated human resources.d.Low presence of personnel with education and training.e. Inadequate raw material quality.
Table 12 .
External factors Opportunities Threats a.The role of local government and related agencies.b.The existence of research and development institutes.c.The existence of education and training institutions.d.The role of non-governmental organizations (NGOs).e.The sophistication of management information systems.f.The adoption of new technologies.g.The rapid growth of the population.h.Export opportunities overseas.i. Positive relationships with suppliers.Assistance in choosing raw material and material suppliers.j.The emergence of competitors prompts an increase in quantity and quality.k.A large number of regular customers.l.A large number of new and non-fixed customers.m.A positive rapport with customers.a.A high rate of inflation.b.The country's economy is declining.c.A large number of new competitors have emerged.d.The competition is fiercely strict.e. Quick invention in competing goods.f.Fierce competition promotion.g.A variety of ways for customers to purchase the same item.h.Customers' desire for low prices.i. Requirements for a superior product at a more affordable pricing.j.Customer complaints that are filed.k.Price increases for raw materials.l.Reduction in raw material availability. | 6,904.8 | 2024-06-30T00:00:00.000 | [
"Economics",
"Business",
"Engineering"
] |
Identification of herbarium specimen sheet components from high‐resolution images using deep learning
Abstract Advanced computer vision techniques hold the potential to mobilise vast quantities of biodiversity data by facilitating the rapid extraction of text‐ and trait‐based data from herbarium specimen digital images, and to increase the efficiency and accuracy of downstream data capture during digitisation. This investigation developed an object detection model using YOLOv5 and digitised collection images from the University of Melbourne Herbarium (MELU). The MELU‐trained ‘sheet‐component’ model—trained on 3371 annotated images, validated on 1000 annotated images, run using ‘large’ model type, at 640 pixels, for 200 epochs—successfully identified most of the 11 component types of the digital specimen images, with an overall model precision measure of 0.983, recall of 0.969 and moving average precision (mAP0.5–0.95) of 0.847. Specifically, ‘institutional’ and ‘annotation’ labels were predicted with mAP0.5–0.95 of 0.970 and 0.878 respectively. It was found that annotating at least 2000 images was required to train an adequate model, likely due to the heterogeneity of specimen sheets. The full model was then applied to selected specimens from nine global herbaria (Biodiversity Data Journal, 7, 2019), quantifying its generalisability: for example, the ‘institutional label’ was identified with mAP0.5–0.95 of between 0.68 and 0.89 across the various herbaria. Further detailed study demonstrated that starting with the MELU‐model weights and retraining for as few as 50 epochs on 30 additional annotated images was sufficient to enable the prediction of a previously unseen component. As many herbaria are resource‐constrained, the MELU‐trained ‘sheet‐component’ model weights are made available and application encouraged.
data digitisation, that is the manual labour required for extraction of these data. These techniques are increasingly being used to extract text and trait-based data from specimen images (Carranza-Rojas et al., 2017;Ott et al., 2020;Triki et al., 2022;Younis et al., 2020).
Greater understanding of the accuracy and efficiency of computer vision techniques as applied to different kinds of herbarium specimens is necessary to understand the potential application of these methods for data mobilisation.
Herbarium specimens and their associated collection data contain a wealth of biodiversity data; documenting morphological diversity, geographic distributions, biome or vegetation occupancy and flowering and fruiting periods of the taxon represented on the specimen, and how these may change over time. These typically dried pressed plant samples are secured to archival sheets, and are accompanied by label(s) on the sheet detailing collector, location and taxon and occasionally contain other elements such as stamps, handwritten notes (outside the label) and accession numbers ( Figure 1). Large-scale digitisation efforts are required in order to provide access to herbarium specimen-associated data (Carranza-Rojas et al., 2017) and to ensure these data are FAIR (findable, accessible, interoperable and reusable; Wilkinson et al., 2016). Critical to the success of the digitisation endeavour is an efficient, scalable, adaptable and cost-effective workflow. An 'object to image to data' workflow, which involves the generation of a digital image of the specimen followed by the transcription of data from the digital image, is used in large-scale digitisation initiatives such as that undertaken by the National Herbarium of New South Wales in Australia (Cox, 2022). The visibility of the specimen label data in the corresponding digital image 'allows the data capture process to be undertaken remotely, both in distance and time' (Haston et al., 2015, p. 116). Digitising enables creation of a 'digital specimen' (Nieva de la Hidalga et al., 2020): generating a digital image of each specimen sheet, manually transcribing some or all of the data present on the specimen label into a searchable database, and then sharing that information for reuse via online biodiversity repositories such as the Atlas of Living Australia (ALA; https://www.ala.org.au/), Global Biodiversity Information Facility (GBIF; https://www.gbif.org/) and iDigBio (https://www.idigb io.org/).
In recent years, research has focussed on optimising specific tasks within such digitisation workflows. Particularly evident is the desire to minimise or remove manual intervention, speed up the process, improve accuracy and reduce costs, particularly with respect to label data transcription (e.g. Granzow-de la Cerda & Beach, 2010;Walton, Livermore, & Bánki, 2020;Walton, Livermore, Dillen, et al., 2020).
Studies have tackled streamlining the imaging process (e.g. Sweeney et al., 2018;Tegelberg et al., 2014) and extending the use of digital images (e.g. Carranza-Rojas et al., 2017;Corney et al., 2018;Triki et al., 2021;Unger et al., 2016;White et al., 2020). The task of interest here is that of harvesting label data from a specimen sheet digital image (SSDI). Important information is held not only on the formal institutional labels but is also present in handwritten notes on the labels and on the specimen sheet itself. The research value of these specimens is maximised when all data present on a specimen and derived digital image are transcribed verbatim, those data are then enriched and/or interpreted and recorded in the collection management system, so that specimen data becomes searchable F I G U R E 1 Examples of specimen sheet digital images from the Melbourne University Herbarium (MELU) (left) MELUM012346a-d (https://online.herba rium.unime lb.edu.au/colle ction objec t/MELUM 012346a); (middle) MELUD121701c (https://online.herba rium.unime lb.edu.au/colle ction objec t/MELUD 121701c); (right) MELUD105252a (https://online.herba rium.unime lb.edu.au/colle ction objec t/MELUD 105252a). and available to other researchers. A first step toward reducing the manual labour-intensive task of initial verbatim data transcription is building a means for artificial intelligence to identify areas where these data are present on the SSDI.
Much of the earlier literature addressing this task concentrates on extracting data from labels via optical character recognition (OCR). Some applied OCR software to the whole SSDI, (e.g. Drinkwater et al., 2014;Haston et al., 2012;Tulig et al., 2012). Other studies identified the label first and then applied OCR; in these cases, selecting or 'marking up' the label area was either (a) manual, (e.g. Alzuru et al., 2016;Anglin et al., 2013;Barber et al., 2013;Dillen et al., 2019;Haston et al., 2015); (b) vaguely described, (e.g. Heidorn & Wei, 2008;Takano et al., 2019Takano et al., , 2020; or (c) proposed as future work (i.e. not actually implemented) (e.g. Haston et al., 2015;Kirchhoff et al., 2018;Moen et al., 2010). Some investigations (e.g. Alzuru et al., 2016;Haston et al., 2015;Owen et al., 2019) demonstrated that applying OCR tools to the label-only images was more effective, faster and more accurate, than applying OCR tools to the whole SSDI. Owen et al. (2019) took this a step further and found that running OCR over individual text lines cropped from a label image was faster than processing the whole label. These findings reinforce the value of pursuing the current research, for having a semiautomated tool which identifies components of an SSDI, which can then be cropped out and further analysed/transcribed, holds potential for downstream elements in the SSDI data collection to be more efficient. Automated identification of components of specimen images lends itself to the application of computer vision (CV) models.
In recent years computer vision models have become more sophisticated (for literature reviews see Hussein et al., 2022, Rocchetti et al., 2021, Wäldchen & Mäder, 2018. While some studies have applied CV methods to the analysis of the plant material, here the application of that technology to identify label and handwritten data is of most interest. Relevant forms of CV include object detection, classification, and semantic segmentation. Semantic segmentation is at the pixel level (Nieva de la Hidalga et al., 2022;Triki et al., 2022;White et al., 2020), whereas object detection methodology uses bounding boxes. And while there is 'some overlap between semantic segmentation and object detection' (Walton, Livermore, & Bánki, 2020;Walton, Livermore, Dillen, et al., 2020, p. 7), the latter can be used 'to identify and segment the different objects that are commonly found on herbarium sheets' (ibid., p. 7). One such tool is YOLO (You Only Look Once, Redmon et al., 2016). The third version, YOLOv3, was applied to SSDIs by Triki et al. (2020Triki et al. ( , 2022; in that study, 4000 SSDIs from the Freidrich Schiller University Jena herbarium Germany (JE) were manually marked-up and used to train a model to identify specific plant traits and organs. Nieva de la Hidalga This paper describes efforts to identify all components of a digital image of an herbarium specimen sheet by training a YOLOv5 object detection model on a subset of MELU SSDIs. As the building of this capacity is itself resource-intensive with respect to time, expertise and computational infrastructure-with smaller and medium-sized collections regularly resource constrained-the key aim was to derive and share practical guidelines to enable other herbaria to integrate such a model in their digitisation workflow. As such, the specific research questions were: 1. Can a model be built to separately identify labels, handwriting and other original information, taxon annotation labels and other components of a specimen sheet digital image? 2. How many images must be annotated to train an effective model?
3. What is required to enable cross-herbarium application of the model, that is, how many new annotated images are needed to retrain a model for a new feature or collection?
| ME THODOLOGY
To answer the first research question, an object detection model was built. The second research question was interrogated by testing model parameters. The third research question involved testing how many additional marked-up images were needed to retrain the model to accurately identify a new feature.
| Choosing YOLOv5
It is usually less labour-intensive to mark up training data for an object detection model than for a semantic segmentation model. With this in mind, taking into account the heterogeneity of the MELU SSDIs and that a substantial number of images would be required for any model, and considering the methods observed in the reviewed literature, an object detection model using YOLOv5 (https://github. com/ultra lytic s/yolov5) was chosen for this investigation (described more below). While a comparative study against other methods and models is a promising research area, the focus of this investigation was to comprehensively investigate and quantify what accuracy could be achieved using this specific model type.
YOLO works through a single neural network base to predict bounding boxes around objects and class probabilities for those boxes (Redmon et al., 2016). The model uses a series of convolutional layers to infer features from the whole image and reduce the size of the spatial dimensions. Detections for the bounding boxes and class probabilities are made on coarse spatial cells resulting from the convolutions and predictions of the same object in multiple cells are corrected using non-maximal suppression. Enhancements were made to the model in the release of YOLO9000 (Redmon & Farhadi, 2017) and YOLOv3 (Redmon & Farhadi, 2018). A Python implementation of this model using PyTorch was released in 2020, named YOLOv5 (Jocher, 2020). This implementation of YOLO was used for this project for its convenience and flexibility. All YOLO training and validation were run on the University of Melbourne's high-performance computing infrastructure using four Intel Xeon E5-2650 CPUs and a single NVIDIA Tesla P100 GPU.
| Phase 1. MELU-trained model
SSDIs from MELU were annotated. A subset of these images was used to train an object detection model, and the remaining SSDIs validated the accuracy of the trained model. Training and validation were then undertaken on various-sized training datasets, also varying modelling parameters. The output is the MELU-trained 'sheetcomponent model' and recommendations for how many annotated images are required to train an effective model.
| Annotating MELU images
Both medium-and high-resolution MELU SSDIs were downloaded from the publicly accessible collection portal (https://online.herba rium.unime lb.edu.au/). In the machine learning context, to 'annotate' a SSDI is to mark up the image to identify the areas of interest.
Contrary then to how the word 'annotation' is used in the herbarium curation field, here is it used to refer to the information from the marking up exercise.
The MELU curator, together with the analytic team, determined SSDI components, or areas of interest. The guiding principles of this part of the study were to maximise the potential value from the annotation exercise, and, therefore, all components on the SSDIs except for the biological specimen were annotated. In this way, this data could be made available for future (as yet unforeseen) summaries and investigations, and the object detection models for this investigation could be consolidated if the analysis suggested this was required. Figure 2 shows two examples of annotations on MELU SSDIs: (1) institutional label; (2) data on the specimen sheet outside of a label ('original data', often handwritten); (3) taxon and other annotation labels; (4) stamps; (5) swing tags attached to specimens; (6) accession number (when outside the institutional label). Also of interest were labels produced as part of the MELU digitisation process: (7) small database labels; (8) medium database labels; (9) full database labels. Further, artefacts from the imaging process that do not remain with the specimen sheet: (10) swatch; (11) scale. When a marked-up box is given one of the above names, they are usually called labels; however, given the context, they will be referred to as component categories here. Often there was more than one variability. In this paper, the phrase 'image-annotations' is used to refer to the set of annotations for a set of SSDIs, not the actual count of those annotations, that is, a total of 4371 image-annotations are available for use.
The annotation data were used to generate collection summaries to identify how common each component was on MELU SSDIs. These data were also used to locate the centre point of each of the SSDI components on the specimen sheets, using twodimensional kernel density estimations (KDE) to create locative 'heat maps'. In total, 282 training-validation dataset combinations (detailed in Table A1) provided indications for the impact of significant SSDI heterogeneity, and guidance for determining how many images must be annotated to train an effective model.
| Assessing trained models
Measures used to evaluate the accuracy of the trained models were: (i) precision; (ii) recall; (iii) F1; (iv) mAP0.5; (v) mAP0. 5-0.95; and (vi) confusion matrix. These measures are well described elsewhere (e.g. Redmon et al., 2016), but as mAP0.5-0.95 is used as the key measure in this work a brief description is worthwhile. Mean average precision (mAP) is effectively a combination of the precision and recall measures, it is between 0 and 1, and the higher the value the better the model. It effectively measures the overlap between the actual and predicted object boundaries (i.e. the 'intersection over union' (IoU)). For example, mAP0.5 is the mAP where the boundaries overlap by at least 50%. Then, mAP0.5-0.95 is the average mAP for IoU between 50% and 95% overlaps in 5% steps. These measures were visualised using the web-based tool Weights and Biases (https://wandb.ai/). Each component category (e.g. 'institutional label', 'swatch') is assessed separately for these measures, and the overall model measures are an arithmetic average across the component categories. When assessing a trained model, YOLOv5 assigns the 'best' epoch for a model is that with the highest value for (10% mAP0.5 + 90% mAP0.5-0.95).
| Phase 2. Applying the sheet-component model to unseen images
The purpose of Phase 2 is to go some way towards answering the It was not expected that the MELU-trained model would cope well with these components as it was not trained on them. Examples of image-annotations are in Figure 3. The annotation data was also used to locate the centre points of SSDI components, for comparison to MELU SSDI heat maps.
The MELU-trained model was initially tested using annotations from each of the nine herbaria separately and then tested against the combined set of benchmark dataset image-annotations. The heterogeneity of the SSDI components and layouts from each herbarium means an 'overall' result was less useful than individual results.
Precision, recall, mAP0.5 and mAP0.5-0.95 along with the confusion matrix were used for the assessment of model accuracy.
| Only using new annotations
The purpose of this group of tests was to determine whether retraining the MELU-trained model only on the additional imageannotations, without including the full MELU training dataset, could be as effective for developing an accurate model. The expectation was that these tests would be faster and, therefore, more practical for other herbaria if the results were comparable. When the whole set of annotations was split between the training and validation datasets, the proportions across each component were checked, to ensure the two datasets were not biased. As demonstrated in Table 2, the proportions (the '% of annotations' columns) are similar. As is the average count of annotations per SSDI.
The 'heat maps' for the centre of the institutional (left) and annotation (right) labels are presented in Figure 4.
| Phase 1: Testing trained models
Early in the testing regime, it was found that the 'large' YOLOv5 model type produced better models than the 'medium' model type with minimal time trade-off. It was also found that running on 1280 pixels took more than three times longer than running on 640 (e.g. while specific to the infrastructure used in this study, a 'large' model Additionally, components with good overall predictability in the full model (per mAP0.5-0.95 in Figure 5; for example, scale, institutional label) showed less variability across all training dataset sizes than the poorly predicted components (e.g. number). The 'heat map' of centre points for institutional (left) and annotation (right) labels for the SSDIs in the benchmark dataset is shown in Figure 8 and enables comparison to placement in the MELU SSDIs ( Figure 4).
| Phase 2: Applying the MELU model to unseen SSDIs
Validating the revised MELU-trained object detection model against each of the benchmark datasets produced different results by from each herbarium in the benchmark dataset. Figure 10 and adding 20 better than 30 for Berlin and Kew. Table 6 lists the four model assessment measures for all key models in this analysis. Note that 'swing tag' is excluded for all outputs in Phases 2 and 3. The measures for models including 'swing tag', and only for 'institutional label', are included in Tables A2 and A3 respectively.
| DISCUSS ION
The above results of this study, as will be explored in more detail in this section, demonstrate that an effective object detection model has been built to identify components of SSDIs. While trained on MELU digitised images, it is shown to be reasonably transferrable to other herbaria SSDIs. The predictive accuracy has been further improved by retraining the MELU model with new image-annotations.
| Phase 1: MELU annotations
On average there were 5.6 annotated components per MELU SSDI (Table 1). Almost all SSDIs have 'swatch' and 'scale'. Of SSDIs without an 'institutional label', these had one of the three MELU digitisation labels. Approximately 28% of the annotated MELU SSDIs have one or more taxon annotations and just over 30% have handwriting present on the specimen-this information alone informs prioritisation of future steps to read data from these SSDI components.
As is standard in curation protocols, institutional and annotation labels were consistently placed in the lower right corner of the specimen sheet (Figure 4). This reflects that many of the MELU SSDIs annotated for this research had been remounted prior to digitisation, with consistent instructions for the positioning of components.
| Value of annotation task
The initial image annotation work represents the largest resource
| Phase 2: Applying to new images without training
The concentrated locations of institutional and annotation labels noted in the MELU SSDIs ( Figure 4) were also seen in the SSDIs from the Dillen et al. (2019) study (Figure 8). While the lower left corner of the specimen sheet is also commonly used for both label types, there is more variability in overall placements (as expected, given these are results across different herbaria) particularly for 'annotation label'.
When the revised MELU-trained sheet-component object detection model was applied to the benchmark image-annotations (without retraining the model) the results varied across the nine herbaria and uphold the basic object detection tenet that a model works best with components close to those it was trained on. Referring to Figure 9, the transferability of 'institutional label' and 'annotation label' was satisfying, though it was noted that some 'annotation labels' are little more than free-hand text on unformatted paper and It should also be noted that the SSDIs selected from the benchmark dataset, and annotated for this investigation, were chosen without consideration of how the specimens were ordered in that dataset. While all of the specimens met the requirements of the Dillen et al. (2019) study, specimens from each participating herbarium varied significantly, for example, in the label or stamp types present, the placement of the labels or stamps, as well as in the format (typed or handwritten) and arrangement of the data on the label.
Therefore, a different selection of SSDIs from the benchmark dataset will result in different model outcomes.
That said, it can be asserted that the revised MELU-trained sheetcomponent object detection model could be directly applied to new SSDIs not from MELU to identify and locate sheet components and would predict reasonably well, particularly for the 'institutional label'. As for all models though, targeted retraining could be conducted to improve outcomes (covered in the next section).
| Phase 3: Applying to new images or components with retraining
Adding new image-annotations to the full MELU training dataset resulted, in most cases, in better predictions than using the untrained MELU model alone. The differences between the two validation sets The 'scale' improvements demonstrate the improvement that minor retraining has on predictions ( Figure 11, right) even more clearly.
Adding as few as 10 new image-annotations has raised mAP0.
| Further work
The research team has incorporated the MELU-trained sheet- Further, such machine-driven component identification, particularly when focussed on labels and integrated with text reading, has the potential for application to many kinds of collections that have initiatives focussed on the digitisation of data stored on pro-forma object or specimen labels.
DATA AVA I L A B I L I T Y S TAT E M E N T
With the intent to contribute to the research of other herbaria and supporting research teams, the following assets and outputs from this research are made available on the condition of (a) full cita- | 5,126.8 | 2023-08-01T00:00:00.000 | [
"Environmental Science",
"Biology",
"Computer Science"
] |
Attitudes of second language students towards self-editing their own written texts
Recognizing students’ deliberate e!orts to minimize errors in their written texts is valuable in seeing them as responsible active agents in text creation. "is paper reports on a brief survey of the attitudes towards self-editing of seventy university students using a questionnaire and class discussion. "e context of the study is characterized by its emphasis on evaluating the #nished written product. Findings show that students appreciate the role of self-editing in minimizing errors in their texts and that it helps in eventually producing well-written texts. Conceptualizing writing as discourse and therefore as social practice leads to an understanding of writers as socially-situated actors; repositions the student writer as an active agent in text creation; and is central to student-centred pedagogy. We recommend the recognition of self-editing as a vital element in the writing process and that additional error detection mechanisms namely peers, the lecturer, and the computer, increase student autonomy.
Introduction
One way of recognizing students' deliberate e orts to minimize errors in their academic writing is to understand their attitudes towards self-editing.While we agree that "no learner intentionally writes strings of incoherent text" (Yates and Kenkel 2002: 43), we suspected that an attitudinal problem existed amongst our students, because of the apparent indi erence exhibited in the written texts that they submitted on a writing support course we teach.is rendered their texts very tedious to read.Recurrent errors in our students' work convinced us that students do not self-edit their texts, do not appreciate its value, and deliberately wait for the lecturer to correct their errors for them.In practical terms "self-editing" is the writer's ability to independently or otherwise identify and act on textual inaccuracies and loss of clarity in content, organization and mechanics.Charles (cited in Cresswell, 2000: 235) proposed a "self-monitoring" technique where "students write marginal annotations R W & about problems in their evolving compositions, to which the teacher responds".While we did not dismiss the potential of self-monitoring for improving students' nal dra s, we questioned the second language (L2) writer's ability to act on errors identi ed via self-editing and thought that research was required, so that we could better understand the extent to which student writers take responsibility for textual accuracy, and are able to do so.
A system of assessment that focuses only on the nal written product, as is the case at our university, provides unclear feedback on what students are capable of doing as writers.While students might be expected to be able to improve their written texts by way of acting on the lecturers' feedback, the system requires students to submit, not multiple dra s, but only one nal dra of the essay.In this way, the formative role of teacher feedback (Glover and Brown, 2006) seems non-existent.e expectation of markers is that that single submission is as error-free as possible; and that over the years of their education, students have received su cient feedback on earlier submissions to have learnt what constitutes correctness, and can therefore produce error-free texts.
In this paper, we hypothesize that in a largely product-focused writing context such as ours, students hold speci c attitudes on self-editing that may detract from the quality of their writing, even though they might want to minimize errors that might occur in their work.is generated the research question: what are students' attitudes to self-editing?
Why self-editing is important
We regard self-editing as central to increasing students' facility in meeting their lecturers' expectations.Only the writer, via dra ing and redra ing, reviewing (by self or peers), re-casting, and repeated self-editing, can respond to the entirety of textual detail, ranging from punctuation to word appropriateness to sentence length, cohesiveness, viewpoint, force of argument, pacing, and so on.We also regard self-editing as extremely important in the era of electronic communication because clicking "send" or "print" before attending to possible errors in form, content, and organization can be a source of embarassment for writers or annoyance for their readers.e promoting of self-editing practices therefore has lasting value for today's university student.More importantly, enhancing student self-editing capacity eliminates a culture of over-dependency on teachers, enabling the teacher to assume the role of a facilitator, co-learner or collaborator (Atkinson, 2003) during the writing process.It may also be important in reducing the teachers' workload, so that they can focus on providing feedback that is relevant to students' future work.It is also an important resource for learner-centred pedagogy (Vollmer, 2006) which places the student as an active agent in knowledge creation.Using our experience as teachers of a university writing course, we explore students' attitudes towards self-editing and identify possible reasons for these attitudes before making our recommendations.
eoretical framework
When Krashen (1985) proposed his Monitor hypothesis to explain how L2 users apply their learnt knowledge of L2 rules to monitor (self-edit) their texts, he was of the view that self-editing was sometimes bad practice because of the resultant hesitant speech of L2 users.In contrast, however, we regard self-editing as a vital competency for L2 users in writing, especially, since the academic essay has become an entrenched assessment tool within higher education and must therefore be carefully cra ed to optimise chances of success in assessment exercises.
Increasingly, studies in the eld of L2 writing, especially research on revision and text quality, have come to view the writer as a responsible and active participant in the writing process.Focusing on what the student knows about communication and language, Yates and Kenkel (2002) adopt a learner interlanguage perspective on error correction, which they use to critique error correction procedures in the literature.ey propose that "learners have principles which, if understood by the writing teacher, provide insights which are more useful than a target deviation perspective" (Yates and Kenkel, 2002: 31).Cresswell (2000) studied learner autonomy resulting from training in using the self-monitoring technique whereby students indicate their doubts by annotating their texts so that the lecturer can give feedback on these doubts and the essay itself.Results showed that learners appreciated the degree of independence gained and showed willingness to continue using the technique (Cresswell, 2000: 243).Similarly, Xiang (2004: 245) found self-monitoring particularly bene cial for high-achievers, although this applied only to the organizational aspect of their compositions.Both these ndings show that students are not passive recipients of feedback but can be active participants in the construction and meaning of that feedback (Xiang 2004: 244).Charles (1990: 292) also proposed the self-monitoring technique and claimed that the technique "encourages students to look critically and analytically at their writing and to place themselves in the position of the readers".e above studies questioned the e ectiveness of established ways of giving feedback i.e. from the teacher to the student.Research in this area of L2 writing (Storch 2005, Brender 2002, Sugita 2006, Truscott 1999, Ferris 1996, Carson and Nelson 1996) is on whether and how teachers should correct errors in students' writing, whether teacher feedback improves students' writing prociency, or whether peer-editing helps.Currently, debate rages around the question whether or not teacher's error feedback makes a di erence.Over the past ten years the debate has featured Truscott against Ferris.e former argues that error feedback in L2 writing is counter-productive as it detrimentally a ects learners' writing development and that it has not improved students' writing (Truscott 1999); while the latter argues that error feedback can improve language accuracy over a period of time (Ferris 1996).Williams (2003) suggested using individual conferencing as one way to explain the teacher's feedback to each student but this strategy is less feasible in large-class settings, such as ours.e literature reviewed by Glover and Brown (2006) claimed that in large classes the frequency and quantity of teacher feedback is reduced; consequently, the formative value of such feedback was lacking; and students argued that because written assignments were topic-focused, feedback lacked relevance to future assignments.Under these di culties surrounding the e ectiveness of feedback, additional techniques such as self-editing are needed.
Students' self-editing attitudes have, however, received relatively little attention in L2 research.Polio, Fleck, and Leder (1998) studied ESL students' editing for sentence level errors and Francis (2002) investigated the editing and correction strategies of much younger bilingual children.In both studies learners showed remarkable attentiveness with regard to their texts.Additionally, there are several writing manuals available that mention self-editing and the Internet o ers access to numerous checklists for self-editing purposes.However, we still do not know clearly why errors that would seem to be author-correctable continue to end up in students' texts.e purpose of this paper therefore is to ascertain students' attitudes that may in uence their ability to self-edit their written texts.Once identi ed, these attitudes may provide insights for instructional purposes for teachers of academic writing.
e subjects
Two out of the ve classes taking an optional post-Year One course called ' Advanced Writing Skills' responded to a questionnaire and took part in the class discussion therea er.(Both these activities were intended as consciousness-raisers for a self-editing activity that followed but is not reported here).
e course is housed in a 'study skills' unit for student academic support programmes at our university, and is underpinned by behavioral psychological approaches characterized by genre writing drills.
Altogether there were seventy students (23 males and 47 females) from seven di erent faculties as follows: the Faculty of Social Sciences (36), Humanities (12), Education (8), Science (7), Health Sciences (4), Business (2), and Engineering (1).ese gures show the uneven popularity of the Advanced Writing Skills course across Faculties and disciplines, which can be attributed to the relative importance that the Faculties of Social Sciences, Humanities, and Education attach to essay writing, as reported by the students.e sample included forty-two second-year students, twenty-three third-years, and ve fourth-years.Respondents' ages varied widely: y-two respondents were below twenty-ve indicating that they le senior secondary school less than ve years ago, which indicates that their familiarity with academic writing was more recent than the few who were over thirty.ey were also a very complex multilingual group whose home languages included: Setswana (37 respondents), Kalanga (3), Sebirwa (3), Hindi (1), Ndebele (1), Setswapong (1) and Herero (1).Some spoke a combination of two or more home languages, either English and Setswana ( 14); or Kalanga and Setswana (6); or English and Sekgalagadi (2); or Setswana, Afrikaans and Herero (1).Despite this home language diversity, the sample shared similar school experiences of using English as the medium of instruction.
Methodology
Using, rst, a questionnaire and then a class discussion, the study explored university students' self-editing attitudes in order to see whether students thought that self-editing improves textual quality.e three-item open-ended questionnaire, initially piloted with a di erent group of students taking the Advanced Writing Skills course, introduced students to the idea that writers must remain consciously in control of the writing and editing process (Cresswell, 2000:237) so as to minimize the errors that slip into the nal dra .Again, taking hints from Cresswell (2000), the class discussion was based on students' memories of writing experiences.
Findings
e rst item on the questionnaire was: "Do you think writers are able to selfedit their work themselves?"It sought respondents' perception of writers as autonomous individuals and as members of a writing community capable of self-editing and acting on the textual errors they spotted.All their responses communicated the attitude that self-editing is di cult, ine ective and complex.In the class discussion, they also expressed the view that they had not attained the status of 'writer' , so as to be able to overcome these di culties.(Indeed many self-editing online manuals convey a similar view.)ree such responses expressed the confounding nature of self-editing as follows: "yes, writers are able to self-edit their work but it is quite di cult because we tend to believe that we did everything correctly, thus defending our work"; and "because it is your work, you will understand it your own way, and some mistakes you will not identify"; and "they [writers] can [self-edit] and maybe not, because sometimes when you think you are reading what is written, in fact you are saying what you thought you were writing".
We further sought to know respondents' thoughts regarding the importance of self-editing by asking the question: "What in your opinion is the importance of self-editing to the text writer?"Respondents agreed that self-editing was an important part of writing because: "you [the writer] are the one who knows what you want to say and it will be di cult for another person who does not know what you want to say to do that for you"; "work with lots of mistakes turns o readers"; "I have realized how much I make mistakes when writing a er I had my exercise marked"; "bumping into someone's editing mistakes is irritating".One respondent lamented the omission of self-editing activities in early education: "it is a bit di cult for most to grasp this concept [self-editing] because from elementary school we were taught in such a way that the teacher has to be the identi er of mistakes instead of us communicating through our writing".Another suggested that "time should be made for self-editing in exams and tests like probably a er the test duration has lapsed"; while yet another suggested "maybe we can write our academic papers via a computer as it easily picks up errors".
We sought to see if students ever thought of alternative assistance with their dra s in the form of peer editing, by asking the question: "Do you ever ask a friend to edit your work before you submit it?".Almost half (34 out of 70) the respondents said they did.While they saw the value in peer editing, many saw it as merely a way to: "see if I have any mistakes"; " nd each and every mistake'; "see mistakes I overlooked"; "be corrected by somebody"; "identify errors you the writer cannot see"; and "correct construction of words and spelling".One respondent wanted to "ensure that the work is readable".
Evidently, peer-editors mainly focus on the mechanics, and not on content and organization of ideas.However, this help was not sought by many because, as several respondents put it, "there was no time to show your work to a friend".e thirty-six respondents who said "no" to peer-editing gave reasons that indicated that they doubted if their peers were any better skilled than they were themselves.Peer-editing was also viewed with suspicion re ecting the competitiveness students attach to texts submitted for assessment and for that reason they were worried that peer-editing might result in "plagiarizing my points"; or "copying from me to improve [their] work and get higher marks"; or even "friends making fun of your mistakes"; or worse still "missing submission deadlines".
As expected, students' attitudes to self-editing are divided.On their ability to self-edit, they are unanimous that self-editing is complex, but that despite its complexity, it is important for the writer.However, on the value of asking a friend to assist in the editing, some students say that friends are helpful while others view that help with suspicion.e hypothesis of the study is therefore con rmed: that in a largely product-focused writing context such as ours, students' attitudes towards self-editing are not helping the quality of their writing, even though these students would want to take full responsibility to minimize the errors that occur in their work.
Discussion of ndings
Increasingly, studies in the eld of L2 writing, especially research on revision, have come to view the writer as a responsible and active participant in the writing process (Charles 1990, Cresswell 2000, Xiang 2004).However, as writers, students do not usually position themselves as co-researchers or as creators of new knowledge (McIntosh, 2001), a situation con rmed by their responses to the questionnaire.Responses also implied a strong need to minimize the sense of competition in the learning process. is is because developing academic writing skills in L2 can be theorized as a process of apprenticeship, where learning is viewed as a process of social participation rather than simply as acquisition of knowledge.e teacher's role, too, di ers from that of a disseminator of knowledge.Within such a learning approach to knowledge acquisition, teacher and/or peer feedback may be viewed as part of the process of apprenticing students into legitimate participation.When student writers position themselves as communicators in a discourse community (consisting of their peers, their lecturer, and themselves), they become their own rst readers of the texts they produce.However, evidence from the questionnaire and class discussion showed that mainly because writing tasks are competitively understood by students, audience is perceived as either assessors (their lecturer) or plagiarizers (their peers).
In the class discussion, students argued that feedback from their faculty lecturers was emphatic around correct use of the conventions of linguistic and textual features.is reinforces students' view of self-editing as a complex process shrouded in uncertainties.Students also rightly perceive a model of correctness to exist somewhere; a view advanced by the genre approach to academic literacy.Unfortunately, that ideal model seemed to remain obscure to them, either because their faculty lecturers do not model the genres for them, or the models are not made accessible to students; and also because such ideal models (or "genres") fail to produce autonomous writers because of an over-emphasis on the technical features of genres, rather than on their expressive resources.Additionally, the writing tasks are for purpose of compiling Continuous Assessment scores, and less because students need to practice and develop their discipline-based writing facility.us, the function of academic writing in the di erent faculties is seen to be mainly that of an evaluative tool that determines pass or failure; a view perpetuated by the absence of a real audience beyond the lecturer who chose the topic(s) and by the requirement to submit, not multiple dra s, but only one nal product of the essay (Wright, 2006:90).As a result, students' attitudes to their writing tended to imply a process over which they had little control.
With regard to involving peers in editing one's work, one respondent indicated a reluctance to be critiqued by a friend because "friends may not want to disappoint you". is comment may be attributed to the Tswana cultural philosophy of botho which means compassion and caring.In the students' view, friends are expected to show their "goodness" in assessing what a friend has written; thus one respondent argued that, out of modesty, friends may not do a thorough job of editing because "they don't want to disappoint you", in case they are seen as bad or unsupportive friends.Carson and Nelson (1996) reported similar results among Chinese learners which they attributed to the Asian collectivist culture, saying that more successful peer interactions come from students who share a common language and cultural expectations than from students in heterogeneous cultural groupings.e same study identi ed additional cultural factors that underlie a reluctance to involve peers.One of these factors is "mutual status inequality" which was exempli ed in the current study by one respondent who felt much belittled by peer-editing saying, "Someone who is not my lecturer reading my work!!!I feel as though they judge me".e other factor is "trustworthiness of peers' language pro ciency".For instance one respondent dismissed her peers' linguistic pro ciency as "most of their [peers] English is not very good… almost useless to have them do it [peer-edit]".ese comments indicate a reluctance to accept guidance that comes from elsewhere other than from the lecturer, a problem attributable to classroom culture and power, where the teacher is perceived as the only source of knowledge.Such teacher-fronted perceptions of learning to write are not very helpful in large classes.For instance, at our university, semesterization reduced contact time per week for the Advanced Writing course from three to two hours, and due to heavy marking loads, student's work is returned long a er submission but without any direct contact with the student.O en students do not even collect the marked scripts.ose who do are only curious to see their score but make very little use of the feedback.
Earlier studies on revision cited in Cresswell (2000:236) found that students tended to edit for grammar at the expense of other textual elements such as logic, relevance, and appropriateness of content.Similar results are evident in the respondents' comments, where editing is only associated with their overemphasis on local, surface-level components and ignorance of global structures of texts.In the class discussion there were suggestions that since the computer can edit their work for them, there was little need to worry about errors.However, this is only partly true.For instance, with regard to essay content, the computer cannot supply the description, argumentation, thesis statement, focus, or di erentiate factual and experiential information.e writer must also deal with the logical organization of ideas and arguments, the e ectiveness of the introduction and conclusion, and the sequencing of ideas in order of importance.us the computer can only be a supplement to detecting textual inaccuracies.More importantly, when the peer's primary concern is "to see if I have any mistakes" or "to nd each and every mistake", the attitude conveyed is that the original purpose of academic writing is not a genuine concern to understand something.Instead it is an opportunity for the reader to judge the degree of adherence or divergence to the writing conventions; and for the writer to display awareness of such textual features and; and that the form rather than the message is at the centre of writing. is view was further con rmed in the class discussion: among the main areas of writing mentioned by students during the discussion as requiring improvement were those relating to mechanics (referencing, grammar, punctuation, and spelling) and organization (sequencing of ideas, cohesion and coherence).
During the class discussion the students also revealed that, based on the feedback they were getting on their assignments, their faculty lecturers perceived the Advanced Writing Skills course as essentially remedial and that taking the course would enable them to write better.Such a de cit view of the students who take the course has implications which might be apparent in their written scripts in the Advanced Writing Skills course.Vollmer (2002) argues that a de cit perspective "sees them [L2 writers] as developmentally weak and their texts as riddled with errors".For their part the students said they found writing in English easy although they admitted that they needed help.Linking these statements to their responses to the questionnaire items, it is possible to suggest that greater autonomy via self-editing skills could enhance students' textual control better.Because these respondents did not position themselves as purposeful communicators, they o en failed in their attempt to communicate meaning to their readers, probably as a result of the doubts they hold about their capabilities to do so.Such doubts are the result of recurrent disappointments from earlier assessed work.According to Garcia-Sanchez and de Caso-Fuertes (2005:273) a long history of failure in uences task perseverance, the level of e ort, and the degree of success achieved among other things.
On the basis of the attitudes to self-editing ascertained in this brief study, a strong case can be made regarding the teaching of writing conventions to L2 writers.From an L1 perspective, McIntosh (2001) regarded writing conventions as domesticating and limiting because they discourage subjectivity.However, the important question for L2 writing instruction is "what bene t do students get, as writers, from a genre approach to literacy"?e textual inaccuracies in mechanics, organization, and content show that the learning of academic writing conventions is still needed by L2 writers if their sense of being in control is to be realized.Omitting the teaching of writing conventions marginalizes students within academia and relegates them to the back row of academic literacy.We are convinced that self-editing, initiated by way of direct instruction, requires a good or growing command of the conventions of writing.Without conscious engagement with and exposure to the conventions, students are at risk of failing and of being ever-subject to their lecturers' corrections, rather than developing their own facility.
Recommendations
Ideally, a genre approach to writing suggests that students acquaint themselves with actual examples of a variety of textual types.In this way they get to recognize the di erent linguistic and textual features.When this is followed by learner training in self-editing, students' texts have a basis of correctness to follow.Learners also need to be alerted to the expectations of the target audience.Additional error-detection mechanisms such as peers, the lecturer, and the computer promote a sense of discourse community within which meaning contained in the written text is constructed.Because the computer provides impersonal feedback on the mechanics (spelling, punctuation, and grammar) of writing, it protects the L2 writer against embarrassment and feelings of humiliation over the errors committed.It also saves the writers some e ort, enabling them to focus beyond grammar and spelling on more global textual concerns such as logic, cohesion, word appropriateness, and overall textual organization.
Future research
Analyses in this brief survey of student attitudes to self-editing did not factor in the full range of variables involved.For example, it would be interesting for writing instruction to determine how age, gender, home language, and course and level of university study of the respondent impact on the attitudes to self-editing.ere is also a need to investigate the impact on self-editing of contextual realities, such as large class sizes which reinforce classroom organizational practices that result in product-focused (rather than process-focused) writing and assessment practices.Due to the large number of learners involved, heavy marking loads are a constant burden for the lecturer.Hence, the intervening teacher feedback on the dra s and revisions are o en impossible to give.Under these conditions, students' actual self-editing practices need to be documented and developed.e documenting of actual practices serves as indicators of how students are exhibiting control of the text they produce.
Conclusion
Overall the study has shown that although L2 writers in the research sample see self-editing as complex, they value it in reducing textual inaccuracies.Although a larger study sample would have provided more generalizable results, the ndings of this brief attitudinal study contribute to the debate over how e ectiveness within L2 writing can be developed: that, despite students' attitude that self-editing is complex, self-editing is a vital skill for improving textual quality; and writing instruction that nurtures its development is bene cial for purposes of developing autonomous L2 writers. | 6,088.6 | 2010-05-22T00:00:00.000 | [
"Education",
"Linguistics"
] |
Enhancing photoluminescence of carbon quantum dots doped PVA films with randomly dispersed silica microspheres
As a kind of excellent photoluminescent material, carbon quantum dots have been extensively studied in many fields, including biomedical applications and optoelectronic devices. They have been dispersed in polymer matrices to form luminescent films which can be used in LEDs, displays, sensors, etc. Owing to the total internal reflection at the flat polymer/air interfaces, a significant portion of the emitted light are trapped and dissipated. In this paper, we fabricate free standing flexible PVA films with photoluminescent carbon quantum dots embedded in them. We disperse silica microspheres at the film surfaces to couple out the total internal reflection. The effects of sphere densities and diameters on the enhancement of photoluminescence are experimentally investigated with a homemade microscope. The enhancement of fluorescence intensity is as high as 1.83 when the film is fully covered by spheres of 0.86 \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$${\boldsymbol{\mu }}$$\end{document}µm diameter. It is worth noting that the light extraction originates from rather the scattering of individual spheres than the diffraction of ordered arrays. The mechanism of scattering is confirmed by numerical simulations. The simulated results show that the evanescent wave at the flat PVA/air interface can be effectively scattered out of the film.
by a single ultraviolet source due to aggregations and Förster resonance energy transfer between the CDs. Excitation-independent near-infrared emission has also been demonstrated owing to surface chemical states and homogeneous microstructures of the CDs 25 . The PL properties of CDs can be further improved with physical mechanisms. For instance, the green and yellow emission of CDs was enhanced by localized surface plasmon resonance of Ag nanoparticles 26 .
In order to explore their applications in optoelectronic devices, it is favorable to make CDs in solid forms 27 . A simple way is to disperse CDs in polymer matrices which can protect them from concentration-induced PL self-quenching. It was demonstrated that CDs could be well dispersed into epoxy matrices and the CDs/epoxy composites could be applied to encapsulate LED chips 23 . Flexible full-color emissive poly vinyl alcohol (PVA) films have been achieved through mixing two or three CDs in appropriate ratios 28 . Moreover, incorporating CDs in polymer matrices can also improve their luminescent properties. Comparing to the aqueous solutions, CDs dispersed in PVA films exhibited enhanced fluorescence emissions 29 . The enhancement was attributed to enhanced surface passivation for the carbon dots in a more confined environment in the PVA matrix. CDs are also compatible with other luminescent materials in polymer matrices. When CDs and EuCl 3 were dispersed together in a PVA film, the PL spectrum could be tuned by adjusting the mixing proportion of CDs and Eu + 3 30 . Based on the polymer films doped with CDs, displays and LEDs with excellent performance can be expected. Besides, the application of CDs doped luminescent polymer films can be further extended to the research of sensors. Stretching of CDs doped polymer films induced both blue shift in the fluorescence peak positions and dramatic increase in fluorescence intensities 31 . Such phenomena can facilitate optical determination of tensile properties.
However, there is a drawback of embedding CDs in transparent polymer films. Because the refractive indices of polymers are higher than air, a considerable portion of luminescence from CDs are reflected back by the total internal reflection (TIR) at the polymer/air interface [ Fig. 1(a)]. Such a phenomenon can be analyzed with a simple ray-optics model 32,33 . For a polymer with the refractive index of 1.5, the critical angle of TIR is . 41 8 . Assuming a luminescent center inside the polymer film emits light identically to all the directions, about 74.5% of the light emission toward the PVA/air interface is reflected back by TIR. The reflected light often dissipates through waveguide modes when the bottom of the film is also reflective.
Trapping of light in layers of high refractive indices is a long existed problem in the field of LED 3,33,34 . Many efforts have been devoted to suppression of TIR and extraction of trapped light. The key issue is to eliminate sudden changes of refractive indices at flat interfaces. The efficiency of an organic light-emitting diode (OLED) can be increased by simply roughening the substrate surface by sand blasting 35 . Micro lens arrays were attached to the glass substrate of an OLED, and led to an extraction increase of 60% 36 . Organic particles can serve as scattering media and extract waveguide light 37 . Light extraction can be enhanced by combining the ideas of refractive index matching and photon recycling in films with quantum dots dispersed in them 38 . Irregular subwavelength nanopillars were made on flexible polycarbonate substrates with the nanoimprint method 39 . The efficiency of the OLED was improved by 69% with such an antireflective structure.
Similar to the study of light extraction of LED, structures on surfaces of CDs doped polymer films have also been elaborately designed to enhance the PL efficiency. Using a surface micro-textured silicon wafer as a template, free standing CDs/PVA films with large-area ordered inverted-pyramid patterns were fabricated 40 . Similar to the anti-reflective (AR) surfaces used in solar cells, such a structure reduced the reflection from the PVA/air interface and led to a quantum yield enhancement. Periodic micro-structures were also patterned onto the film surface with a commercial digital versatile disc serving as a mold 41 . The submicron-patterns with a periodicity of 700 nm provided an emission enhancement factor of 1.96. The enhancement was interpreted as the compensation for momentum mismatch between the waveguide-mode light and far-field radiation.
In this paper, we demonstrate the light extraction of CDs/PVA films with silica microspheres [ Fig. 1(b)]. The spheres are dispersed on the film surfaces and scatter out the TIR. The scattering is induced by rather individual spheres than ordered arrays, therefore, there is no requirement of periodicity. Because random structures can extract light propagating along any direction with a wide spectral range 42 , the strategy presented here is suitable for broad-band emitting devices.
experiments
The fabrication of samples started with preparing CDs and PVA solutions. The CDs were synthesized with a hydrothermal method 43,44 . Urea (0.12 g) and p-phenylenediamine (0.12 g) were dissolved in ultrapure water (30 mL). The solution was heated in a sealed autoclave at 160 C for 10 hours. After cooling down to room temperature, the obtained solution was filtered with a microporous membrane and dialyzed against water. As for the PVA solutions, 1.5 g of purchased PVA powder [Mw 85,000-124,000, Sigma-Aldrich] was added to ultrapure water (30 mL). The mixture was stirred for about 5 hours until the powder was completely dissolved. The following procedure of making a film was shown in Fig. 2. The solutions of CDs (concentration: 1.5 mg/mL) and PVA (mass concentration: 5%) were mixed at the ratio of 1:100 by volume, then stirred for 5 minutes until a uniform liquid was obtained. The resulting CDs/PVA solution was transferred to a petri dish and left for 15 hours, so that the air bubbles were completely removed. After being dried for 3 days at the temperature of 60 C, the solution became a uniform solid film. The thickness of the CDs/PVA film was determined by the volume transferred into the petri dish. Finally, the film was cooled down to room temperature then peeled off to be free standing. Purchased silica microspheres [BaseLine Chromtech Research Centre] with different diameters (0.3 µm, 0.86 µm, 1 µm, and 1.7 µm) were dispersed on the surfaces of CDs/PVA films by spin coating. The specific process is as follows: firstly, suspensions of microspheres were diluted to be 12.5 mg/mL; secondly, the ultrasonic dispersion method was applied for 10 minutes to make sure the microspheres were monodisperse; finally, the suspensions were spin coated [[Laurell Technologies, WS-650-23]] on top of the films at 400 rpm for 1 minute. In the spin coating step, the PVA films were adhered to glass slides with the help of Polydimethylsiloxane (PDMS) to keep the surfaces flat.
The basic properties of the CDs/PVA films were characterized by commercial instruments. The morphologies of the dispersed microspheres were examined by a scanning electron microscope (SEM) [FEI, Quanta FEG 250]. The absorption was measured with a UV-Vis spectrometer [Macylab instruments, UV-1900], while the PL and excitation spectra were obtained by a spectrofluorophotometer [Shimadzu, RF-6000].
In order to investigate the PL enhancements of spheres, we built a home made microscope equipped with a high sensitivity fluorescence spectrometer [Ocean Optics, QE Pro]. The configuration of the optical setup was shown in Fig. 3. The illuminator of the sample (CDs/PVA films) can be switched between a halogen lamp and a continuous wave laser. As for the detector, a switchable mirror (M3) was used to choose between a CCD camera and the spectrometer. First, the sample was finely tuned to be at the focus of the objective lens. Then a lens (L1, focal length: 200 mm) projected the image of the sample to its focal plane, where the magnified image was filtered by an adjustable aperture. Finally, the light went through the aperture was refracted by another lens (L2, focal length: 100 mm) and captured by the CCD camera or the collimator followed by the spectrometer. When the sample was illuminated by the lamp and the mirror M3 was in the optical path, the CCD camera detected the image of the film surface. When the sample is illuminated by the laser and the mirror M3 was out of the optical path, the PL spectrum was measured by the spectrometer. The direction of the laser beam (wavelength: 532 nm; power: 200 mW; diameter: 1 mm at the position of the sample) was tuned with two mirrors M1 and M2. The CDs was excited by the green laser and emitted yellow fluorescence. A Long-Wavelength-Pass filter was used to block the scattered laser. PL from the part of the film selected by the aperture was collected by the collimator then analyzed by the spectrometer.
numerical Simulations
Since the diameters of the microspheres were in the µm range, we resorted to the electromagnetic wave theory to analyze the scattering mechanism.
Results and Discussions
Transparent and flexible films were fabricated following the aforementioned process [ Fig. 2]. All the films have smooth flat surfaces before spin coating microspheres. Two typical CDs/PVA samples without and with microspheres (diameter of 0.86 µm) are shown in Fig. 5(a,b). Owing to the scattering, the transmittance of light decreases for the film with microspheres [ Fig. 5 For the investigation of scattering induced by dispersed microspheres, the parameters of the spin coating process were controlled to make monolayers of the microspheres. Due to the fact that the dispersions of microspheres are not perfectly uniform, we can find different dispersions at different parts of a film. Figure 5(c-e) display three representative situations of the obtained microspheres. In Fig. 5(c), the observed surface is completely covered by microspheres. The spheres in most parts are in the close packed form with occasional vacancies among them. In Fig. 5(d), a large fraction of the area is covered by microspheres. All the spheres are in a mono layer, and form aggregations. Each aggregation comprises of tens of spheres, and periodicity can hardly be found. In Fig. 5(e), only a small fraction of the area is covered by microspheres. Several spheres aggregate in small groups along random directions. We can find these cases in all the films with microspheres of different diameters. That enables us to select different areas to investigate the effects of different microsphere dispersions.
The basic optical properties of the prepared CDs/PVA films are shown in Fig. 6. Because 532 nm is one of the most common used laser wavelengths, we pay special attention to the absorption and excitation at this wavelength. The CDs in the PVA matrix exhibit broad band absorption over the visible wavelength range [ Fig. 6(a)]. The absorption peak lies at about 489 nm, while the absorption at the wavelength of 532 nm is still high. At the excitation wavelength of 532 nm, the films emit yellow fluorescence shown in Fig. 6(b). The emission has a broad spectral profile spanning over 100 nm. Monitoring the emission intensity at 585 nm, the measured excitation spectrum is shown in Fig. 6(c). The PL can be induced by a wide range of wavelengths, especially, the excitation efficiency of 532 nm is near the peak value.
Although the resolution of an optical microscope is not high enough to see the details of a microsphere, the dispersions of spheres can be seen by our home made setup [ Fig. 3]. Using a × 40 objective (NA = 0.65), a circular field with 0.25 mm diameter on the film surface was imaged. At the focal plane of L1, the diameter of the image was about 10 mm. In order to precisely observe a small area, the diameter of the adjustable aperture was tuned to be 2 mm. Therefore, we can select a small circular field with 50 µm diameter. By transversely moving the sample, we can select different parts of the film surface and find different sphere densities [ Fig. 7]. Before measuring the PL spectrum, we used an open source software 46 [ImageJ] to measure the area of covered/bare parts (see Supplementary Fig. S1 online), then carefully tuned the sample position until we got expected sphere densities. When the selected area is completely covered by the spheres [Fig. 7(e)], the sphere density is defined as 100%. We use different sphere densities (20%, 40%, 60%, 80%, 100%) to quantitatively investigate the PL enhancements induced by microspheres. During measurements of each film, the spectrum of a selected bare area without microspheres [ Fig. 7(a)] was used as a reference. In this way, differences between different CDs/PVA films were avoided. www.nature.com/scientificreports www.nature.com/scientificreports/ In the measurements of PL spectra, the laser beam entered the CDs/PVA films from the back side (the surface without microspheres) and was not affected by microspheres, so that the CDs were always excited identically. The incident angle of the laser was carefully adjusted to be about 60 to avoid multi reflections inside the films (in the measured area). The measured PL spectra of CDs/PVA films with different sphere densities and diameters are shown in Fig. 8. Figure 8(a-d) are the spectra corresponding to sphere diameters of 0.3 µm, 0.86 µm, 1 µm, and 1.7 µm, respectively. In each sub figure, the fluorescence intensities are normalized to the maximum of the bare www.nature.com/scientificreports www.nature.com/scientificreports/ surface [black lines]. The purple, blue, green, olive, and red lines are normalized PL spectra corresponding to sphere densities of 20%, 40%, 60%, 80%, and 100%, respectively.
For the microspheres of each diameter, PL spectra are almost the same for different sphere densities. That means the microspheres scatter light almost identically over a broad range of wavelengths. This phenomenon is in accordance with the fact that there is no large ordered arrays on the surfaces [Figs. 5 and 7]. Ordered arrays of dielectric microspheres tend to form photonic crystals whose diffractions are sensitive to wavelengths and directions 47 . By contrast, randomly dispersed microspheres are suitable for the applications requiring broad band performance.
Light extraction induced by the microspheres are prominent. Comparing to the bare film surfaces (black lines), the intensities increase dramatically with increasing sphere densities. The maximum of PL enhancements appears at the diameter of 0.86 µm for 100% sphere density. The normalized intensity at the wavelength of 585 nm is as high as 1.83.
The light extraction is attributed to the scattering of individual microspheres. This is confirmed by the dependence of PL enhancements on sphere densities. For each sphere diameter, we extract the maximums of the normalized intensities for different sphere densities, and plot them in Fig. 8(e-h). The relationships between these values and sphere densities can be well linearly fitted (black lines) with the slopes being 0.33, 0.87, 0.64, and 0.41, respectively. Considering microspheres aggregate in different ways at different sphere densities, there is little influence of aggregation on the light extraction.
When the sphere density is 100%, which means the surface is completely covered with spheres, the maximum of normalized PL intensities are 1. Fig. 9(a)], the electric field intensity (denoted with different color corresponding to E log 2 ) decays smoothly with the propagation distance. There is an obvious contrast of the field intensity below and above the interface because of the change of refractive index. When 11 microspheres are placed at the interface, the distribution of field intensity is modified dramatically by the spheres [ Fig. 9(b)]. In order to show the contribution of individual spheres, the distance between adjacent spheres was set to be 1 µm. The diameter of the spheres was set to be 0.86 µm. It is clear that each sphere induces significant scattering of the electromagnetic wave. The main mechanism of PL enhancement is the extraction of reflected light at large incident angles. When looking into the areas of µ < − x 5 m and µ > x 5 m below the interface, we can see that the intensity reflected downward [ Fig. 9(a)] decrease with the addition of spheres [ Fig. 9(b)].
The effects of silica microspheres on TIR are further shown in Fig. 10. The wavelength of the plane wave source was set to be 585 nm [ Fig. 4(b)]. At the incident angle of 45 , TIR happens at the flat PVA/air interface. In order www.nature.com/scientificreports www.nature.com/scientificreports/ to investigate the effects of individual spheres on the electric field distribution, four spheres with diameters of 0.3 µm, 0.86 µm, 1 µm, and 1.7 µm were placed at the PVA/air interface. An aggregation of 5 spheres with the diameter of 0.86 µm was also placed at the interface to show the collective effects of sphere. The positions of these spheres were set to be x 0 m, and µ = x 10 m. In such a configuration, the spheres do not influence each other because the distances between them are far larger than the wavelength. Figure 10(a-e) are the calculated electric field distribution near these spheres.
When TIR happens, there are evanescent waves 48 penetrating into air with a limited spatial distance (in the order of wavelengths). The scattering is induced by the following processes: the evanescent wave is coupled into silica microspheres, then re-emitted out of the spheres. The profiles of evanescent wave can be seen from the red parts near the interface in Fig. 10. The sphere with the diameter of 0.3 µm is so small, that it induces small modifications of the evanescent wave [ Fig. 10(a)]. When the diameters become larger, the spheres lead to significant modifications of the evanescent wave [ Fig. 10(b-d)]. As the plane wave propagates from left to right, the evanescent wave is effectively coupled into the spheres to the right of the contact points. If several microspheres aggregate together, the profile of the evanescent wave is different from the case of single spheres [ Fig. 10(e)]. A great part of the evanescent wave near the aggregation is still coupled into the spheres and re-emitted into air.
The electric field distribution in a wide scope is shown in Fig. 10(f). In the places far from the microspheres, there is a uniform evanescent wave near the interface. The evanescent wave near the microspheres is significantly scattered. Subsequently, we can see the pattern of scattered intensities in the region of air ( µ > y 0 m). This is the main origin of light extraction. The effects of microspheres can also be seen in the region of PVA ( µ < y 0 m). Because the plane wave source is placed at µ = − y 2 m, the electric field in the region of µ µ − < < y 2 m 0 m is determined by the superposition of incident and reflected waves, while the electric field in the region of µ < − y 2 m is only induced by the reflected wave. In these two regions, we can see clearly intensity reductions originating from the microspheres, because the reflection wave is scattered.
The sphere diameter is an important parameter of scattering 49 . Scatterings increase with increasing diameters, therefore, PL enhancements induced by microspheres of 0.86 µm diameter are higher than 0.3 µm diameter. However, the evanescent wave decays rapidly as the distance to the interface increases. It can only be coupled into the spheres when silica is near the interface. If a sphere is too big, most part of the sphere becomes too far from the interface, and less evanescent wave is coupled into it. This is the reason for decreased PL enhancements of bigger spheres (1 µm and 1.7 µm diameters).
From the experimental and simulated results, we can see that the most important parameter of PL enhancement is the density of microspheres. In order to get the best performance of PL enhancement, the polymer surface should be covered with microspheres as much as possible. Sphere diameter is another important factor. PL enhancement can be optimized by choosing proper sphere diameters.
Silica microspheres have been used as AR layers on top of transparent substrates 47 , but the mechanism was different from the light extraction investigated in this paper. When the surface of a substrate was covered by hexagonally closely packed microspheres, the spheres layer was viewed as a thin film comprising of silica spheres and air. The effective refractive index (1.30) was calculated from the dielectric constant and filling factor of silica. The spheres layer reduced the refractive index contrast between the substrate and air, and led to the suppression of Fresnel reflection. However, the AR effect worked only at small incident angles, TIR could not be affected by the AR layers. Besides, the desired diameter of the spheres, which defined the thickness of the AR layer, was a quarter of the wavelength. In this case, the reflection at the substrate/spheres and spheres/air interfaces interfere destructively at normal incidence. Incorporating a thin film with the refractive index being 1.30, we calculated the reflection of a PVA/film/air structure with a transfer matrix method 50 (see Supplementary Fig. S2 online). At the wavelength of 585 nm and incident angles smaller than the critical angle (about 42 ) of TIR, the thin film showed the best AR effect at the thickness of 113 nm. When the incident angles were larger than the critical angle, the reflection was 100% at all the film thicknesses (113 nm, 300 nm, 860 nm, 1000 nm, and 1700 nm).
Hexagonally closely packed microspheres were used to extract trapped light in OLEDs 51 . The microsphere arrays acted as a two dimensional diffraction lattice, which effectively scattered out the wave guided modes. It was demonstrated that the scattering was highly dependent on both the wavelength and diffraction angle. The emission spectra were significantly modified by the periodic structures. Rainbow patterns at the emission www.nature.com/scientificreports www.nature.com/scientificreports/ zones were also observed. The microspheres in this paper are randomly dispersed, and there is no long-range periodic structure. As a result, the effects of wavelength and direction on PL enhancement are negligible. Since the PL enhancements do not depend on periodicity, the microspheres can be useful in wearable and stretchable devices 52,53 . Comparing to the ordered structures 36,47 , randomly dispersed spheres are much easier to fabricate. Neither complex processes (such as etching) nor rigorous conditions (such as vacuum) are needed. In order to fabricate polymer films with structured surfaces, the fabrication processes are often specially designed from the beginning 41 , while the microspheres can be dispersed on fabricated film surfaces. Therefore the strategy demonstrated here can be included in established technologies. Additionally, functions such as self cleaning 54 can also be explored based on dispersed silica microspheres. The dispersion of dielectric microspheres can also be used in other luminescent films such as inorganic semiconductor doped polymers 55 , dye doped polymers 56 , and OLEDs.
conclusion
We have dispersed CDs in the matrices of free standing PVA films, and fabricated randomly dispersed silica microspheres on the surfaces. The scatterings of microspheres lead to significant PL enhancements (maximum is 1.83). With an aperture superimposed on the intermediate images of film surfaces, the fluorescence spectra have been investigated in a small selected area (25 µm diameter). The experimental results show that the PL enhancements depend linearly on the sphere densities. The PL enhancements induced by the spheres of 0.86 µm diameter is higher than smaller (0.3 µm) and bigger (1 µm and 1.7 µm) spheres. The enhancements are attributed to the extraction of TIR by scatterings of individual silica microspheres. The effects of microspheres on the near-field distribution of electric field are analyzed with the FDTD method. The strategy demonstrated here is easy to implement because no ordered pattern is required. | 5,798.8 | 2020-03-31T00:00:00.000 | [
"Physics"
] |
The combinatorial analysis of n-gram dictionaries, coverage and information entropy based on the web corpus of English
. We research n-gram dictionaries and estimate its coverage and entropy based on the web corpus of English. We consider a method for estimating the coverage of empirically generated dictionaries and an approach to address the disadvantage of low coverage. Based on the ideas of Kolmogorov’s combinatorial approach, we estimate the n-gram entropy of the English language and use mathematical extrapolation to approximate the marginal entropy. In addition, we approximate the number of all possible legal n-grams in the English language for large order of n-grams.
Introduction
Entropy is the basis of the information-theoretic approach to information security. It is a degree of uncertainty. The data with maximum entropy is completely random, and no patterns can be established. For low-entropy data, we are able to predict the following generated values. The level of chaos in the data can be calculated using the entropy values of the system. The higher the entropy, the greater the uncertainty and unpredictability, the more chaotic the system.
Text is also a system that has entropy. Moreover, natural language texts have entropy significantly lower than the maximum entropy of the alphabet. In turn, a random set of characters has the maximum possible entropy in a given alphabet. The entropy index can be used for automatic recognition if the text is legal in the language when searching through various decryption options or when dictionary attack, as described by Jaglom and Jaglom (1973).
In addition, entropy can be used in the keyless recovery method of encrypted information. If we divide an encrypted message into discrete segments of a fixed length, the entropy value determines how many possible text recovery options there are for each such segment of the message. Since the number of existing texts in the language is significantly less than arbitrary (random) ones, this approach critically reduces the complexity of decryption compared to a brute force attack. A similar approach was used for passwords by Florencio and Herley (2007).
There are various methods for determining the entropy of a text or its individual segments, called n-grams. The most popular of them is the Shannon (1948) method. Using representation of the text by a Marcov chain of depth n, it is possible to approximately estimate the probabilities of n-grams. In this paper, we propose to use a dictionary-based method for determining the entropy of n-grams, whose ideas go back to the Kolmogorov (1993) approach. Moreover, we propose a theoretical method to estimating the coverage of the created n-gram dictionaries and a approach for correcting the accuracy of its volume.
We explore texts in English collected from various web pages, measured by n-gram language model, considering an extended alphabet that includes the simplest punctuation marks. The aim of the study is to evaluate the entropy of short-length n-grams based on the corpus and to extend the results obtained to long n-grams. Using the entropy data, we theoretically estimate the approximate number of long legal n-grams in the language for which an empirical estimate is impossible.
The structure of the paper is as follows: in Section 2, we describe the corpus of analyzed texts and the preprocessing of the corpus, and in Section 3, the methodology used for n-gram dictionaries and coverage, as well as the estimation of n-gram entropies. Section 4 presents and discusses the results of our analysis. Section 5 summarizes the main conclusions of this article.
Related works
The n-gram model is one of the most widely used model for natural language modeling. These n-grams are related to Marcov models that estimate next symbol from a fixed set of previous symbols. For n-grams its probabilities can be estimated by counting in a corpus and normalizing via the maximum likelihood estimate. If the numerical estimates for the n-gram model are determined based on the same corpus in which they appear, then such an estimate is considered intrinsic according to Jurafsky and Martin (2009).
There are various algorithms used to improve the accuracy of determining the ngram probabilities and smoothing the coverage. These algorithms are based on counting lower-order n-grams by backtracking or interpolation.
The problem is that any empirical textual material is limited and a priori does not include all the existing n-grams of the language (such n-grams are called out-ofvocabulary). Then the coverage is associated with an estimate of the percentage of such elements, that is, the OOV rate. In problems of speech recognition and machine translation, the problem of out-of-vocabulary elements is often solved by using closed dictionaries, that is, the existence of OOV n-grams is ignored.
The problem of estimation and optimization of coverage is periodically considered in different subtasks. The problem of n-gram coverage often arises in machine learning The combinatorial analysis of n-gram dictionaries 3 and machine translation tasks. Methods for increasing the coverage of n-grams based on the alignment entropy is proposed in Poncelas et al. (2017), but this approach uses a parallel pair of text corpora and is not applicable to the self-assessment of the coverage of a single corpus. Rosenfeld (1995) ascertained that optimization of the coverage depends on the problems considered. First, the coverage depends on the text corpus volume that is used for compiling dictionaries. But as the corpus volume increases, this dependence becomes less pronounced, so data can be extrapolated for further volumes of dictionaries according to Bellegarda (2001). For example, for English, the growth of the dictionary volume slows down significantly when the corpus size is from 30 to 50 million words. Second, the optimal size of the corpus depends on the sources and novelty of the data according to Chase et al. (1994). In General, Rosenfeld (1995) states the corpus is considered saturated when the sharp growth of new words stops with an increase in the corpus volume. There is no metadata in the corpus, since it is not essential for further use of this corpus.
Marcov models are often used as an approximate models of natural language. As described by Cover and King (1978), the Marcov process is stationary, that is the probability distribution for n-grams at time t is the same as the probability distribution at time t + 1, but any natural language is not stationary, since the probability of upcoming n-grams can depend on events that were arbitrarily distant and time-dependent. Thus, these statistical models only give an approximation to the correct natural language distributions and entropies.
Despite the existence of other models, for example described by Chomsky (1956), many studies of natural languages, and in particular English, use the approximation of the text by the stationary Marcov process. For example, in the papers of Calin (2020), Hahn andSivley (2011), Yadav et al. (2010), Guerrero (2009) the Marcov process is used to simulate a natural language text. Since the accuracy of approximating a natural language text using the Marcov model decreases significantly with an increase in the order of n-gram, related studies mainly investigate the entropy of short-length texts. For instance, the research of Guerrero (2009) explores models of n-grams only to order 15. Therefore, there is some gap in research related to the long-order n-gram entropy. Kolmogorov (1993) proposed an alternative combinatorial approach to the study of the entropy of language. Such a purely combinatorial approach evaluates the flexibility of the language, that is, it gives an estimate of the number of text continuations with a fixed dictionary and phrase construction rules. This method tends to overestimate the real values of the language entropy, since any meaningful texts in natural language are subject not only to grammatical rules, but also have some content constraints. Nevertheless, there are areas of research, such as White (1967) paper, in which a certain vocabulary is fixed initially. Then, for such closed dictionary systems, the combinatorial approach can give fairly accurate entropy estimates.
Regarding the study of the marginal entropy of the English language, at different times there are a number of studies that consider different approaches to the assessment of the entropy of English, correcting and clarifying previously obtained estimates. Initially, Shannon (1951) estimated the entropy of printed English between 0.6 and 1.3 bits per character. Then Brown et al. (1992) gave an estimate of the upper bound of printed texts in English equal to 1.75 bits per character, considering 128 ASCII characters. Next Teahan and Cleary (1996) estimated the average entropy of English texts at 1.46 bits per character. The entropy of the English language is 1.77 bits per character relative to Kontoyiannis (1997). Teahan and Cleary (1996) estimated the entropy of the English language from 0.94 to 1.72 bits per character, considering 32 characters of the alphabet. Calin (2020) estimated the entropy of modern English as 1.37 bits per character.
Corpus description
The corpus we analyze is based on text samples from the iWeb corpus of English language presented by Davies (2018) and contains about 100 million characters collected from web pages. Web corpora allow us to research many linguistic changes and reflections with minimal time lag. Unlike other large corpora from the web, the iWeb corpus was created in a systematic way and includes specific websites.
Based on the text corpus collected and in accordance with the n-gram language model, we create n-gram dictionaries for further research.
Data Preprocessing
To increase the coverage and relevance of n-gram dictionaries, we restrict the size of the alphabet. Therefore, the corpus created goes through a filtering process. In addition, we delete errors and typos from the text to minimize the probability that type II errors will appear: we assume that only legal texts which excite in the English language are represented in the n-gram dictionaries.
Thus, the alphabet power of our corpus is 29 characters. We consider it as a simple extension of the Latin alphabet including punctuation.
The number of n-grams extracted from the corpus is shown in Table 1. Within the n-gram model of the language, we generate the dictionaries. The dictionary is a set of n-grams arranged alphabetically, without repetition. We consider an n-gram as a sequence of n characters. The n-grams are selected from the text with chaining: for the next n-gram, we shift to the right by one character. An example of this process is shown in Figure 1. The process of a dictionary creation consists of extracting all n-grams from the corpus in the list, deleting duplicate n-grams and sorting.
The dictionary volume is the number of unique n-grams that remain after removing duplicates in the corpus. We fix the dictionary volumes and the number of n-grams that occur in the corpus only once for various n.
In this paper, we consider 4 n-gram model orders: 10-grams, 15-grams, 20-grams and 25-grams. We choose a step between the model orders of 5 symbols for more accurate extrapolation model construction. For small model orders, there are some statistics, as opposed to model orders over 10. It is quite difficult to count n-grams with an order greater than 25 − 30. Since the study of the n-grams is conducted on a limited-size corpus, the coverage of n-grams decreases with increasing n.
Therefore, we generate the n-gram dictionaries of 10, 15, 20, and 25 characters. We process about 100 million n-grams. Different lengths of n-grams help us study how the characteristics of dictionaries change with increasing length of the text segment.
The compiled n-gram dictionaries form the basis of our methodology for calculating the entropy values. We assume that the created dictionaries are a tool for automatically distinguishing legal n-grams that exist in the language and random n-grams that are impossible for the language.
Coverage and dictionary resizing
Coverage is the ratio of the volume of the constructed dictionary to the total number of different s-grams that exist in the language.
Technically, the coverage of the dictionary can be estimated by ratio of dictionary volume (number of n-grams in dictionary) and the number of all legal n-grams in the selected language. The problem with using this approach is that the exact number of all legal n-grams of the language is unknown, especially for large n-grams. For an asymptotic estimate of the number of legal texts of fixed length, one could use the model of Shannon (1948), but the accuracy of this model is still not fully understood. In addition, such an approach would require relatively accurate estimates of the entropy of n-grams, and such results are still very few and the accuracy of the entropy estimate strongly depends on the type of text being evaluated. Thus, it is necessary to search for alternative methods for evaluating the coverage of dictionaries.
Since dictionaries are compiled on the corpus of limited length, their coverage is incomplete. This means that not all possible existing n-grams of the language are included in this dictionary. That is, type I errors are possible, when a legal n-gram that is not present in the dictionary is discarded as random. This situation, for example, is possible when organizing a dictionary attack. Therefore, it is necessary to evaluate the coverage of the dictionaries created, that is, to estimate what proportion of possible n-grams of the language our dictionaries cover.
In this study, we propose a theoretical estimate of the coverage that is independent on empirical tests: where K s is the initial volume of n-gram dictionary, and k is the number of n-grams that occur in the corpus only once, τ is the theoretical coverage of n-gram dictionary. Therefore, the n-gram dictionary is a tool for distinguishing between two statistical hypotheses: -H 0 -n-gram is a legal text, -H 1 -n-gram is a random sequence of characters.
The probability of the type I error to take a legal n-gram for a random set of characters is determined by the dictionary coverage: The probability of the type II error to take a random set of characters for a legal n-gram is considered close to 0, since the proportion of forbidden n-grams that fall into the dictionary is negligibly small: β = P (H 0 |H 1 ) = 0.
If the coverage of the initial dictionary is low, then the empirical estimates derived from it may not be accurate enough. Therefore, it is necessary to recalculate the volume of the empirical dictionary to bring it closer to the real one. We propose the approach to resizing of dictionaries.
Let us obtain a dictionary of K n units with k n elements occurring once. Then 1− kn Kn is a fraction of repetitive elements in the dictionary, and (1 − kn Kn ) · K n is the number of duplicate elements in the initial dictionary. Since it is compiled on the corpus of limited volume, this dictionary does not have full coverage. Obviously, (1 − kn Kn ) · K n < K n . Based on the paper of Chase et al. (1994), it is known that up to a certain point, the number of new n-grams in the dictionary grows at a linear rate. To get a dictionary consisting of K repeated elements, we need to increase its volume by 1 1− kn Kn times.
The combinatorial analysis of n-gram dictionaries 7 Thus, a new dictionary with volumẽ contains about K n repeated elements. The new volume of the dictionary compensates for the lack of coverage of the original one.
Thus,K n is a theoretical estimate of volume for an n-gram dictionary which presumably covers most of all possible n-grams in a language.
Entropy of n-grams
We consider the theoretical estimate of the dictionaryK n as some approximation of the number of all possible n-grams in the language. This means that we consider all out of dictionary n-grams as random texts. Based on this assumption, we can estimate the entropy of n-grams.
Within the n-gram language model, the text is the implementation of independent tests, the results of which are the n-grams of the corresponding natural language. Then the entropy per sign of the text is estimated asĤ n n , whereĤ n is the entropy of a random source, where the outcomes are n-grams.
To avoid calculating n-gram probabilities, we propose a combinatorical approach for calculating entropy based on the dictionary volume. This idea is based on the Kolmogorov's combinatorial method and the Shannon's second theorem.
Let M (n) be the number of all possible n-grams in a language with an alphabet of power A. Since the number of all distinct n-grams in this alphabet is estimated as A n = 2 n·log 2 n > M (n) is greater than the number of legal n-grams in the language, then there is a value H n , such that M (n) = 2 n·Hn , where H n < log 2 A, as presented in the paper of Shannon (1948).
With the growth of n, the value H n tends to a certain limit. Let n → ∞, then the value: is the language entropy. The existence of this limit is strictly proved in the framework of the stationary ergodic model of a random source. The second theorem of Shannon (1948) gives an asymptotic estimate of the number of all possible n-grams: M (n) = 2 H·n , where H is entropy of the language.
Let the dictionary volumeK n be an approximation of the number of all possible n-grams in the language. Then the entropy of n-grams per character (bits/symbol) can be estimated as: For small orders of the n-gram model, while the value of H n still decreases with the growth of n, the entropy value of H n can be directly used to estimate the number of possible legal texts of length n symbols. Starting from some n, the value of H n stabilizes and no longer changes with increasing n. To estimate the number of legal n-grams of this length, the value of the limit H n is used. The methodology for finding the limit of the H n value through extrapolation is described in the Section 5.2.
Dictionary properties
In Figure 2, we present the results of the estimation of the n-gram dictionaries for short length texts. As said in Section 3.1, the shown values are the empirically obtained. In the diagram, we can see the size of n-gram dictionaries and the number of n-grams that occur in the corpus only once. Since the dictionary is a set of n-grams included in the text corpus, without taking into account duplicates, its size increases with the growth of n. The growth of the dictionary with an increase in n is explained by the fact that the greater the order of the n-gram model, the less these n-grams are repeated in the original corpus. For large orders of n, there are already very few repetitions in the set of n-grams: many n-grams enter the corpus only once. For example, initially for 10-grams, a set which was extracted from the corpus had a volume of 99999991 10-grams, for 25-grams -99999976. However, after deleting the duplicates in each of the sets, there are respectively 22855480 10grams and 85694340 25-grams in the dictionaries. The fact that the number of possible legal texts increases with its length is fully consistent with model of Shannon (1948) for estimating the number of legal texts in a language.
Based on the number of n-grams occurred only once, we can estimate the coverage of empirically obtained dictionaries. Since the coverage of the source dictionaries is insufficient, it is necessary to recalculate the volumes of the n-gram dictionaries using the methodology proposed in Section 3.1. The coverage values and volumes of new dictionaries are shown in Table 2. As expected, the new dictionaries correspond to a more complete coverage and are used in subsequent stages of the study.
To investigate how the volume of dictionaries changes depending on the corpus size, we have built an interpolation function. We have graphically represented the dependence of the dictionary size on the corpus size and noticed that it is similar to a square root function. Then we have constructed the interpolation function using Wolfram Mathematica and a non-linear fit form. In Figure 3, we show this interpolation model for 10-grams. We can see that the growth rate of dictionaries is below the linear dependence is the closest to the square root function.
Equation 6 describes the interpolation function for 10-grams.
−1.14697 · 10 6 + 1230.21 √ x For other values of n, the interpolation function remains the same with a slight change in the coefficients. Using this model, we can predict some subsequent values. For example, for a corpus of 300 million characters, we expect a dictionary of 20 million 10-grams, and for a corpus of 700 million characters, we could expect a dictionary of 30 million 10-grams.
Distribution of n-gram entropy
It is well-known the amount of information transmitted by a single n-gram increases with the length of the segment. To determine the average amount of information per character, i.e. the specific information content of the source, we need to divide this number by n. With unlimited growth, the approximate equality will turn into the exact one. The result is an asymptotic relation.
Using the approach presented in Section 3.2, we have determined the entropy of short-length texts based on the volume of the original empirical dictionary K and the theoretical oneK. In Figure 4, we can see that the specific entropy of the source (text) decreases with increasing length of the n-gram. The entropy per symbol means that it takes H n bits of information to determine the n + 1 character of the text. The more information we know, the less uncertainty there is about the next character in the text. This fact explains the decreasing nature of the specific entropy function.
However, as the text length increases, the rate of entropy decrease slows down. For example, the difference between H 10 and H 15 is only about 0.63 bits. It means that if we know the first 10 characters of a substring of length 15, there is little uncertainty about the remaining 5 characters.
It is important to note that we have considered the alphabet extended, so the resulting n-gram entropy values differ from the known values for English.
With the growth of n, the value of H n decreases to some n and with further growth almost does not change, that is, it reaches a certain limit, called the entropy of the language. However, our n-gram model is based on a finite corpus of text samples, so estimating the entropy rate for large values of n gives implausibly low information rates. In other words, as the value of n increases, the experimental estimates of entropy per symbol tend to 0. Indeed, as the model order increases, the number of n-grams samples decreases, so that for very large values of n, knowledge of the first n − 1 letters of the text uniquely identifies the text in question, that is, the n-letter is pre-determined.
Extrapolating these results to large values of n is difficult, because the shape of this sequence of values is generally unknown, except that it is positively decreasing. To obtain the ultimate entropy from this set of measurements, we construct a model of sequential estimates.
We have assumed that the sequence of entropy values obeys a linear recurrence relation: with initial conditions F 0 = H 10 , F 1 = H 15 and F 2 = H 15 . The coefficient k for the model is determined numerically in accordance with the experimentally obtained entropy values for segments of small length.
In this case, the value of k, which gives the best approximation, is k ≈ 0.62. By increasing the value of n, a sequence of heuristic estimates of H n is constructed, the experimental evaluation of which becomes difficult for a large length of a text segment. Starting from n = 50, the values of H n are stabilized and no longer change with the length of the segment.
In Figure 5, we present the extrapolation results of entropy per character for the initial and theoretical dictionariesK.
This extrapolation model is heuristic, but it is sufficient to solve the problems set in the paper. As we can see on the graph, the limit entropy rate is 0.8 bits per character for the theoretical dictionary. Therefore, we can estimate the entropy H n as the limit entropy for a long text.
Number of legal texts
For various studies, such as cipher systems, the number of possible legal texts of fixed length plays an important role. The number of all distinct n-grams in the alphabet of power A is estimated as A n . However, among this set, many n-grams are invalid for the selected language. As described in Section 4.3, the number of legal n-grams of large orders can be estimated as 2 H·n , where H is the entropy of the language introduced in Section 4.3 and found in Section 5.2.
Using the entropy values obtained and the extrapolation model constructed, we have estimated the number of legal texts among all texts of fixed length. The results are shown in Table 3.
Therefore, the relative part of legal texts among all distinct texts of fixed length n can be estimated as: where H is the language entropy, A is the language alphabet, n is the text length. 1.2 · 10 24 0.7 · 10 −122 300 1.8 · 10 72 0.3 · 10 −366 500 2.6 · 10 120 1.7 · 10 −611 1000 6.7 · 10 240 2.7 · 10 −1222 The last finding is the proportion of legal n-grams among the total number of ngrams. In other words, it is the probability of finding a legal text with a random sample from among all possible n-grams. As expected, this probability decreases with increasing text length. As the text length increases, the probability of finding a legal n-gram critically decreases. This confirms the previously stated hypothesis that to recover individual parts of the encrypted text, it is worth considering n-grams of short length.
Conclusion
In this paper, we have estimated the n-gram entropies of natural language texts and examined the number of legal possible texts in English. Most of the previous studies on the n-grams entropy did not take into account punctuation, so the values obtained in this paper have eliminated this gap. We have found that the empirical method of generating dictionaries can lead to significant type I errors in estimating the number of existing n-grams due to low coverage. We have eliminated this drawback by offering a method for refining the theoretical volume.
The entropy of the text per character decreases positively with the growth of the n-gram length. This can be explained by the fact that as the length of the known text increases, the uncertainty of the next character decreases. However, starting from a certain n, the entropy values almost do not change, reaching a certain plateau, called the entropy of the language. By extrapolating the data with a linear recurrent sequence, we have heuristically determined the limiting entropy of our corpus, which is 0.8 bits per character.
The found limit value of entropy allowed us to estimate the number of legal longlength n-grams, so it is almost impossible to do empirically. The probability of finding a legal text among all possible sets of n-grams for large n is catastrophically small. This result confirmed our assumption that it is advisable to use short n-grams to recover information using the information-theoretic approach. Figure 1 The process of trigram selection with chaining.
Figure 2
Dictionaries of n-grams for shortlength texts. Entropy per character for short length texts.
Figure 5
Extrapolated entropy rate values. | 6,448.4 | 2021-06-07T00:00:00.000 | [
"Computer Science"
] |
Two-Higgs-Doublet Models with a Flavored Z 2
Two Higgs-doublet models usually consider an ad-hoc Z2 discrete symmetry to avoid flavor changing neutral currents. We consider a new class of two Higgs-doublet models where Z2 is enlarged to the symmetry group FoZ2, i.e. an inner semi-direct product of a discrete symmetry group F and Z2. In such a scenario the symmetry constrains the Yukawa interactions but goes unnoticed by the scalar sector. In the most minimal scenario, Z3 o Z2 = D3, flavor changing neutral currents mediated by scalars are absent at tree and one-loop level, while at the same time predictions to quark and lepton mixing are obtained, namely a trivial CKM matrix and a PMNS matrix (upon introduction of three heavy right-handed neutrinos) containing maximal atmospheric mixing. Small extensions allow to fully reproduce mixing parameters, including cobimaximal mixing in the lepton sector (maximal atmospheric mixing and a maximal CP phase).
I. INTRODUCTION
The discovery of a Higgs boson with a mass of m h 125 GeV has opened the door to the possibility of having in Nature multiple fundamental scalars. In principle, nothing forbids their proliferation. Nonetheless, the amount of parameters dramatically increases, both in the Yukawa and scalar sector. Here we consider a simple extension to the standard model (SM) by only introducing a second Higgs doublet (2HDM) with quantum numbers identical to the SM Higgs, and three right-handed neutrinos to generate active neutrino masses. Furthermore, we mainly focus on the problem of fermion mixing by first adopting the common 2HDM framework with natural flavor conservation (NFC) [1,2], achieved through a Z 2 reflection symmetry. Then, we add flavor to it via the enlargement of the symmetry group in a very particular manner, F Z 2 . This denotes an inner semi-direct product of a discrete symmetry group F and a Z 2 symmetry. The non-Abelian nature of the enlarged symmetry group then strongly reduces the number of Yukawa couplings, thus providing a more predictive theory. Moreover, the adhoc nature of the Z 2 is explained as a part of a larger group 1 .
To understand the need for the Z 2 symmetry we briefly sketch its impact. In a general setup, one may immediately write the Yukawa Lagrangian for a given fermion (1) *<EMAIL_ADDRESS>†<EMAIL_ADDRESS>‡<EMAIL_ADDRESS>1 There are other possibilities to explain the ad-hoc Z 2 , for instance by linking it to the remnant symmetry of a spontaneously broken U (1), see e.g. [3][4][5] where ψ R or ψ L are three-dimensional vectors in flavor space each denoting a weak singlet or a weak doublet, respectively, and ψ representing any of the four fermion types, ψ = q u , q d , , ν. Notice that the Higgs doublets must be replaced by their charge-conjugate fields, Φ k = iσ 2 Φ * k , for the up-type quark and neutrino cases 2 . If the neutral components of both scalar doublets acquire a vacuum expectation value (VEV), Φ 0 1 = v 1 and Φ 0 2 = v 2 , both Yukawa matrices contribute to the fermion masses and mixing. It is clear that diagonalization of the mass matrix, cannot mean, in general, diagonalization of the individual Yukawa matrices. This brings about dangerous tree-level flavor-changing-neutral-currents (FCNC). To avoid them it is sufficient to assume NFC by introducing a Z 2 symmetry and by assigning a single scalar doublet for a given fermion species such that only one of the two Yukawa matrices contributes to the mass matrix. This is, the scalar fields transform under the discrete symmetry such that while the left-handed quarks and leptons transform trivially and the right-handed parts transform appropriately. The different assignment possibilities lead to four nonequivalent types of 2HDMs 3 : • Type I: All charged fermions couple to Φ 2 .
• Type II: q d and couple to Φ 1 and q u to Φ 2 .
• Type X: q u and q d couple to Φ 2 and to Φ 1 .
• Type Y: q u and couple to Φ 2 and q d to Φ 1 .
Other possibilities are the Type III which is the general 2HDM with all couplings permitted and the inert doublet model where Φ 2 couples to all fermions while Φ 1 has no VEV thus leaving unbroken the Z 2 symmetry and providing a viable dark matter candidate. Although other approaches, such as Yukawa-alignment [6] or singularalignment [7], may also avoid tree-level FCNC, here we only focus on those 2HDM employing the discrete symmetry Z 2 . Note that the non-equivalence nature between the four types comes from the fact that each framework ends up having different effective Yukawa couplings of the fermions to the various scalar particles; for a thorough discussion on various phenomenological and theoretical aspects of 2HDMs see Ref. [8].
On the other hand, a general feature shared by the four different types (I, II, X, and Y) is the Z 2 -invariant scalar potential given by The hermiticity condition of the potential leaves λ 5 as the only complex coefficient while the rest, m 2 11 , m 2 22 , and λ 1,2,3,4 , are real. There are in total eight real parameters. However, not all of them are physical. A phase redefinition can make λ 5 real and only seven parameters are physical. Note that our potential has explicitly become CP -symmetric.
No matter the amount of Higgs doublets one employs, the full mass matrix for any given fermion is parametrized by nine complex parameters. The initial arbitrariness may then be reduced via weak-basis transformations (unitary transformations leaving invariant the kinetic terms), but not enough to claim predictivity. In the mass basis, for either quarks or leptons, one has six fermion masses and four (six for Majorana neutrinos) mixing parameters, plus arbitrary Yukawa couplings. The flavor sector thus gives to the SM and its extensions (without symmetries) the highest amount of arbitrariness. It is only when symmetries are introduced that the initial arbitrariness can be drastically reduced.
Here we intend to explore the effect of symmetries in the flavor sector such that we find correlations among the quark and lepton mixing parameters.
The paper is organized as follows: in Sec. II, we discuss the meaning of adding flavor to Z 2 . Next, in Sec. III, we provide the most minimal scenario realizing the features of our approach. Also we highlight the main differences when compared to the four types of 2HDMs. Thereafter, in Sec. IV, we take the incompleteness of fermion mixing in our simple model as a hint to the presence of additional new physics and introduce a flavor doublet of real scalar gauge singlets. Finally, in Sec. V, we conclude. Some technical details are delegated to appendices.
II. ADDING FLAVOR TO Z2
We are interested in those finite symmetry groups, G, which can be written as an inner semi-direct product of an arbitrary group F and Z 2 , There are in fact many examples of such groups (see Ref. [9] for more details): The main property of this kind of groups is that they contain two one-dimensional irreducible representations (denoted singlets), which behave exactly as if we only had a Z 2 symmetry. Thus, by assigning each Higgs doublet to one of these singlets, we are mimicking in the scalar sector any of the NFC models with a Z 2 symmetry. On the other hand, the non-Abelian nature of the symmetry only impacts the Yukawa interactions, thus providing a way to approach the problem of mixing while simultaneously tackling minimal scalar extensions to the SM.
An additional feature of this approach is the following. Since the number of Higgs doublets in a theory restricts the maximum group order of allowed symmetries ('realizable symmetries') that would otherwise imply massless Goldstone bosons [10], then by implementing symmetry groups as here proposed we avoid these constrictions.
Let us take as a first example the Klein group given by Z 2 Z 2 . It is the smallest possibility within this approach. It has four elements and four irreducible representations (irreps): 1 ++ , 1 +− , 1 −+ , and 1 −− . However, as it is still an Abelian group its effect on the Yukawa couplings is only of reduction but not of relation. For example, we could assign the Higgs doublets as Φ 1 ∼ 1 −− and Φ 2 ∼ 1 ++ while the third, second, and first fermion families as 1 −+ , 1 +− , and 1 ++ , respectively. In return the mass matrix for Dirac fermions would take the generic form where Φ 0 1 = v −− and Φ 0 2 = v ++ . Therefore, although we have reduced the number of complex parameters from nine to five, we yet have no predictions except for the fact that we only expect mixing between the first two generations. Nevertheless, it demonstrates that the combination of the flavor-safe Z 2 with an additional group will simplify the Yukawa sector. Going to the non-Abelian case will result in predictive scenarios, and we will study a very minimal approach in what follows.
III. THE MINIMAL CASE: Z3 Z2
The smallest non-Abelian finite group has six elements and is denoted by D 3 ≡ Z 3 Z 2 . This dihedral group describes the symmetrical properties of an equilateral triangle 4 . It has three irreducible representations: two singlets 1 + , 1 − , and one doublet 2. The product rules can be found in Appendix A.
Although different assignments between the D 3 irreps and the fermion fields could be done, here we opt to consider whereas in the lepton sector, We are motivated to this choice, as we will see, because the dominant contributions to quark and lepton mixing are the Cabibbo and atmospheric angle, correspondingly.
Recall that the scalar sector should be assigned to The neutral component of both Higgs doublets acquires a VEV, spontaneously breaking the electroweak symmetry; we denote them as Note we are using the convention v 4 D 3 is isomorphic to S 3 , the group describing the permutations of three indistinguishable objects. with and where [ ] k = {1 + , 1 − , 2} represents one of the three possible outputs from the D 3 tensorial product. Also notice that we are now assuming Majorana neutrinos by virtue of a standard seesaw.
In the quark sector, the resulting Yukawa matrices take the form while in the lepton sector we have , where all the parameters are real and positive and where we have taken {y u 1 , y d 1 , y t , y b , y e 1 , y e } ∈ + without loss of generality. All Dirac mass matrices satisfy where Ξ = Γ, ∆, Π, and Ω. Each mass matrix has three complex parameters and possesses the feature of being diagonalisable by the same transformation that brings to diagonal form its individual Yukawa matrices. It is this property that guarantee the absence of FCNC at tree level and it represents an explicit realization of the singular alignment ansatz [7]. Note how we end up, in the quark sector, with only eight real parameters, six of which correspond to the six quark masses while the other two, being complex phases, are forced to be nearly ±π/2 due to the phenomenological observation of hierarchical fermion masses. We return to this point later.
The effective Majorana neutrino mass matrix can be computed from the standard seesaw formula, M ν = −M ν M −1 R M T ν , and is found to be diagonal: Here , which is a consequence of the D 3 flavor symmetry. The mass matrix has a mass degeneracy between the two neutrino states, ν L,2 and ν L,3 , while, since it is diagonal, it does not contribute to the mixing.
Towards studying the phenomenology of this scenario we note that complex matrices of the form are brought to diagonal shape via a maximal bi-unitary transformation that is, with γ 1 = arg(a ∓ ib) and γ 2 = arg(a ± ib) implying real and positive masses. The choice of the signs will depend on the ordering of the masses. The singular values of such a matrix m are given by where ρ = arg(a) − arg(b). Moreover, note that if the parameters a and b are taken to be real (ρ = 0) then the masses would be degenerate. In particular, if ρ is in the first quadrant then ρ ∈ [ArcSin The transformations ρ → ±ρ + π and ρ → −ρ will lead to the same masses as ρ. Additionally, when the masses are hierarchical, m 2 m 1 , the allowed interval for ρ shrinks , π/2], essentially implying that ρ ±π/2. We have chosen the off-diagonal Yukawas to be real and positive without loss of generality. Therefore the complex phase of the diagonal Yukawas is found to be γ f ±π/2.
With these results in mind and looking at the form of the mass matrices of the charged fermions shown in Eqs. (14)- (16) we can extract the masses and mixing pa-rameters: with the Majorana neutrino masses as given in Eq. (17). The quark Yukawa couplings can now be generically fixed to (defining An alternative solution exists when one exchanges y f 1 ↔ y f 2 . Similarly for the charged leptons, and again it is possible to exchange y e 1 ↔ y e 2 . Turning to fermion mixing, we can parametrize the relevant diagonalization matrices in terms of the complex rotation matrices U ij (θ, φ), which are defined as and similarly for U 13 and U 23 . Then, the mixing matrices for the up and down quarks and for the charged leptons are simply given by We obtain for the Cabibbo-Kobayashi-Maskawa (CKM) and Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrices where one of the signs of the PMNS matrix is realized when Eq. (24) applies and the opposite when y e 1 ↔ y e 2 . This is, by enlarging Z 2 to Z 3 Z 2 , we are now able to predict trivial mixing in the quark sector and a maximal atmospheric mixing angle in the lepton sector. There is also a maximal CP violation phase, which is unphysical if the angles θ 12 and θ 13 remain 0, but it will become important later. These features have to be understood as the dominant characteristics of this model at leading order. Its incompleteness points to further investigation on how the model should be extended, see Sec. IV.
A. FCNC
There are no tree level FCNC since all Yukawa matrices are simultaneously diagonalisable. However, at the one-loop level quantum corrections could induce misalignment in the different Yukawa matrices and generate FCNC. To check this effect we employ the formulas obtained for a theory with N −Higgs doublets [11] and given in Appendix B. It is straightforward to see that for our particular model in all cases the one-loop renormalization-group-equations may only give place to flavor-conserving terms 5 where µ is the renormalization scale and Ξ = Γ, ∆, Π, and Ω. More details can be found in Appendix B 6 .
B. Nonuniversal charged fermion-scalar couplings
In order to find the couplings between the charged fermions and the Higgs scalars we need to move both of them to their mass basis. In our case, only the latter are still in the symmetry adapted basis. We first introduce their notation Since the scalar potential is CP -symmetric there are states with definite CP -odd and CP -even quantum numbers. This allows one to write two independent mass matrices where λ 345 = λ 3 + λ 4 + λ 5 , and (33) 5 As we are only interested in finding flavor-violating structures, we have not considered the quantum corrections to the VEVs. 6 Due to the fact that we are employing the standard seesaw, FCNC with heavy sterile neutrinos are sufficiently suppressed and are not discussed here.
The first case can be brought to diagonal form by means of the orthogonal transformation with tan 2α = 2v 1 v 2 λ 345 /(v 2 1 λ 1 − v 2 2 λ 2 ), while the second one by Here the latter angle of rotation satisfies tan β = v 2 /v 1 and G 0 is the neutral pseudo-Goldstone boson to be 'eaten' by the Z mass. Similarly, one has for the charged scalars a mass matrix diagonalised by the same rotation as for the CP -odd neutral scalars, The Yukawa Lagrangian related to the interactions to the neutral scalars is while the one related to the interactions to the charged scalar is with P L,R = (1 ∓ γ 5 )/2. In order to compare this expression to that appearing in conventional 2HDMs we have assumed, for the moment, massless neutrinos. We find that an important distinction between this framework with typical 2HDMs with NFC (see Table I) is that fermion couplings become nonuniversal, see Table II. Furthermore, those fermions which initially talk to both Higgs doublets (Φ 1,2 ) have the following couplings f ± (α, β, y 1 , y 2 ) = y 2 c α ∓ y 1 s α y 2 s β ± y 1 c β , g ± (α, β, y 1 , y 2 ) = y 2 s α ± y 1 c α y 2 s β ± y 1 c β .
Note that cancellations can occur, which could make f ± or g ± vanish. The observed hierarchy in the fermion For small or large tan β both relations reduce to f − ≈ m2 m1 f + and g − ≈ m2 m1 g + ; meaning that the fermion with a lighter mass (m 1 < m 2 ) has an O(10−100) enhancement in its coupling to the scalars compared to the heavier one. Moreover, for α → β − π/2 all couplings to the 125 GeV scalar state, h, including the new functions f ± , are automatically made SM-like, i.e. ξ h qu,q d , → 1, while the other couplings end up only depending on tan β. A further implication of the alignment limit is that the coupling of the CP -even state H with the W and Z bosons becomes null.
The resulting couplings have been grouped into different sets corresponding to similar characteristics in Table II. This also holds for couplings which depend on the Yukawa parameters (and therefore, to the different fermion masses), like ξ H f = g + (α, β, y f 1 , y f 2 ) for f = τ, s, c. As they have the same functional dependence they are grouped under the category ξ H τ,s,c . In general, conventional 2HDMs with NFC have a moderate behaviour for moderate values of tan β. Their main differences appear in the small (or large) tan β limits. For example, take the couplings to the charged scalar, H + . In the type-II scenario, its coupling to tb is large (philic) at large tan β, whereas in the same limit, it is always small (phobic) for the type-I case. In contrast to this typical situation, the Z 3 Z 2 model shows already at moderate values of tan β either phobic or philic behaviour, see Figs. 1-3. Also it can be seen that for a given value of β a given fermion may completely decou- ple from one of the four scalars and accidentally become inert to that scalar.
IV. COMPLETING MIXING AS A GUIDE FOR NEW PHYSICS
While possessing attractive features, the minimal Z 3 Z 2 model presented so far does not fully reproduce the fermion mixing and masses. We take this 'incomplete mixing' as a hint pointing towards new physics. In the quark sector, the vanishing mixing points to the introduction of a dim > 4 operator that generates small corrections. In a similar fashion, the Majorana nature of neutrinos could allow dim-4 operators and therefore large contributions to mixing. The simplest possibility is obtained by introducing a real singlet scalar field, which is assumed to transform under D 3 as a doublet, This field acquires a VEV Note that by introducing η and its non-renormalizable interactions we have allowed at tree level the appear- ance of FCNC. We may assume a large mass and later decouple it from the theory. While perturbing 2HDMs is typically done to explain anomalies [12,13], here we need it to complete fermion mixing. Note however that our approach uses an explicit model, i.e. the symmetry and field content of our model determines the type of Yukawa matrices to be added. At last, notice that integrating out the singlet scalar means that our theory has become a 2HDM of Type III. We will later demonstrate that the model can be easily made flavor-safe. An explicit numerical example will be provided in Sec. IV D.
A. Quark mixing
In the quark sector, the non-renormalizable dim-5 operators leading to a correct CKM matrix requires a complete UV formulation to be realized. As a simple example that serves as a plausibility argument, consider the following dim-5 effective interactions, invariant under the SM gauge group and the flavor symmetry: These contributions give rise to small corrections in quark mixing through perturbations to the down quark mass matrix of the form which should be enough to perturb our initial identity matrix and reproduce the CKM mixing. These effective operators can be realized in an UV-complete model just by adding a vector-like pair of coloured particles with the same gauge quantum numbers as the right-handed down quarks.
In the basis where M u and M d are diagonal, the perturbation matrix becomes Recall that here we still have trivial quark mixing. In order to obtain a realistic mixing scenario, the perturbations need to be sufficiently small compared to the bottom quark mass but large enough compared to the down and strange quark masses. This implies that the Yukawa parameters y d 1,2 are no longer completely satisfying Eq. (23). Through a qualitative analysis we find that for where λ 0.225, it is possible to fully reproduce quark mixing without introducing unacceptably large amounts of flavor violation at tree level. The (3, 1) and (3, 2) matrix elements could be taken as zero or of the same order that their transpose counterparts. On the other hand, all entries are given up to O(1) complex factors. It is interesting to note that Eq. (47) shows an approximate U (2) flavor symmetry for the first two generations, m b m d,s (analogously for the up-type quarks). The above resulting mass matrix is a similar realization of the 'flavorful' 2HDMs investigated in Ref. [14] wherein Yukawa couplings, for all charged fermions, are chosen as to approximately preserve a U (2) 5 flavor symmetry acting on the first two generations.
Alternatively, we could have introduced perturbations through the up-type quarks; however, to reproduce the CKM mixing would have required a larger modification of the initial Yukawa parameters, |y u 2 | and y u 1 , by at least one order of magnitude. This may be easily appreciated by considering that a perturbation to the 1 − 2 sector of the size √ m 1 m 2 is enough in the down quark sector, √ m d m s ∼ 10 MeV, to generate Cabibbo mixing, while for the up-type quarks it would still require an additional order of magnitude, O(10) √ m u m c ∼ 100 MeV, plus some extra tuning in the Yukawa parameters to get the correct light quark masses, m u and m c .
B. Lepton mixing
In the lepton sector, the dominant perturbation contributions come through the right-handed neutrinos, producing where we have defined r = ω 2 /ω 1 and δ N i = g N i ω 1 . The charged lepton contribution remains untouched by the addition of the scalar η and is given by Eq. (28). Once we consider the contributions to the mass matrix of the right-handed neutrinos shown in Eq. (49) the initial lepton mixing given by Eq. (29) gets modified. If the Yukawa couplings appearing in the neutrino mass matrix are taken real then we have where O ij (θ) is the usual rotation matrix in the (i, j) plane. It can be shown that U 23 (π/4, ±π/2)O 23 (θ ν 23 ) = P · U 23 (π/4, ∓π/2) , (51) where P is a diagonal unitary matrix which is unphysical. Therefore, if the neutrino sector is real we obtain cobimaximal mixing [15] with θ 23 = π/4 and δ CP = ±π/2 in the lepton sector. While the sign of δ CP is not fixed, data seems to favor the negative option [16]. Note that this is a particular case of the general theorem derived in Ref. [17], i.e. if cobimaximal mixing is present in the charged lepton sector and the neutrino sector is real, then the full PMNS matrix is also cobimaximal. In particular, the full lepton mixing parameters are given by θ 12 = θ ν 12 , θ 13 = θ ν 13 , θ 23 = π/4 , δ CP = ±π/2 , φ 12 = 0, π/2 = φ 13 , irrespective of θ ν 23 . That is, the large hierarchy between the charged lepton masses coupled with the assumption that the neutrino Yukawas are real leads to cobimaximal mixing i.e. maximal atmospheric mixing angle and δ CP = ±π/2. For the other two mixing angles θ 12 and θ 13 no predictions can be made, but the parameters can be chosen in such a way that they lie inside the experimental constraints. Moreover, the Majorana phases rel-evant for neutrinoless double beta decay maintain their CP conserving values.
We remark that of course there is no need to assume the neutrino sector to remain real, in the most general scenario with complex parameters there is enough freedom to fit all the mixing parameters. The assumption that the neutrino Yukawas are real, while the charged lepton Yukawas are forced to be complex due to hierarchical masses, may seem ad-hoc but can actually be justified in many different scenarios. For example in Ref. [18] the author derives a general loop mechanism in which the neutrino mass matrix is complex but diagonalized by a real orthogonal matrix. This same mechanism could be applied here by changing the type I seesaw neutrino mass generation by an inverted loop seesaw mediated by three real scalars. Then, the cobimaximal nature of the PMNS would remain. Another option would be to explicitly impose a remnant CP symmetry in the neutrino sector.
It is worth to note that our scenario is minimal and quite simple, yet, it leads to such a restricted scenario. The SM symmetry group is extended by just D 3 while the particle content is enlarged by an extra Higgs gauge doublet and an D 3 doublet η which is a gauge singlet.
C. The scalar potential
The most general scalar potential invariant under Z 3 Z 2 is with the first term given in Eq. (4) and Additionally, the fact that the heavy quark masses are simply given by m t y t v 2 and m b y b v 1 naturally points to having order one Yukawas and hierarchical VEVs in the range meaning that tan β ∈ (10, 100). To create such a hierarchy while maintaining all scalar masses around the electroweak scale we need to consider m 2 22 < 0, m 2 11 > 0, and introduce the soft-breaking term where m 12 ∼ O(10) GeV. By assuming |m 11 |, |m 22 | ∼ 100 GeV, a straightforward calculation then leads to The smallness of v 1 is thus natural as one recovers a larger symmetry when setting it to zero. The minimization conditions read whereλ = λ η1 + λ η2 . The latter two conditions can only be met if The general expressions for the squared mass matrices are given in Appendix C.
In order to decouple η from the 2HDM we assume its mass (or VEV) to be large enough and ζ 1,2 → 0. Then, for the full potential, V +V soft , to be bounded from below we require the well-known relations while for the new contributions which all are sufficient and necessary conditions.
D. Numerical example
In the following, we give a numerical example of how the perturbations brought by the addition of η modify our initial 2HDM setup. We assign a best-fit value to our set of complex parameters { 1 , 2 , 3 , 4 , 5 , 6 } by virtue of a χ 2 fit to the three down quark masses and four quark mixing parameters where the value of the masses is taken at the Z boson mass scale, M Z , using the RunDec package [19] as shown in the most recent global fit from the PDG [20]. As a proof of principle, we consider a minimal scenario with the least number of parameters. We assume all of them real except for 4 which we consider it as purely imaginary and set 5,6 = 0. Also we allow for small variations in the initial down quark Yukawa couplings appearing in Eq. (22).
The following best-fit values, reproduce the down quark masses and the observed CKM mixing at the 1σ level with a quality of fit of χ 2 d.o.f. = 0.49.
Besides their role in mixing, the introduction of perturbations has also brought FCNC at tree level. We now show how the size of the contributions is still sufficiently small. Through the best-fit values we calculate the unitary transformations for the left-and right-handed fields. With them the corresponding down quark Yukawa matrices, in the mass basis, are where we have assumed v 1 ∼ 10 m b and v 2 ∼ m t to estimate the upper bounds and which are all consistent with those presented in Refs. [14,21]. There are in fact three different scenarios from which Eq. (66) represents one of them. As all the independent perturbations defined in Eq. (45) originate from both Higgs doublets, Φ 1 and Φ 2 , we can define three different benchmark scenarios as follows: all the perturbations come from i) Φ 1 , ii) Φ 2 , or iii) both. Our choice in Eq. (66) depicts the first case. We left for future work a detailed study of the differences between this approach and the conventional 2HDMs.
V. CONCLUSIONS
We have considered a new class of 2HDM where the conventional Z 2 symmetry, by which FCNC can be naturally avoided, has been enlarged to F Z 2 such that symmetry constrains the Yukawa sector but goes unnoticed by the scalar sector. In particular, we have shown that the minimal case with Z 3 Z 2 is able to provide trivial quark and maximal atmospheric mixing at leading order. A further implication to this class of models is that couplings between the fermions to the scalars are nonuniversal, compared to the conventional types where couplings are universal. At last, we have taken the incompleteness of fermion mixing as a hint pointing towards new physics. To this end we have included two real scalar gauge singlets which transform as a flavor doublet, and are later integrated out by assuming them to be properly heavy. We have shown that quark mixing can be set in agreement with the latest global fits while the lepton mixing can become cobimaximal, i.e. maximal atmospheric mixing and maximal CP violation. We have treated the introduction of the real scalars as a new way of adding perturbations to 2HDMs in a systematic manner by demanding them to be invariant under the flavor symmetry. In general, these additions have the effect of breaking flavor conservation and tree level FCNC, mediated by the neutral scalars, are induced. However, the size of the contributions remains sufficiently small thanks to the approximate presence of a U (2) 3 global flavor symmetry in the light quark sector. | 7,530.6 | 2019-11-15T00:00:00.000 | [
"Physics"
] |
Quantitative analysis of regional distribution of tau pathology with 11C-PBB3-PET in a clinical setting
Purpose The recent developments of tau-positron emission tomography (tau-PET) enable in vivo assessment of neuropathological tau aggregates. Among the tau-specific tracers, the application of 11C-pyridinyl-butadienyl-benzothiazole 3 (11C-PBB3) in PET shows high sensitivity to Alzheimer disease (AD)-related tau deposition. The current study investigates the regional tau load in patients within the AD continuum, biomarker-negative individuals (BN) and patients with suspected non-AD pathophysiology (SNAP) using 11C-PBB3-PET. Materials and methods A total of 23 memory clinic outpatients with recent decline of episodic memory were examined using 11C-PBB3-PET. Pittsburg compound B (11C-PIB) PET was available for 17, 18F-flurodeoxyglucose (18F-FDG) PET for 16, and cerebrospinal fluid (CSF) protein levels for 11 patients. CSF biomarkers were considered abnormal based on Aβ42 (< 600 ng/L) and t-tau (> 450 ng/L). The PET biomarkers were classified as positive or negative using statistical parametric mapping (SPM) analysis and visual assessment. Using the amyloid/tau/neurodegeneration (A/T/N) scheme, patients were grouped as within the AD continuum, SNAP, and BN based on amyloid and neurodegeneration status. The 11C-PBB3 load detected by PET was compared among the groups using both atlas-based and voxel-wise analyses. Results Seven patients were identified as within the AD continuum, 10 SNAP and 6 BN. In voxel-wise analysis, significantly higher 11C-PBB3 binding was observed in the AD continuum group compared to the BN patients in the cingulate gyrus, tempo-parieto-occipital junction and frontal lobe. Compared to the SNAP group, patients within the AD continuum had a considerably increased 11C-PBB3 uptake in the posterior cingulate cortex. There was no significant difference between SNAP and BN groups. The atlas-based analysis supported the outcome of the voxel-wise quantification analysis. Conclusion Our results suggest that 11C-PBB3-PET can effectively analyze regional tau load and has the potential to differentiate patients in the AD continuum group from the BN and SNAP group.
Introduction
In 2018, the National Institute on Aging and Alzheimer's Association (NIA-AA) has updated the definition of AD by focusing on biomarkers associated with the pathological processes of Alzheimer's and excluding the clinical symptoms as diagnostic criteria [1]. The biomarkers that are closely correlated with the hallmarks of AD are amyloid-beta (Aß) and tau. However, the role of neurodegeneration or neuronal injury biomarkers in predicting cognitive decline is also undeniable. The NIA-AA framework therefore suggests the A/T/N biomarker classification scheme in AD and brain aging research, where "A" refers to biomarkers of Aß, "T" stands for biomarkers of tau pathology, and "N" refers to biomarkers of neurodegeneration or neuronal injury [2].
Numerous studies have highlighted the importance of Aß biomarkers [3][4][5][6] as well as the combination of Aß and neurodegeneration biomarkers in the pathogenesis of AD [7][8][9]. More recently due to the introduction of PET ligands for pathologic tau, the investigation of the role of tau pathology has also attracted considerable interest. In terms of regional distributions, Aβ is spread diffusely throughout the neocortex, while tau spreads more selectively across the temporal lobe, association cortices, and finally primary sensorimotor cortices, as summarized in the Braak stage scheme of progressive tau pathology [10,11]. This progression of tau is closely associated with disease stage and cognitive performance [11].
Several PET-tracers have been developed over the past few years to target tau [12][13][14][15]. Among them, the highly affine and specific 11 C-PBB3 may have the potential to be used in visualizing intracellular tau aggregates [16,17]. However, little is yet known regarding the diagnostic value of 11 C-PBB3-PET in a routine setting and on an individual patient level.
Clinical studies using a limited number of patients indicated sensitive detection of tau pathology by 11 C-PBB3 in patients with AD, with evidence of association between 11 C-PBB3 uptake and disease progression [18,19]. The 11 C-PBB3 distribution among cognitively normal and AD groups could mirror the pathological staging [20]. It was reported that in contrast to a relatively low 11 C-PIB uptake in the hippocampus as a cortical association area in AD, 11 C-PBB3 provided a robust signal in this region [18]. A head-to-head comparison of different tau tracers demonstrated that 11 C-PBB3 is more sensitive to tau aggregations that are correlated with amyloid-beta deposits [21]. Moreover, for 11 C-PBB3 binding to tau aggregates without evidence for positive amyloid-beta detection has been demonstrated [18,22].
In this preliminary study, we aim to apply the A/T/N biomarker classification scheme to a population of neurological patients and compare the regional tau deposition by 11 C-PBB3-PET imaging between patients within the AD continuum, BN individuals and patients with SNAP.
Study population
A total of 23 patients (Male: 12; Female: 11; mean age: 66.0 ± 6.6 y; range: 52-75 y) with probable neurodegenerative dementia, who underwent an 11 C-PBB3-PET imaging session, was pooled from the population database of the Neurology Center in the Ulm University Hospital, Germany. For all patients included in the study, biomarker data on amyloid-beta ( 11 C-PIB--PET and/or CSF Aß 42 ), tau ( 11 C-PBB3-PET) and neurodegeneration ( 18 F-FDG-PET and/or CSF t-tau) were available. 11 C-PIB-PET was available for 17 patients, 18 F-FDG-PET for 16, magnetic resonance (MR) images for 13, and CSF studies for 11 patients. The study was conducted according to the international Declaration of Helsinki and with the national regulations (German Medicinal Products Act, AMG §13 2b). A written informed consent was obtained from all patients.
To identify potential hypometabolism on 18 F-FDG-PET images, a set of 102 18 F-FDG-PET images from cognitively normal individuals (Male: 69; Female: 81; mean age: 69.7 ± 3.7 y; range: 56-75 y) was selected from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (http://adni.loni.usc.edu/). The ADNI was launched in 2003 as a public-private partnership, led by Principal Investigator Michael W. Weiner, MD. The primary goal of ADNI has been to test whether serial magnetic resonance imaging (MRI), positron emission tomography (PET), other biological markers, and clinical and neuropsychological assessment can be combined to measure the progression of mild cognitive impairment (MCI) and early Alzheimer's disease (AD).
In addition, a set of 17 11 C-PIB-PET data and the corresponding MR images (Male: 7; Female: 10; mean age: 73.5 ± 8.7 y; range: 59-85 y), including 9 AD patients and 8 healthy subjects, were also obtained from the ADNI database to create 11 C-PIB-PET templates.
CSF biomarkers
The CSF samples were collected by lumbar puncture at the Ulm University Hospital, Department of Neurology. In brief, samples were centrifuged and stored at -80˚C according to local SOPs and the Aß 42 and t-tau CSF levels were determined.
Imaging biomarkers
2.3.1 Image acquisition. All PET scans were acquired on a Biograph 40 PET/CT scanner (Siemens Medical Solutions, Erlangen, Germany) and low-dose CT scans were used for attenuation correction. For tau-PET, patients were injected with 11 C-PBB3 of median 517 MBq (range: 186-925 MBq) and, after a 40 min uptake time period, a PET acquisition was performed for 20 min. For amyloid-PET, patients received a single intravenous bolus injection of median 487 MBq (range: 222-567 MBq) of 11 C-PIB, followed by a 20 min PET acquisition performed 40 min after injection. The MR images were acquired with a Prisma 3 T clinical scanner (Siemens Medical Solutions, Erlangen, Germany). T1-weighted images were obtained using a magnetization-prepared rapid acquisition gradient echo (MPRAGE) sequence with the following parameters: repetition time = 2300 ms, echo time = 2.03 ms, inversion time = 900 ms, flip ang1e = 9˚, 240 × 256 in plane matrix with a phase field of view of 0.94, 192 slices, and slice thickness of 1.0 mm.
2.3.2 Image processing. All PET images were analyzed with an in-house pipeline in the Matlab software (R2017a, MathWorks, Natick, Massachusetts, USA) that uses the Statistical Parametric Mapping software package (SPM12; www.fil.ion.ac.uk/spm).
Since not all patients had an MRI, there was a necessity for a PET-template-based preprocessing method. Various studies have shown that the spatial normalization using PET templates are highly effective for quantification of hypometabolism and amyloid deposition using PET [23][24][25]. The feasibility of a PET-based method for the quantification of 11 C-PBB3 tracer was also evaluated in our previous study [26].
For tau-PET, the 11 C-PBB3-PET images with available MR scans were co-registered with the corresponding MR images using the normalized mutual information maximization algorithm. The MR images were then aligned with the standard T1-template provided by SPM12 using the unified segmentation-normalization algorithm [27]. The obtained transformation matrices were applied to the corresponding 11 C-PBB3-PET images to normalize them into the Montreal Neurological Institute (MNI) space. Next, the PET images were scaled to the cerebellum and averaged for generation of a 11 C-PBB3-PET template ( Fig 1A). Subsequently, all 23 individual 11 C-PBB3-PET images were spatially normalized into the 11 C-PBB3 template using the 'old normalization' module of SPM12 [28]. A detailed description of the method can be found in [26].
Since images of amyloid-positive and -negative patients have different activity distribution patterns, adaptive template methods have been suggested for PET-based amyloid quantification [23]. Nine positive and eight negative 11 C-PIB-PET images with available MRI from the ADNI database were normalized into the MNI space according to the procedure described above (Fig 1A). Positive and negative images were then averaged to generate positive and negative templates, respectively. Every 11 C-PIB-PET patient image was non-rigidly normalized into both positive and negative templates using the 'old normalization' module of SPM12. The normalized cross-correlation (NCC) was calculated between the 11 C-PIB-templates and all Flowchart of the image processing procedures. a) PET images were co-registered with the corresponding MR images. The SPM unified segmentation algorithm was used to normalize MR images into the MNI space. The forward transformation matrices were applied to the PET images. Normalized PET scans were scaled and averaged to generate a PET-template. b) Each 11 C-PIB-PET patient image was normalized into both positive and negative PIB-templates. The normalized cross-correlation (NCC) was calculated between the PIB-templates and normalized 11 C-PIB-PET images. Normalized 11 C-PIB-image with higher NCC is selected for the rest of the study. https://doi.org/10.1371/journal.pone.0266906.g001 spatially normalized 11 C-PIB-PET images as follows [23]: where NCC z means the NCC on each axial slice (z), n stands for the number of pixels per slice, σ is the standard deviation, T and I represent the template and 11 C-PIB-PET images, respectively. The template with higher NCC was adopted ( Fig 1B).
The 18 F-FDG-PET images from the ADNI dataset included 6 frames of 5 min duration from 30 to 60 min post injection. The first frame of these images was comparable to the 18 F-FDG-PET images used in this study, with an acquisition time of 7 min. Therefore, only first frame of the ADNI 18 F-FDG-PET images was used for the voxel-wise SPM analysis. In addition, the ADNI images were filtered with a scanner-specific filter function to produce images of a common resolution of 8 mm FWHM, the approximate resolution of the lowest resolution scanners used in ADNI. The effective spatial resolution in our brain 18 F-FDG-PET scans after iterative reconstruction using a 5 mm Gaussian filter was also approximately 8 mm FWHM. For normalization of the 18 F-FDG-PET images, the dementia-specific FDG-PET template developed by Della Rosa et al. was used [29,30]. This template was built by averaging the 18 F-FDG-PET images of 50 healthy controls and 50 patients with dementia (http://inlab.ibfm. cnr.it/inlab/PET_template.php). All normalized 18 F-FDG-PET scans were then smoothed with an isotropic Gaussian kernel of 8 mm FWHM for single-subject voxel-wise analysis, as suggested in [29,31,32].
CSF biomarkers.
The CSF biomarker profile was considered abnormal if the CSF Aβ 42 level was below 600 ng/L (A + ) and the CSF t-tau value was higher than 450 ng/L (N + ) [33].
2.4.2 Imaging biomarkers. All 11 C-PIB-PET and 18 F-FDG-PET images were evaluated with visual assessment by two experienced nuclear medicine physicians (P.B. and A.J.B.).
The Hammers grey-matter-masked probabilistic brain map was used to calculate regional PET values of the grey matter for each patient [34,35]. Median PET values in each volume of interest (VOI) were then divided by median uptake in cerebellar crus grey matter to create standardized uptake value ratios (SUVRs). To classify the 11 C-PIB-PET scans as positive/negative (A + /A -), the global PIB retention ratios were calculated from the volume-weighted average SUVRs of bilateral frontal, precuneus/posterior cingulate gyri, anterior cingulate gyri, superior parietal and lateral temporal VOIs. Using visually established amyloid positivity as the gold standard, a receiver operating characteristics (ROC) analysis was performed on the global SUVR values to determine the optimal threshold for classification of A + and A -. The cutoff point was computed from the ROC curve at the point with the largest Youden's index [36]. A leave-one-out-cross-validation (LOOCV) was applied to evaluate the accuracy of the cutoff point. 18 F-FDG-PET biomarker positivity (N + ) was defined using visual inspection combined with the optimized single-subject SPM analysis, as recommended by the common practice guideline for brain 18 F-FDG-PET in patients with dementing disorders [37]. The preprocessing steps for the normalized 18 F-FDG-PET images for optimal single-subject statistical analysis have been described elsewhere [29,31,32]. Each 18 F-FDG-PET patient image was evaluated with respect to the 102 healthy controls via the two sample t-test in SPM. All analyses were controlled for age and sex. Clusters of hypometabolism were considered significant when they were present in the typical VOIs, which are more susceptible to the neurodegenerative dementia, with a minimum extent of 100 voxels and surviving at p < 0.05 FWE corrected threshold at a voxel level. The hypometabolism pattern, obtained with single-subject SPM analysis, supports the visual inspection to classify the 18 F-FDG-PET images.
Group classification
Categorization into diagnostic groups was made based on the imaging or CSF biomarkers by applying the NIA-AA criteria [1]. The patients were classified into three groups using Amyloid (A) and neurodegeneration or neuronal injury biomarkers (N). Six patients were identified as BN (A -T � N -), ten SNAP (A -T � N + ) and seven within the AD continuum (A + T � N -[n = 2] or A + T � N + [n = 5]). The absent biomarker group in the classification process is labeled with an asterisk ( � ).
Among the seven patients within the AD continuum, three had a diagnosis of typical AD, three logopenic primary progressive aphasia (PPA) and one undetermined. Among the patients categorized as SNAP, four had non-fluent PPA, two semantic PPA, three corticobasal dementia (CBD) and one behavioral frontotemporal dementia (bv-FTD). Among the individuals identified as BN, one had progressive supranuclear palsy (PSP), one non-fluent PPA, one vascular Parkinson (VP) and three undetermined.
Statistical analysis
2.6.1 Voxel-wise analyses. Before group comparisons, a grey matter probability map from the Hammers probabilistic brain atlas was used to mask the 11 C-PBB3-PET images for grey matter. Then subjects within the AD continuum were compared with BN and SNAP groups using a voxel-wise two-tailed student's t-test, assuming independence and unequal variances. An explicit mask was used to restrict the analyses only to within-brain voxels. All 11 C-PBB3-PET images were intensity-normalized to the cerebellum as reference region. Due to a relatively small sample size of this study and to increase the sensitivity of the analysis, the threshold of p < 0.01 under uncorrected statistics at voxel level was applied. However, only clusters surviving at p < 0.05 (FWE corrected) and for cluster extent of k > 100 are reported.
Atlas-based analyses.
To evaluate whether the signal extracted from the predefined VOIs was different between patients within the AD continuum and two other groups, an atlasbased analysis was performed. The Hammers probabilistic brain atlas which contains 95 regions was combined into the following meta-VOIs, which are known to be associated with tau deposition in AD: the medial temporal lobe including the hippocampus, parahippocampal gyrus and amygdala; the temporal lobe including the inferior, middle, anterior, posterior and superior temporal gyri and fusiform; the frontal lobe including the inferior, middle, and superior frontal gyri, orbitofrontal gyrus, rectus and precentral gyrus; the occipital lobe including the lateral remainder of occipital cortex, lingual gyrus and cuneus; the parietal lobe including the superior parietal, postcentral, supramarginal and angular gyri; anterior cingulate cortex; posterior cingulate cortex and global cortical calculated by the volume-weighted average SUVRs of the above meta-VOIs.
Statistical analyses were performed using the R Statistical Software version 3.6.3 (the R Project for statistical computing, available at https://www.r-project.org/). Due to the limited number of patients, non-parametric tests were used for analysis. The SUVR values in the meta-VOIs were compared between groups using the non-parametric one-way analysis of variance (ANOVA) followed by Bonferroni post hoc test. A p-value < 0.05 was considered statistically significant. The effect sizes for the discrimination between groups were calculated using Cliff's Delta (delta), a non-parametric effect size measure which ranges between -1 and +1 [38]. An effect size of -1 or +1 shows a perfect separation between two groups, whereas an effect size of 0 indicates a complete overlap between groups. The magnitude of the effect sizes is assessed using the thresholds provided in [39], where |delta| < 0.33 indicates small effect sizes, 0.33 < | delta| < 0.47 represents medium effect sizes and |delta| > 0.47 large effect sizes.
Categorization of scans
The global SUVR cutoff value for amyloid positivity, that provided the highest Youden's index with a sensitivity and specificity of 100%, was 1.58. The leave-one-out cross validation resulted in a minor reduction of average classification accuracy to 94% (AUC: 0.99, sensitivity: 100%, specificity: 0.86%). In the semi-quantitative scan classification, 41% (7/17) of the 11 C-PIB-PET images were determined as amyloid-positive (A + ). By visual inspection, 75% (12/16) of the 18 F-FDG-PET images were defined as neurodegeneration-positive (N + ).
Demographics of the AD continuum, SNAP and BN patients are presented in Table 1. There were no significant differences in age and sex between groups. The cognitive performance tended to be lower in the AD continuum and SNAP groups compared to the BN group. CSF levels of Aβ 42 and t-tau were also recorded, when available. There was no significant difference in the recorded CSF levels between the groups. However, the statistical power may be limited due to the small sample size.
Voxel-wise analyses
The SPM analysis showed that patients within the AD continuum had significantly higher 11 C-PBB3 uptake than BN patients in the cingulate gyrus and temporo-parieto-occipital junction as well as in the frontal region (Fig 2 and Table 2).
Comparing the SUVRs between AD continuum and SNAP patients, AD continuum patients had a slightly increased 11
Atlas-based analyses
The atlas-based quantitative analysis of 11 C-PBB3-PET images revealed that the SUVR values of the temporal, frontal, parietal, occipital lobes and posterior cingulate cortex were
PLOS ONE
Quantitative analysis of tau pathology in human brain using 11C-PBB3-PET significantly higher in the AD continuum group than in the BN group (Table 3; p < 0.01 and delta � 0.95 for all). By regional analysis a significant increase of 11 C-PBB3-SUVRs in patients within the AD continuum as compared to the SNAP group in the posterior cingulate (Table 3; p = 0.04; delta = 0.72) was demonstrated. The SUVRs of the predefined meta-VOIs between SNAP and BN patient groups indicated no significant differences. The variability and overlap in the 11 C-PBB3-SUVR values from the predefined meta-VOIs for all three patient groups are presented in Fig 4. There was less overlap in the 11 C-PBB3 uptake between patients in the AD continuum and BN groups for all meta-VOIs except for the medial temporal region (Fig 4A; p = 0.5; delta = 0.48). In contrast, tau pathology in the SNAP group was similar to that of the BN group (p > 0.2 for all VOIs).
Discussion
To date, the results of only few clinical trials with 11 C-PBB3 tau tracer are available [18,19,22,40]. Therefore, this is an area where more research is needed to validate the diagnostic value of 11 C-PBB3-PET. In this study, the tau deposition using 11 C-PBB3-PET in the AD continuum, SNAP and BN patients was assessed. The quantitative analyses showed a higher global SUVR and SUVR in several cortical regions in patients within the AD continuum than in BN patients. Furthermore, the SUVR in the posterior cingulate was significantly higher in the AD continuum patients than in SNAP patients. The results indicate that 11 C-PBB3-PET is indeed a noninvasive biomarker for tau deposition.
The main strength of this study is to provide semi-automated techniques to analyse the PET data. The PET-based quantitative method was used to quantify tau-PET scans [26]. For Table 2. Voxel-wise comparing of the 11 C-PBB3 uptake between patients within the AD continuum (n = 7) and BN individuals (n = 6). Cerebellar crus grey matter was used as a reference region to calculate the SUVRs.
PLOS ONE
Quantitative analysis of tau pathology in human brain using 11C-PBB3-PET amyloid-PET quantification, the adaptive template method was utilized due to the different activity distribution patterns in amyloid-positive and -negative patients [23]. The optimized single-subject SPM approach was used to support the visual inspection of 18 F-FDG-PET images [30]. Visual assessment of PET scans is commonly used in many nuclear medicine facilities. However, the automated and semi-automated quantitative methods can significantly improve the detection and comparative assessment. Furthermore, the non-specific binding of radiotracers makes the detection of cerebral cortical binding challenging for the human eye. This process could be even more difficult for 11 C-PBB3-PET images due to the lower specific binding of the 11 C-PBB3 compared to other tau tracers [41]. Nevertheless, since the automated analysis of 18 F-FDG-PET is still a matter of debate [42], the visual assessment of 18 F-FDG-PET images was considered as the preferred method in this study. Comparing the regional 11 C-PBB3-SUVR values between patients within the AD continuum and BN, higher SUVRs were noted over the cingulate gyrus, temporo-parieto-occipital junction and frontal regions, which were similar to previous studies. Maruyama et al. reported that in the patients with AD, 11 C-PBB3 accumulation was most frequently observed in the Table 3. The median 11 C-PBB3-SUVR values of meta-VOIs with interquartile ranges (IQR) and Cliff's Delta effect sizes (delta) for the three cohorts. Patients within the AD continuum were compared with SNAP and BN patient groups using the non-parametric one-way ANOVA followed by Bonferroni post hoc test.
Region
SUVR ( limbic system and gradually spread into the temporal, parietal and frontal regions that correspond to Braak stages V-VI [18]. However, this study has included only a small number of patients (3 AD vs. 3 cognitively normal individuals). Kimura et al. evaluated the feasibility of kinetic model-based approaches to quantify tau binding using 11 C-PBB3-PET and blood data [40]. They found that the reference tissue and the dual-input model binding parameters discriminate effectively normal controls from patients with AD. Terada investigated the uptake of 11 C-PBB3 in participants with early AD [43]. He reported notable differences in tracer uptake in the temporo-parietal junction of AD patients compared to healthy controls. In our study, all BN participants were classified as Braak stage I/II, which will be explained below in more detail. Patients within AD continuum showed elevated tracer retention in regions corresponding to Braak stage III/IV. Although the patients in this study are mostly in the mild to moderate dementia category, the gradual spread of 11 C-PBB3 accumulation is clearly observed in the parietal and frontal lobes (Braak stage V/VI). Our results therefore add further evidence supporting the hypothesis that the 11 C-PBB3 tau ligand is able to discriminate cognitively normal patients from those within the AD continuum.
Patients within the AD continuum had a higher cortical 11 C-PBB3-SUVR than SNAP patients in various brain regions (Table 3). However, due to the small sample size and large standard deviation of the regional SUVR values in the SNAP group, no significant differences were found between two groups, except for the posterior cingulate area (Figs 3 and 4). The wide IQR of regional SUVRs in the SNAP patients can be explained by the heterogeneity of the dementia subtypes in this group. Moreover, different dementia subtypes have been associated with different pathological hallmarks, often showing AD co-pathology. Several studies have reported that the non-AD patients with AD co-pathology are more likely to be classified as AD [44][45][46]. This may explain the overlap of SNAP and AD continuum group in the current work.
The SNAP group was generally intermediate with regard to the distribution of 11 C-PBB3 uptake relative to the BN and AD continuum group. On an individual basis, two of ten SNAP patients (CBD [n = 1] and semantic PPA [n = 1]) showed high 11 C-PBB3 uptake in cortical regions that was compatible to AD patient uptakes. The other SNAP subjects had 11 C-PBB3-SUVR values in the same range as BN patients. Both voxel-wise and atlas-based analyses revealed no significant difference between the SNAP and the BN patient groups (Fig 4).
Although, there was an increasing tendency of 11 C-PBB3 uptake in the AD continuum group compared to the BN group in the medial temporal lobe, no significant differences for the 11 C-PBB3-SUVR values were found among the three patient groups (Fig 4A). This finding is compatible with previous studies demonstrating that neurofibrillary tangles around the medial temporal cortex are indistinguishable from those of AD in normal cognitive or SNAP elderly patients [11,19]. Recently, the new term "primary age-related tauopathy" (PART) has been proposed for such a pathological condition [47].
Both, atlas-based analysis and voxel-wise analysis were performed in this study. Taking the small sample sizes into account, the use of two methods led to more reliable results. The atlasbased approach was also investigated by a previous 11 C-PBB3-PET study [19]. In this method, the VOI's signal is typically computed by averaging over all voxel signals in a given VOI. However, the sub-region of the brain, showing statistically significant signals, does not necessarily include the whole voxels within the predefined VOIs. This average over all voxels can thus affect the effect sizes. Conversely, the voxel-wise analysis enables to detect significant signals anywhere between distinct VOIs in the whole brain. As shown in Tables 2 and 3, observed effect sizes based on the atlas-based approach are smaller than those of the voxel-wise method. Despite this observation, the voxel-wise quantitative analysis of 11 C-PBB3-PET images supported the outcome of the atlas-based analysis.
Although A/T/N biomarker classification scheme originally emerged as a research framework, applying A/T/N to our cohort of patients revealed a good but partial correspondence to the clinical diagnosis. Clinically AD-diagnosed patients (n = 3) and logopenic PPA (n = 3), which is typically associated with AD pathology, were in the AD continuum. Among SNAP patients, 3 out of 16 were identified as BN.
The main limitation of the current study is the lack of cognitively normal individuals. In addition, four spatiotemporal subtypes of tau pathology spread in AD has been recently proposed: limbic-predominant phenotype, parietal-dominant and medial temporal lobe (MTL)sparing phenotype, predominant posterior occipitotemporal phenotype and asymmetric temporoparietal phenotype [48]. Both, the heterogeneity in AD and lack of cognitively normal individuals could underestimate between-group differences (AD vs. BN and AD vs. SNAP), leading to false-negative results. However, they would not hamper the positive results presented in this study. In group classification, the use of CSF data in the absence of PET images may also be a limitation. Discordance between imaging and CSF biomarkers can cause different positive/negative labels for the same patient. In some situations, discordance in positive/ negative labels between an imaging and CSF biomarker is simply due to the borderline cases or non-optimal cutoff values. Excluding patients with a CSF value within ±10% of the cutoff value could reduce this limitation. In this study, there were no patients with CSF values within ±10% of the cut-off values. This supports the validity of combining PET and CSF data for amyloid and neurodegenerative biomarker groups. Moreover, the cutoff-calculation approach for amyloid positivity was data dependent and a larger sample size covering a wide spectrum of cases is needed to yield a more accurate result. However, the LOOCV indicated the stability of the calculated cutoff value in this dataset. | 6,521.2 | 2022-04-11T00:00:00.000 | [
"Biology"
] |
Anticancer Activity of Leaf Hydro Ethanolic Extract of Aegle marmelos in Human Lung Cancer Cell Mediated through Caspase-3 and Caspase-9 mRNA Expression
Background: Aegle marmelos (AE) is a medicinal plant that comes under the rutaceae family and the plant was used in the past for treating many diseases and illness symptoms. The plant has many effects such as anti-diarrhoeal, antimicrobial, antiviral, radioprotective, anticancer, chemopreventive, antipyretic, ulcer healing, antigenotoxic, diuretic, antifertility and antiinflammatory properties. Aim: To know the anticancer activity of hydroethanolic leaf extract of Aegle marmelos over lung cancer cells treated with caspase 3 and caspase 9 mRNA expression. Materials and Methods: The required chemicals were collected mainly from Canada. The lung Original Research Article Sukanth et al.; JPRI, 33(58B): 336-343, 2021; Article no.JPRI.74348 337 cancer cells (A549) were collected from NCCS pune and then RNA was extracted from the cells and then the study was conducted after treating it with caspase 3 and caspase 9 mRNA expression. The cells were treated with many dosage of hydroethanolic extract of Aegle marmelos and the cell viability was noted. Results: The study reported that extract of Aegle marmelos has a great anticancer activity about 1 fold change over rate of 1.7 for cells treated with caspase 3 and a fold change over of 1 in caspase 9 treated lung cancer cells. Conclusion: The study concluded an innovative finding that the hydroethanolic leaf extract of Aegle marmelos has a great anticancer activity against lung cancer cells treated with caspase 3 and caspase 9 mRNA expression.
INTRODUCTION
Cancer is one of the deadliest diseases that is more prevalent over the growing world and the death due to cancer increases each and every day and there are many solutions available for this deadly disease but as we get cured from the cancer we get affected by the side effects of the treatments [1]. There should be some alternative for these side effects causing medicines and the natural answers for this question [2].
Nature never fails to fascinate us with its power, and nature gives us everything we need, in the same way we get the solution for cancer from nature itself [3,4]. In nature we have many medicinal herbs and trees that we did not discover until now and if we do that we would get almost solutions for all the problems we face in the present days [5,6]. One such medicinal herb is Aegle marmelos commonly known as bael tree, contains many medicinal effects such as antidiarrheal [7], antimicrobial, antiviral, radioprotective, chemopreventive, antipyretic, ulcer healing, antigenotoxic, diuretic, antifertility and anti-inflammatory properties [8].
This old herb was used in olden days by our ancestors to cure many diseases [9]. Now interestingly this plant is found to have anticancer properties also and if this property of the plant is used wisely we could create a potential anticancer drug [9,10]. The aim of this study is to know the anticancer of Aegle marmelos over lung cancer cells treated with caspase 3 and caspase 9 mRNA expression [11,12].
Aegle marmelos was used in the countryside part of India as dried pulp in summer drinks as it helps in overcoming sunstroke. The Bale leaves are also used in the preparations of salads. Bale fruit absorbs toxins produced by bacteria and other pathogens in the intestine and hence helps in treating dysentery [13]. The bale leaves are also used in ayurvedic medicine to treat loss of appetite. Our team has extensive knowledge and research experience that has translate into high quality publications [14][15][16][17][18][19][20][21][22][23].
The oil extracted from its fruits and leaves are used to cure respiratory disorders. As it had an anti-inflammatory effect the fruit was used for the cure of tuberculosis [14].
Cell Viability by MTT Assay
Cell viability was assayed with a modified colorimetric technique that is based on the power of live cells to convert MTT, a tetrazolium compound into purple formazan crystals by mitochondrial reductases (Mosmann, 1983). Briefly, the cells (1 ×10 4 /well) were exposed to different concentrations of Aegle marmelos extract (100-500µg/ml) with A549 cells for48 h. At the end of the treatment, 100 µl of 0.5 mg/ml MTT solution was added to each well and incubated at 37 •C for an hour. Then the formed crystals were dissolved in dimethyl sulfoxide (100 µl) and incubated in the dark for an hour. Then the intensity of the color developed was assayed with a Micro ELISA plate reader at 570 nm. The number of viable cells was expressed in the form of percentage of control cells cultured in serumfree medium. Cell viability over the control medium with no treatment was represented as
Gene Expression Analysis by Real Time-PCR
Samples from each group were submerged in 2 ml Trizol (Invitrogen, Carlsbad, CA, USA) for RNA extraction and stored at −80°C until further processed. cDNA synthesis was performed on 2 μg RNA in a 10 μl sample volume using Superscript II reverse transcriptase (Invitrogen) as recommended by the manufacturer. Real-time PCR array analysis was performed in a total volume of 20 μl including 1 μl cDNA, 10 μl qPCR Master Mix 2x (Takara, USA) and 9 μl ddH 2 O. Reactions were run on an CFX96 Touch Real-Time PCR Detection System (Bio-Rad, USA) using universal thermal cycling parameters (95°C for 5 min, 40 cycles of 15 sec at 95°C, 15 sec at 60°C and 20 sec at 72°C; followed by a melting curve: 5 sec at 95°C, 60 sec at 60°C and continued melting). For the purpose of quality control, melting curves were acquired for all samples. The specificity of the amplification product was decided by melting curve analysis for every primer pair. The data were analyzed by comparative CT method and the fold change is calculated by 2−ΔΔCT method described bySchmittgen and Livak (2008) using CFX Manager Version 2.1 (Bio Rad, USA).
Cell lines and cell culture
Human Lung cancer cell line (A549) was purchased from the National Centre for Cell Sciences (NCCS), Pune, India. Cells were cultured in DMEM 1640 medium (Thermo Fisher Scientific, CA, USA) containing 10% fetal bovine serum (Thermo Fisher Scientific, CA, USA), 100 U/ml penicillin and 100 μg/ml streptomycin (Thermo Fisher Scientific, CA, USA) at 37°C with 5% CO 2 .
Statistical Analysis
The obtained data were analyzed statistically by one-way analysis of variance (ANOVA) and Duncan's multiple range test with a computerbased software (Graph Pad Prism version 5) to analyze the significance of individual variations among the control and experimental groups. The significance was considered at p<0.05 level in Duncan's test.
RESULTS
The cell viability of the lung cancer cells (A549) when treated with hydro ethanolic leaf extract of Aegle marmelos was discussed in Fig. 1. The cell viability showed a decrease with increase in concentration of the leaf extract of AE and at 400 and 500 microgram of leaf extract the cell viability reduced notably. The effect of Aegle marmelos on the caspase 3 mRNA treated lung cancer cells in the form of fold changeover was discussed in Fig. 2. It showed that as the expression is positive, there is an increase in the fold change as the cell viability decreases. The effect of Aegle marmelos on caspase 9 mRNA expression treated lung cancer cells in the form of fold over change was discussed in Fig. 3. It showed that as the expression is positive the fold change increases as the cell viability decreases.
DISCUSSION
From the results obtained within the limit of study it could be seen that the hydroethanolic leaf extract has anticancer activity over lung cancer cells(A549) treated with caspase 3 and caspase 9 mRNA expression [24,25]. Phytochemical studies shows that fruits and leaves of the medicinal herb Aegle marmelos contains many phytochemical compounds like flavonoids, tannins and carotenoids which are the main reason behind the anticancer and other medicinal properties of the Aegle marmelos [26] [27]. The anticancer activity is not only seen toward the lung cancer cells but also many other cells such as breast cancer cell lines (MCF7) and melanoma cancer cells [28] [29] The anticancer activity of the Aegle marmelos could be even seen in the Swiss albino mice [30][31][32][33].
The anticancer activity of the plant is mainly due to the free radical scavengers that occur in the phytochemical aspects [24,25,34]. The lung cancer cells at initial stage remain actively dividing and then when the hydroethanolic extract of the Aegle marmelos is added the cell viability starts to decrease slowly as the over proliferation of the cell is stopped by the plant extract [35]. The cell viability reduces to the most at the concentration of 400 and 500 micrograms of the hydroethanolic extract and this dosage was taken as the dosage for the conduction of the MTT assay [27,36].
The anticancer activity of the Aegle marmelos now is tested only with the in vitro samples for now but in future the hydroethanolic extract of Aegle marmelos could be tested with the in vivo samples and if positive results are acquired [35,37], it could be used as an effective anticancer drug [30][31][32].
The anticancer drugs used at present in chemotherapy even though give a great result, it also gives many side effects that could not be avoided [38,39]. This could be stopped by bringing in a nearly effective natural remedy that could solve the problems nearly as effectively as the present synthetic drugs but does not give any side effects [40,41]. The anticancer activity of the natural plants that are found could be used to solve this problem [42]. The study was time consuming and costly hence it was hard to conduct and complete the research [43].
CONCLUSION
From the results gathered from the study and experiment it is clear that the hydroethanolic extract of the Aegle marmelos has anticancer activity against the lung cancer cells (A549) mediated through caspase 3 and caspase 9 mRNA expression. This property could be used in future with further in vivo studies and experiments.
The present project is supported by · Saveetha Institute of Medical and Technical sciences · Saveetha Dental College and Hospitals, · Saveetha University and · Uma maheswari fire works.
CONSENT
It is not applicable.
ETHICAL APPROVAL
It is not applicable. | 2,361.8 | 2021-12-15T00:00:00.000 | [
"Medicine",
"Biology"
] |
Strain Hardening Exponent and Strain Rate Sensitivity Exponent of Cast AZ31B Magnesium Alloy
: The flow curves of as-cast AZ31B magnesium alloy during high temperature deformation were obtained with a thermal compression test, and the effects of deformation amount, grain size, strain rate, and deformation temperature on the flow stress, strain rate sensitivity index, and strain hardening index were analyzed. The results showed that deformation and grain size were negatively correlated with both the strain rate sensitivity index and strain hardening index. The increase in strain rate increased the strain hardening index but made the strain rate sensitivity index show an opposite trend. Increasing temperature reduced the strain rate sensitivity index and strain hardening index but, when the temperature exceeded 700 K, the strain rate sensitivity index was no longer affected by temperature. Since the strain rate sensitivity index m and strain hardening index n are important parameters for measuring the plastic deformation of metal materials, this study has great significance for guiding the selection of process parameters in the plastic processing of as-cast AZ31 magnesium alloy.
Introduction
With the increase in fuel costs, the demand for lightweight automobiles is growing more and more. Magnesium alloy is the lightest metal structural material at present. Compared with steel and aluminum alloys, magnesium alloy has the advantages of low density, high specific strength and specific stiffness, good thermal conductivity and damping, good electromagnetic shielding performance, easy cutting, and the capacity to be recycled. Based on these advantages, it has broad application prospects in the aerospace, transportation, electronic communication, and home appliance industries [1][2][3]. However, magnesium alloy has a hexagonal, close-packed crystal structure, which leads to a weaker start-up slip system and poor plastic deformation ability during deformation at room temperature, severely limiting its practical applications [4,5].
It is well-known that the strain rate sensitivity index m and strain hardening index n are important parameters for measuring the plastic deformation of metal materials. The strain rate sensitivity index m is the parameter for the material's tendency toward strengthening when the strain rate changes; the strain hardening exponent n is the parameter that describes the work hardening behavior of metal materials during deformation and reflects the ability of materials to resist plastic deformation. Therefore, it is of great significance to study the responses of the strain rate sensitivity index m and strain hardening index n of magnesium alloy to process parameters and microstructure changes during isothermal compression.
Wang et al. [6] studied the strain rate sensitivity and anisotropic behavior of a rareearth magnesium sheet alloy ZEK100 and found that the strain rate sensitivity of the ZEK100 sheet depended strongly on both the loading orientation and the strain amplitude. E. Karimi et al. [7] studied the instantaneous strain rate sensitivity of wrought AZ31 magnesium alloy and found that the instantaneous strain rate sensitivity of the AZ31 alloy was significantly affected by the strain rate and imposed strain. N. Sriraman et al. [8] determined the different stages of strain hardening exhibited by variously processed Mg-4Li-0.5Ca alloy test specimens and discovered that, after more plastic strain (dislocation density) was accumulated in the KAM [9] mapping, the AR350 alloy exhibited a higher strain hardening rate in the later stage.
Although research on the strain rate sensitivity index and strain hardening index has begun to increase in recent years, most studies are not systematic enough. This study took as-cast AZ31B magnesium alloy as the research object and analyzed the influence of different process variables on flow stress, calculated the strain rate sensitivity index m and strain hardening index n of as-cast AZ31B magnesium alloy under different forming parameters, and analyzed the influence of microstructure and process parameters on the two indexes. This has important theoretical significance and engineering value for improving the material properties of as-cast AZ31B magnesium alloy, making it possible to improve the process to optimize the microstructure.
Experimental Materials
The experimental raw material was a casting billet of AZ31B magnesium alloy produced by a company. The original microstructure is shown in Figure 1. It can be seen that the microstructure of as-cast AZ31B magnesium alloy is composed of irregular original coarse grains, the crystal boundary is clear, and fine recrystallized grains appeared at the grain boundaries of some large grains. The chemical constituents of the material are shown in Table 1. earth magnesium sheet alloy ZEK100 and found that the strain rate sensitivity of the ZEK100 sheet depended strongly on both the loading orientation and the strain amplitude. E. Karimi et al. [7] studied the instantaneous strain rate sensitivity of wrought AZ31 magnesium alloy and found that the instantaneous strain rate sensitivity of the AZ31 alloy was significantly affected by the strain rate and imposed strain. N. Sriraman et al. [8] determined the different stages of strain hardening exhibited by variously processed Mg-4Li-0.5Ca alloy test specimens and discovered that, after more plastic strain (dislocation density) was accumulated in the KAM [9] mapping, the AR350 alloy exhibited a higher strain hardening rate in the later stage.
Although research on the strain rate sensitivity index and strain hardening index has begun to increase in recent years, most studies are not systematic enough. This study took as-cast AZ31B magnesium alloy as the research object and analyzed the influence of different process variables on flow stress, calculated the strain rate sensitivity index m and strain hardening index n of as-cast AZ31B magnesium alloy under different forming parameters, and analyzed the influence of microstructure and process parameters on the two indexes. This has important theoretical significance and engineering value for improving the material properties of as-cast AZ31B magnesium alloy, making it possible to improve the process to optimize the microstructure.
Experimental Materials
The experimental raw material was a casting billet of AZ31B magnesium alloy produced by a company. The original microstructure is shown in Figure 1. It can be seen that the microstructure of as-cast AZ31B magnesium alloy is composed of irregular original coarse grains, the crystal boundary is clear, and fine recrystallized grains appeared at the grain boundaries of some large grains. The chemical constituents of the material are shown in Table 1.
Experimental Procedures
Generally, there are three methods used to study the thermal deformation behavior of materials: uniaxial tension, uniaxial compression, and torsion. Figure 2 shows the process for the compression simulation experiment. In this experiment, the uniaxial compression process of AZ31 magnesium alloy under a series of different temperatures and strain rates was simulated with a Gleeble−3800 compression testing machine from Fleur Instrument Technology (Shanghai) Co., Ltd., China. The flow stress data for the material during hot compression were automatically collected and recorded by the computer of the thermal simulator of the testing machine. The compressed specimen used in this study was a cylindrical standard test sample processed by WEDM (specification: Φ8 × 12 mm) from Suzhou AVIC Technology Equipment Co., Ltd., China. Lubricants (graphite and oil) were coated on the two ends of the specimen to reduce the friction between the punch and the sample and avoid uneven deformation. Since magnesium alloy has an HCP(hexagonal closepacked structure) structure, the formability is poor at room temperature and increases significantly with increasing temperature. It is known that most wrought magnesium alloys have good formability at high temperatures of 623~773 K ( [10,11]) and that the deformation of magnesium alloy is affected by stress state and alloy structure ( [12][13][14]). Therefore, the compression deformation temperature range was set to 623-773 K; the strain rates were 0.1 s −1 , 1 s −1 , and 10 s −1 ; and the compression amount was 60% After the experiment, the samples were cooled to room temperature in air; 400#, 800#, 1000#, 1500#, 2000#, and 3000# water-grinding sandpaper was used to grind the metallographic samples; and a mixed solution composed of 5 g picric acid + 5 g glacial acetic acid + 10 mL distilled water + 80 mL anhydrous ethanol was used to erode the polished metallographic samples. Then, the microstructures of the samples were observed with an optical microscope.
rates was simulated with a Gleeble−3800 compression testing machine from Fleur Instrument Technology (Shanghai) Co., Ltd., China. The flow stress data for the material during hot compression were automatically collected and recorded by the computer of the thermal simulator of the testing machine. The compressed specimen used in this study was a cylindrical standard test sample processed by WEDM (specification: Φ8 × 12 mm) from Suzhou AVIC Technology Equipment Co., Ltd., China. Lubricants (graphite and oil) were coated on the two ends of the specimen to reduce the friction between the punch and the sample and avoid uneven deformation. Since magnesium alloy has an HCP(hexagonal closepacked structure) structure, the formability is poor at room temperature and increases significantly with increasing temperature. It is known that most wrought magnesium alloys have good formability at high temperatures of 623~773 K ( [10,11]) and that the deformation of magnesium alloy is affected by stress state and alloy structure ( [12][13][14]). Therefore, the compression deformation temperature range was set to 623-773 K; the strain rates were 0.1 s −1 , 1 s −1 , and 10 s −1 ; and the compression amount was 60% After the experiment, the samples were cooled to room temperature in air; 400#, 800#, 1000#, 1500#, 2000#, and 3000# water-grinding sandpaper was used to grind the metallographic samples; and a mixed solution composed of 5 g picric acid + 5 g glacial acetic acid + 10mL distilled water + 80 mL anhydrous ethanol was used to erode the polished metallographic samples. Then, the microstructures of the samples were observed with an optical microscope.
Following this, in accordance with the flow stress data automatically recorded in the experiment, the change rule for the value of the strain rate sensitivity coefficient m with changes in temperature and strain rate and the change rule for the value of strain hardening coefficient n with changes in strain and temperature could be calculated. Figure 3 shows the typical true stress-strain curve of as-cast AZ31B magnesium alloy after hot compression, indicating the influence of different deformation conditions on the flow stress of AZ31B magnesium alloy under the condition of 60% deformation [15][16][17]. It can be seen that the change trend for the flow stress-strain curves of AZ31B magnesium alloy under different deformation conditions were similar; that is, in the initial stage of deformation, with the increase in strain, the flow stress increased rapidly and reached the peak stress, then decreased gradually, and, finally, tended toward stability. Researchers believe that the reasons for this phenomenon are as follows [18][19][20]. In the initial stage of deformation, strain hardening occupies a dominant position, so the flow stress increases with the increase of strain. However, with the deepening of deformation, the softening effect of dynamic recrystallization and dynamic recovery gradually increases, which Following this, in accordance with the flow stress data automatically recorded in the experiment, the change rule for the value of the strain rate sensitivity coefficient m with changes in temperature and strain rate and the change rule for the value of strain hardening coefficient n with changes in strain and temperature could be calculated. Figure 3 shows the typical true stress-strain curve of as-cast AZ31B magnesium alloy after hot compression, indicating the influence of different deformation conditions on the flow stress of AZ31B magnesium alloy under the condition of 60% deformation [15][16][17]. It can be seen that the change trend for the flow stress-strain curves of AZ31B magnesium alloy under different deformation conditions were similar; that is, in the initial stage of deformation, with the increase in strain, the flow stress increased rapidly and reached the peak stress, then decreased gradually, and, finally, tended toward stability. Researchers believe that the reasons for this phenomenon are as follows [18][19][20]. In the initial stage of deformation, strain hardening occupies a dominant position, so the flow stress increases with the increase of strain. However, with the deepening of deformation, the softening effect of dynamic recrystallization and dynamic recovery gradually increases, which gradually offsets the strain hardening effect, resulting in a slow decline in flow stress. When the dynamic recrystallization softening and strain hardening reach equilibrium, the flow stress acquires a stable state. In addition, the wavy stress-strain curves in Figure 3a,c should be derived from the electronics of the testing machine and did not affect the results discussed in this paper. gradually offsets the strain hardening effect, resulting in a slow decline in flow stress. When the dynamic recrystallization softening and strain hardening reach equilibrium, the flow stress acquires a stable state. In addition, the wavy stress-strain curves in Figure 3a,c should be derived from the electronics of the testing machine and did not affect the results discussed in this paper. We can intuitively and quickly discern from Figure 3 that the flow stress and its peak value for AZ31B magnesium alloy diminished gradually as the deformation temperature increased at a certain strain rate. The reason is that the essence of metal deformation is the process of fracturing and re-bonding of metal bonds. The higher the deformation temperature, the more kinetic energy the metal atoms obtain during deformation, so that the atoms are more likely to break away from the metal bonds to soften the metal materials; that is, the flow stress of the metal decreases, and the peak stress that needs to be overcome in the deformation decreases. At the same time, comparing Figure 3a-c, it can be seen that the flow stress of AZ31B magnesium alloy increased with the increase in the strain rate at the same deformation temperature. This was because, as the strain rate increases, the time required for the material to produce the same amount of deformation becomes shorter and shorter, so that the alloy does not have enough time to complete dynamic
Flow Behavior of the AZ31B Alloy
We can intuitively and quickly discern from Figure 3 that the flow stress and its peak value for AZ31B magnesium alloy diminished gradually as the deformation temperature increased at a certain strain rate. The reason is that the essence of metal deformation is the process of fracturing and re-bonding of metal bonds. The higher the deformation temperature, the more kinetic energy the metal atoms obtain during deformation, so that the atoms are more likely to break away from the metal bonds to soften the metal materials; that is, the flow stress of the metal decreases, and the peak stress that needs to be overcome in the deformation decreases. At the same time, comparing Figure 3a-c, it can be seen that the flow stress of AZ31B magnesium alloy increased with the increase in the strain rate at the same deformation temperature. This was because, as the strain rate increases, the time required for the material to produce the same amount of deformation becomes shorter and shorter, so that the alloy does not have enough time to complete dynamic recrystallization, which means that the softening effect is not obvious and the work hardening effect is more and more significant, finally causing the increase in flow stress [21].
In summary, the flow stress of as-cast AZ31B magnesium alloy was highly sensitive to deformation temperature, strain rate, and strain. Figure 4 displays the effects of deformation temperature and strain rate on flow stress under different strain conditions. On the whole, the flow stress of AZ31B magnesium alloy decreased gradually with the increase in the deforming temperature, which was due to the dynamic softening phenomenon in the material with the increase in deformation temperature, and the dynamic softening of the material was sufficient to offset its strain hardening during isothermal compression [22]. recrystallization, which means that the softening effect is not obvious and the work hardening effect is more and more significant, finally causing the increase in flow stress [21].
In summary, the flow stress of as-cast AZ31B magnesium alloy was highly sensitive to deformation temperature, strain rate, and strain. Figure 4 displays the effects of deformation temperature and strain rate on flow stress under different strain conditions. On the whole, the flow stress of AZ31B magnesium alloy decreased gradually with the increase in the deforming temperature, which was due to the dynamic softening phenomenon in the material with the increase in deformation temperature, and the dynamic softening of the material was sufficient to offset its strain hardening during isothermal compression [22]. The trend for the curve in Figure 4 shows that, under the same strain rate and deformation temperature, the flow stress of the alloy gradually decreased with the increase in the strain. This was due to the fact that, with the increase in strain, the original coarse grains in the microstructure of the alloy were completely crushed, and dynamic recrystallization occurred under the action of higher deformation temperature, which triggered the softening mechanism of recrystallization and led to the gradual decrease in the flow stress of the alloy. Meanwhile, we found that, under the same deformation conditions, as the strain rate increased, the flow stress of the alloy increased gradually. This was because the larger strain rate led to a shorter time being required for the material to complete the deformation, so the alloy did not have enough time to complete the nucleation and growth of dynamic recrystallization, resulting in the softening effect of dynamic recrystallization not being obvious. At this time, the strain hardening mechanism gradually occupied the The trend for the curve in Figure 4 shows that, under the same strain rate and deformation temperature, the flow stress of the alloy gradually decreased with the increase in the strain. This was due to the fact that, with the increase in strain, the original coarse grains in the microstructure of the alloy were completely crushed, and dynamic recrystallization occurred under the action of higher deformation temperature, which triggered the softening mechanism of recrystallization and led to the gradual decrease in the flow stress of the alloy. Meanwhile, we found that, under the same deformation conditions, as the strain rate increased, the flow stress of the alloy increased gradually. This was because the larger strain rate led to a shorter time being required for the material to complete the deformation, so the alloy did not have enough time to complete the nucleation and growth of dynamic recrystallization, resulting in the softening effect of dynamic recrystallization not being obvious. At this time, the strain hardening mechanism gradually occupied the dominant position, so the flow stress increased gradually with the increase in strain rate [23].
In addition, Figure 4 also shows that with the strain increase, the difference between the flow stress curves corresponding to the three strain rates in the figure became smaller and smaller, which was similar to the trend for the curve in the graph. This phenomenon shows that the increase in deformation led to the gradual dominance of recrystallization softening.
In general, the flow stress of the alloy gradually decreased with the increase in deformation temperature, and the higher the strain rate, the faster the flow stress decreased.
As can be seen in Figure 5, under the same deformation temperature, the flow stress of the alloy increased with the increase in strain rate, which was due to the strain hard-ening phenomenon gradually offsetting the softening phenomenon caused by the high temperature deformation and occupying the dominant role in the increase in the strain rate, increasing the flow stress.
In addition, Figure 4 also shows that with the strain increase, the difference between the flow stress curves corresponding to the three strain rates in the figure became smaller and smaller, which was similar to the trend for the curve in the graph. This phenomenon shows that the increase in deformation led to the gradual dominance of recrystallization softening.
In general, the flow stress of the alloy gradually decreased with the increase in deformation temperature, and the higher the strain rate, the faster the flow stress decreased.
As can be seen in Figure 5, under the same deformation temperature, the flow stress of the alloy increased with the increase in strain rate, which was due to the strain hardening phenomenon gradually offsetting the softening phenomenon caused by the high temperature deformation and occupying the dominant role in the increase in the strain rate, increasing the flow stress. In the case of low deformation temperature, there was a great difference in the flow stress values under different strains, while the difference gradually decreased with the increase in deformation temperature. The principle is as follows: with the increase in deformation temperature, atoms obtain more kinetic energy, making the dislocation movement easier and promoting the start of the dynamic softening mechanism in the material, In the case of low deformation temperature, there was a great difference in the flow stress values under different strains, while the difference gradually decreased with the increase in deformation temperature. The principle is as follows: with the increase in deformation temperature, atoms obtain more kinetic energy, making the dislocation movement easier and promoting the start of the dynamic softening mechanism in the material, which gradually occupies the dominant position, resulting in the decrease in flow stress [24].
Strain Rate Sensitivity Exponent
The strain rate sensitivity index m refers to the parameter for the sensitivity of the flow stress of the metal material to the strain rate when plastic deformation occurs; that is, the parameter for the strengthening tendency of the material when the strain rate increases. In this study, the strain rate sensitivity index was determined with the following expression [25]: where m is the strain rate sensitivity index; σ is the flow stress (MPa) measured in the compression simulation experiment; . ε is the strain rate (s −1 ); ε is the strain; and T is the deformation temperature (K). It can be seen from Equation (1) that, when the strain and temperature are constant, the value of m is related to the strain rate and stress, and the size is negatively correlated with the strain rate and positively correlated with the stress.
In accordance with the experimental data from the thermal compression simulation, the lnσ − ln . ε curves for strains of 0.2, 0.4, 0.6, and 0.8 were respectively fitted, and the corresponding mathematical expression was as follows: The values of their corresponding coefficients a1, a2, and a3 were obtained. The strain rate sensitivity index m was obtained by derivation of ln . ε on both sides of the above formula, as follows: The corresponding values of m were calculated with the above two formulas. Assuming that the temperature was the x-axis, ln was the y-axis, and m was the z-axis, contour map 6 was drawn in Origin software. The m contour map reflects the change in strain rate sensitivity index m with temperature and strain rate (see the figure below).
The strain rate sensitivity is usually used to determine the superplastic behavior and deformation mechanism of materials. Figure 6 shows the variation trend for the strain rate sensitivity index m with strain rate and deformation temperature under different strain conditions. Overall, with the increase in strain, the peak value of the strain rate sensitivity index decreased first and then increased. At the same time, the peak value of the strain rate sensitivity index of AZ31B magnesium alloy under different strain conditions almost appeared in the low strain rate region, up to 0.21. This indicates that, in the plastic deformation stage, low strain rate and high strain were conducive to improving the strain rate sensitivity index. The reason is the low strain rate stage involves a long deformation time, and the material has enough time for dynamic recrystallization nucleation and growth; at the same time, the softening and hardening mechanisms of the material are fully initiated, Overall, with the increase in strain, the peak value of the strain rate sensitivity index decreased first and then increased. At the same time, the peak value of the strain rate sensitivity index of AZ31B magnesium alloy under different strain conditions almost appeared Metals 2022, 12, 1942 8 of 13 in the low strain rate region, up to 0.21. This indicates that, in the plastic deformation stage, low strain rate and high strain were conducive to improving the strain rate sensitivity index. The reason is the low strain rate stage involves a long deformation time, and the material has enough time for dynamic recrystallization nucleation and growth; at the same time, the softening and hardening mechanisms of the material are fully initiated, which promotes the increase in strain rate sensitivity. This also provides some help and inspiration for the plastic processing of the material: in the plastic processing of magnesium alloy materials, the selection of the strain rate should be appropriate, especially for the selection of a high strain rate.
It can also be seen from Figure 6 that, when the strain value is constant, the strain rate sensitivity index m of AZ31B magnesium alloy gradually decreases with the increase in strain rate. This is because, as the strain rate increases, the time required for the same deformation of the metal materials becomes shorter and shorter, resulting in there being insufficient time for the metal materials to complete the nucleation and growth of dynamic recrystallization in the deformation process, leading to a gradual decrease in the strain rate sensitivity index. It was also found that, when the deformation temperature was greater than 700 K, it had little effect on the strain rate sensitivity index under the condition of constant strain rate. This phenomenon was more obvious when the strain was 0.8, and the contour line in the figure was almost parallel to the abscissa.
J. Luo et al. [26] pointed out that the alloy composition, grain size, and phase volume fraction also have some influence on the strain rate sensitivity index. Figure 7 shows the influence of different deformation conditions on the microstructure of AZ31B magnesium alloy during isothermal compression. J. Luo et al. [26] pointed out that the alloy composition, grain size, and phase volume fraction also have some influence on the strain rate sensitivity index. Figure 7 shows the influence of different deformation conditions on the microstructure of AZ31B magnesium alloy during isothermal compression. It can be seen from the figure that there are a large number of fine, recrystallized grains in Figure 7a,b, as well as large original grains with relatively large sizes, but the degree of dynamic recrystallization in Figure 7a is significantly higher. Therefore, the strain rate sensitivity index is higher when the deformation temperature is 623 K and the strain rate is 0.1 s −1 . A previous study [27] also found that the strain rate sensitivity index gradually decreased with the increase in grain size.
It can be seen from the comparison of Figure 7c,d that the strain rate sensitivity index gradually decreased with the increase in strain rate. This can be explained well by the change in the microstructure in Figure 7; that is, under the same deformation temperature, with the increase in the strain rate, the grain size of the AZ31B magnesium alloy gradually It can be seen from the figure that there are a large number of fine, recrystallized grains in Figure 7a,b, as well as large original grains with relatively large sizes, but the degree of dynamic recrystallization in Figure 7a is significantly higher. Therefore, the strain rate sensitivity index is higher when the deformation temperature is 623 K and the strain rate is Metals 2022, 12, 1942 9 of 13 0.1 s −1 . A previous study [27] also found that the strain rate sensitivity index gradually decreased with the increase in grain size.
It can be seen from the comparison of Figure 7c,d that the strain rate sensitivity index gradually decreased with the increase in strain rate. This can be explained well by the change in the microstructure in Figure 7; that is, under the same deformation temperature, with the increase in the strain rate, the grain size of the AZ31B magnesium alloy gradually increased, which led to the decrease in the strain rate sensitivity index.
Strain Hardening Exponent
The strain hardening index n reflects the ability of metal materials to resist uniform plastic deformation and is the performance index used to characterize the work hardening behavior of metal materials. H. P. Stüwe et al. [28] pointed out that the strain hardening exponent n is caused by the mutual balance between the strain hardening and softening mechanisms. The calculation formula for the strain hardening exponent n used in this study was as follows [29]: where n is the strain hardening index; σ is the flow stress (MPa); . ε is the strain rate (s −1 ); ε is the strain; and T is the deformation temperature (K). It can be seen from Equation (1) that, when the material deforms, the strain hardening index depends on its internal range (strain interval); that is, the variable is ∆ε. Under the same strain rate and temperature conditions, the larger the strain interval, the larger the denominator and the smaller the strain hardening index n. The results show that the strain hardening strength of AZ31B magnesium alloy decreased with the increase in strain; that is, with the deepening of deformation, AZ31B magnesium alloy gradually changed from work hardening to softening. Figure 8 shows the variation in the strain hardening index of AZ31B magnesium alloy with deformation conditions (deformation temperature and deformation amount) under different strain rates. The drawing method is similar to Figure 6.
It can be seen from the figure that, under different strain rates, the strain hardening exponent of AZ31B magnesium alloy was almost positive when the strain was small; that is, at the early stage of deformation, the work hardening effect of AZ31B magnesium alloy played a dominant role in the deformation process. This was because the dislocation density increased rapidly at the beginning of deformation and the distortion energy was also high, which meant that the alloy material was in the working hardening state, and the strain hardening index was positive. Moreover, the strain hardening index of AZ31B magnesium alloy decreased with the increase in strain, which indicated that, with the deepening of deformation, AZ31B magnesium alloy gradually changed from work hardening to softening. Figure 8 further shows that the peak value of the strain hardening index of AZ31B magnesium alloy occurs at low deformation and low temperature, and the peak value of the strain hardening index and the work hardening region gradually increased with the increase in the strain rate, up to 0.32. Combined with Figure 5, this shows that the flow stress in the alloy increased with the increase in the strain rate, which also fully reflected the change in the strain hardening index of the alloy with the strain rate. Figure 9 shows the effects of different deformation conditions on the microstructure of AZ31B magnesium alloy. It can be seen from the figure that the strain hardening index of AZ31B magnesium alloy gradually decreased with the increase in deformation temperature. This can be explained well by the change law for the microstructure. As shown in the figure, as the deformation temperature increased, the grain size of AZ31B magnesium alloy gradually increased; that is, the nucleation and growth of recrystallized grains involved a great number of dislocations and variable properties, resulting in a decrease in the n value. In addition, the strain hardening index at 773 K and a strain rate of 0.1 s −1 was smaller than that at 773 K and a strain rate of 1 s −1 . This was because, as the strain rate increased, the time required for the material to produce the same deformation decreased, resulting in insufficient time for the material to complete the nucleation and growth of dynamic recrystallization during deformation, which made the material harden. Therefore, the strain hardening index was also large when the strain rate was large.
This can also be explained by the change in the microstructure. It can be seen from Figure 9 that the grain size at the strain rate of 0.1 s −1 was significantly larger than that at the strain rate of 1 s −1 . Combined with the above description, under the condition shown in Figure 9c, the recrystallized grains grew significantly and the ability of metal materials to resist uniform plastic deformation decreased, which made the n value decrease at this time.
of AZ31B magnesium alloy. It can be seen from the figure that the strain hardening index of AZ31B magnesium alloy gradually decreased with the increase in deformation temperature. This can be explained well by the change law for the microstructure. As shown in the figure, as the deformation temperature increased, the grain size of AZ31B magnesium alloy gradually increased; that is, the nucleation and growth of recrystallized grains involved a great number of dislocations and variable properties, resulting in a decrease in the n value. In addition, the strain hardening index at 773 K and a strain rate of 0.1 s −1 was smaller than that at 773 K and a strain rate of 1 s −1 . This was because, as the strain rate increased, the time required for the material to produce the same deformation decreased, resulting in insufficient time for the material to complete the nucleation and growth of dynamic recrystallization during deformation, which made the material harden. Therefore, the strain hardening index was also large when the strain rate was large.
This can also be explained by the change in the microstructure. It can be seen from Figure 9 that the grain size at the strain rate of 0.1 s −1 was significantly larger than that at the strain rate of 1 s −1 . Combined with the above description, under the condition shown in Figure 9c, the recrystallized grains grew significantly and the ability of metal materials to resist uniform plastic deformation decreased, which made the n value decrease at this time.
Conclusions
The hot deformation behavior of as-cast AZ31B magnesium alloy was studied at 623~773 K; strain rates of 0.1 s −1 , 1 s −1 , 10 s −1 ; and 60% strain. It was found that the flow stress of as-cast AZ31B magnesium alloy was highly sensitive to deformation temperature, strain rate, and strain. The strain rate sensitivity index m and strain hardening index n of AZ31B magnesium alloy were studied in depth, as shown in the following conclusions: 1. The flow stress of as-cast AZ31B magnesium alloy decreased with the decrease in the strain rate, the increase in the strain, and the increase in the deformation temperature.
With the increase in the degree of deformation and the temperature of the magnesium alloy, recrystallization softening gradually occupied the dominant position, and the influence of the strain rate on the flow stress was gradually reduced; 2. The strain rate sensitivity index m was affected by the amount of strain, the strain rate, the deformation temperature, and the grain size as follows. Firstly, the strain rate sensitivity index m of AZ31B magnesium alloy gradually decreased with the increase in the strain rate and particle size. Secondly, the peak m of the strain rate sensitivity index appeared in the low strain rate region, decreased, and then increases with the increase in strain. In addition, the strain rate sensitivity index m decreased with increasing temperature until it was almost unaffected by temperature (above 700 K); 3. The strain hardening exponent n was affected by the strain and strain rate, the deformation temperature, and the grain size as follows. Firstly, at different strain rates, the positive strain hardening indexes of AZ31B magnesium alloy almost all appeared in the small strain region. In addition, the strain hardening index of AZ31B magnesium
Conclusions
The hot deformation behavior of as-cast AZ31B magnesium alloy was studied at 623~773 K; strain rates of 0.1 s −1 , 1 s −1 , 10 s −1 ; and 60% strain. It was found that the flow stress of as-cast AZ31B magnesium alloy was highly sensitive to deformation temperature, strain rate, and strain. The strain rate sensitivity index m and strain hardening index n of AZ31B magnesium alloy were studied in depth, as shown in the following conclusions: 1.
The flow stress of as-cast AZ31B magnesium alloy decreased with the decrease in the strain rate, the increase in the strain, and the increase in the deformation temperature. With the increase in the degree of deformation and the temperature of the magnesium alloy, recrystallization softening gradually occupied the dominant position, and the influence of the strain rate on the flow stress was gradually reduced; 2.
The strain rate sensitivity index m was affected by the amount of strain, the strain rate, the deformation temperature, and the grain size as follows. Firstly, the strain rate sensitivity index m of AZ31B magnesium alloy gradually decreased with the increase in the strain rate and particle size. Secondly, the peak m of the strain rate sensitivity index appeared in the low strain rate region, decreased, and then increases with the increase in strain. In addition, the strain rate sensitivity index m decreased with increasing temperature until it was almost unaffected by temperature (above 700 K); 3.
The strain hardening exponent n was affected by the strain and strain rate, the deformation temperature, and the grain size as follows. Firstly, at different strain rates, the positive strain hardening indexes of AZ31B magnesium alloy almost all appeared in the small strain region. In addition, the strain hardening index of AZ31B magnesium alloy decreased with the increase in the strain, temperature, and particle size but increased with the increase in the strain rate. Finally, the peak value of the strain hardening strength of AZ31B magnesium alloy gradually increased with the increase in the strain rate. | 8,762.6 | 2022-11-12T00:00:00.000 | [
"Materials Science"
] |
Meeting Highlights: Genome Informatics
We bring you the highlights of the second Joint Cold Spring Harbor Laboratory and Wellcome Trust ‘Genome Informatics’ Conference, organized by Ewan Birney, Suzanna Lewis and Lincoln Stein. There were sessions on in silico data discovery, comparative genomics, annotation pipelines, functional genomics and integrative biology. The conference included a keynote address by Sydney Brenner, who was awarded the 2002 Nobel Prize in Physiology or Medicine (jointly with John Sulston and H. Robert Horvitz) a month later.
In silico data discovery
In the first of two sessions on this topic, Naoya Hata (Cold Spring Harbor Laboratory, USA) spoke about motif searching for tissue specific promoters.The first step in the process is to determine the foreground (positive) dataset and the background (negative) dataset and then search for over-or under-represented n-mers (where n = 6-12) in foreground sequences with respect to the background.Their tool can also be used to look for binding sites of dimers, by looking for two sequences (allowing for incomplete conservation) separated by n nucleotides (n = 0-12).
They have accumulated data on 10 000 mouse promoters, from mouse Refseq and RIKEN cDNAs, and on 13 000 human promoters, from human Refseq and the database of transcription factor start sites (DBTSS website).Using data from a microarray study of the expression of ∼19 000 genes in 49 mouse tissues (Miki et al., 2001), they identified 9000 data points with corresponding promoter sequences.They then tested their tool by trying to build a liver specific promoter database (LSPD).
Their foreground set was the promoter regions (an ∼1000 bp stretch upstream of genes) of those genes showing a log ratio of expression of >3 in liver compared to other tissues and their background set was genes which had a log ratio of ∼0 in liver.Their approach found 17 of 17 known promoters with a specificity of 17/28.None of the sites they identified was located downstream of a TSS and all showed an excess in the foreground sample compared to the background sample.They have also looked at muscle specific promoters and promoters specific for bone and kidney.So far, they see very little overlap in motifs between tissues, except for liver and kidney, which have several motifs in common.They hope to use this to build a discriminator function for tissue type.
Klaus Hornischer (Biobase GMBH, Germany) presented a search for composite regulatory elements.Applying a transcription factor binding motif search to an entire mammalian genome finds many hits (as expected, many are only 6-mers).One would expect to find a number of sites and elements in front of a gene, but enhancers also contain multiple sites, so the clusters of motifs that they observe are not always near to transcription start sites.His group have performed an analysis of clusters of sites observed on human chromosome 21.They found that the clusters were often at the start of genes and commonly showed high GC.They then classified the composite elements by function: inducible, constitutive or tissue-restricted.Using this approach, they see cross-coupling of functions (pathways) and can roughly predict the function or tissue role of a gene (they have several cases that match data on known proteins).This gives leads for expression experiments and functional analyses.A further benefit of this work is that the clusters can confirm gene models, or cause correction of models (typically elongation of models, or identification of missing 5 UTRs).
Other talks in this session were given by Elena Rivas (Washington University, St Louis, USA) and Göran Sperber and Jonas Blomberg (Uppsala University, Sweden).
In the second session, Uwe Ohler (MIT, USA) presented work on annotating the core promoter regions of Drosophila genes.They used stringent criteria to cluster 5 cap-trapped ESTs from the Drosophila Gene Collection, and then identified the transcription start sites (TSSs) of around 2000 genes.They then compared their dataset with Drosophila core promoter data from the Eukaryotic Promoter Database (EPD) and the core promoter database (CPD), finding good agreement for a number of criteria.Their search for motifs within these regions showed that a surprisingly low proportion of them contained binding sites for general transcription factors, such as TATA boxes.They also identified shared motifs that had not been described previously, which they then used to retrain their ab initio promoter prediction system (McPromoter), thereby enhancing its ability to recognize promoters (McPromoter prediction server).
Abel Ureta-Vidal (EBI, UK) described the analysis and comparison of multiple genomes in EnsEMBL.In the first step, they use 'exonerate' to compare DNA vs.DNA to locate synteny anchors.In the human vs. mouse comparison, 1 kb mouse fragments were located on the human genome by their best hit.Of these comparisons, ∼19 000 had informative high-scoring segment pairs (HSPs) and from these they selected very highly conserved regions.Just less than one-quarter of matches were in coding regions and around half were in intergenic regions, with the remainder in introns; the figures are roughly the same when looked at from the human or mouse perspective.For their protein level comparison, ∼20 000 human proteins and about the same number of mouse proteins were compared in an all vs. all search to find reciprocal best hits.While the majority of proteins found a good match, and so could be used as seeds (other genes are located with reference to these, based on genomic coordinates), significant numbers of paralogues, cuckoos (genes that have recently moved) and orphan proteins were also detected.The plan is to perform other comparisons of pairs of animal genomes (C.elegans vs. C. briggsae and Drosophila vs. the mosquito) and then to link them, but there are no plans to include plant genomes, as teams at other institutes already have this well in hand.
Other talks in this session were given by Damian Smedley (Imperial College, London, UK) and Heng Dai (Johnson & Johnson Pharmaceutical R&D, USA).
Comparative genomics
Orly Alter (Stanford University, USA) described the application of generalized singular value decomposition to comparing expression profiling datasets from two species.They compared the Spellman et al. (1998) yeast cell cycle and the Whitfield et al. (2002) human cell cycle expression profiling datasets.Their results showed that the two datasets have the same gene patterns, but at different levels of significance.Human and yeast genelets with similar significance indicate common processes and they also found genelets exclusive to human or yeast, such as the yeast pheromone response genes.Plotting the data in circles by time showed the phases of cell division, with the expected patterns seen for known cell cycleregulated genes.Even though the experiments were not synchronized at the same point, it was possible to see the conservation of phases; they were just out of step.
Bin Liu (Baylor College of Medicine, USA) discussed a comparison of the human genome with draft sequences of mouse chromosome 11.
In the first phase of the project, a draft sequence for mouse chromosome 11 was constructed from the available data, which resulted in three large contigs and two gaps (which they believe to only be 2-3 BACs long).This was then compared to the human genome sequence.After clean-up of non-specific matches, they saw matches to almost every human chromosome.The largest block of homology is with human chromosome 17, and there are also significant blocks of homology with human chromosomes 7, 2, 5 and 22.About 7% of matches are in a mouse gene but outside of a human gene, 20% are in a human gene but outside of a mouse gene, 25% are non-genic in both species and the rest are gene-gene matches.He gave some detailed examples of the group's further work on specific regions.They have shown that the Smith-Magenis syndrome region is highly conserved in the mouse, most genes are in the same order, and they see some intergenic matches.In the p53-wnt3 inversion region they see variation of conservation across genes and some matches outside of genes, in particular some matches upstream of one gene, which could be its promoter.They have made mice with three different inversions of the syntenic region, which they plan to cross with ENU mutants.
Aleksandar Milosavljevic (Baylor College of Medicine, USA) presented the preliminary results of comparative clone mapping and assembly of the Rhesus macaque and human genomes.A pooled genomic indexing approach was first tested with rat BACs.An array of BACs is pooled and the pools are sequenced to ∼0.5× coverage.When a row pool and a column pool share the same human best hit, the intersection BAC is assigned to that location.A second set of pools (constructed from the same clones, but using a different design) is used to help eliminate false positives.For the Rhesus macaque they have 27 000 BACs, which gives ∼1.5× clone coverage.They have constructed pools of these and aim to sequence 144 reads/pool.Their comparative assembly approach uses the human assembly as a guide for the selection of BACs and for the assembly of the BAC sequences.A pilot study of this approach showed that it required over 20% less reads than using unassembled macaque sequences to achieve comparable assembly.
Other talks in this session were given by Seraphim Batzoglou (Stanford University, USA), Jo Dicks (John Innes Centre, UK), Irmtraud Meyer (Wellcome Trust Sanger Institute, UK) and Roman Tatusov (NCBI, USA).
Annotation pipelines
Robert Citek (Orion Genomics, USA) presented a system for managing a local copy of GenBank on a PC (or even on a laptop, with a pareddown version of the database).To use it requires MySQL and selected Perl modules, BASH (or another UNIX-like shell) and ∼60 GB of space.
The schema is just one table, with attributes of each entry.The sequence is held separately, in Perl-administered files.The set-up allows you to limit which parts of the database are used, e.g.viruses only.Taxonomy data is typically stored as an adjacency tree of parent to child, which makes it difficult to find descendants or ancestors.His system uses a nested set model, which allows fetching of all descendants of a parent, enabling searches for all vertebrate genes of a kind, for example.It is also possible to look at parsing errors (blank fields).He has found 68 000 cases (although many are molecule type, which is not a required field) and unexpected divisions (such as fungi in plants); the system has identified 115 000 of these.He also found other anomalies such as entries of less than 10 nucleotides in length, of which there seem to be several thousand (including 64 that are only one nucleotide long, some of which are N).
This talk prompted much discussion of how people use GenBank, which resulted in a show of hands that demonstrated that a significant proportion of the delegates prefer to hold a local copy.This was not seen in any way to reflect upon the service provided by NCBI, but rather to reflect on the reliability and speed of networks.
James Galagan (Whitehead Institute, USA) discussed the annotation and analysis of the Neurospora crassa genome using the CALHOUN system.This filamentous fungi is an important model, and has genome of ∼40 Mb spread across 7 chromosomes.They had 39.1 Mb in the current assembly, which had 833 contigs as ∼170 scaffolds.They combine three gene-calling tools, Fgenesh, Fgenesh+ and Genewise, which have different strengths, for the best results.They had 10 082 predicted protein coding genes at that time and were collaborating with over 30 members of the N. crassa and wider research community to analyse them.They have identified RIP which, during reproduction, detects duplicated sequences (including repeats, transposable elements, gene duplications and larger duplications) above a certain size with >80% sequence similarity, and mutates and thereby silences them.This is thought to be very important in the evolution of the N. crassa genome and may be widespread in fungi.Their multigene family analysis has shown that N. crassa has far fewer genes in families than would be expected from its genome size, when compared to other fungi, or to a broad range of species.Within those families that do exist, there are very few highly similar gene pairs, i.e. it has almost no paralogues.This implies that it must have an alternative mechanism (to gene duplication) for gene evolution; he suggested perhaps gene sharing or lateral transfer, (although he pointed out that this is not widely documented in fungi).The work of the Fungal Genome Initiative should provide many more data that might help answer this question; they hope to have eight more fungal genomes in 2003 from the Whitehead Institute's efforts alone.
This session also included talks by Colin Weil (University of California, Berkeley, USA), Jeff Nie (Medical College of Wisconsin, USA), Feng Cao (Third Wave Technologies, USA), Carol Bult (Jackson Laboratory, USA), Michelle Clamp (EBI, UK) and John Quackenbush (TIGR, USA).
Functional genomics
Michael Eisen (Lawrence Berkeley National Laboratory, USA) spoke about the detection of transcription factor binding site motifs.He explained that existing structurally aware motif detectors are all based on the EM algorithm and versions of the finite mixture model.They first developed CMEME, which uses motif family specific constraints on entropy curves to limit the shapes of motifs that it searches for.This approach performed better than MEME but was too slow to apply to a whole genome.A second approach, TF-EM, has positions constrained as highly, medium or weakly conserved and they specify a vector of constraints for each motif.This approach works faster than CMEME, but is still not ready to be applied to a whole genome.They have built contact maps from known protein-DNA complexes and incorporated these into the motif detection, by using TF-EM with a penalty for deviation from a specified profile.Using these methods, they have successfully found known Drosophila motifs, and have had good results with Saccharomyces cerevisiae binding sites.The software will eventually be available on one common platform, which will be open source.
Michael Reich (Whitehead Institute, USA) presented the next generation of array analysis tools from the Whitehead Institute Center for Genome Research (WICGR Cancer Genomics Software site).They have updated their popular data pre-processing and clustering tool, GeneCluster, which attracted 3000 downloads.The extra features of Version 2 include supervised classification, gene selection and permutation test methods.It has algorithms for building and testing supervised models using the weighted voting (WV) and k -nearest neighbours (KNN) algorithms, and has modules for batch SOM clustering and visualization.GeneCruiser is a new gene annotation tool that provides a quick, bidirectional link between Affymetrix probe IDs and gene information in public databases such as GenBank, Uni-Gene and SwissProt.Users can also find out where Affymetrix probes are located in the human genome using the GoldenPath genome browser.The keyword search facility allows users to find out how many genes of a type (say, receptors) are represented by probes on each chip type.Their next generation pipelines will range from languages and object libraries for programmers who want to write their own pipelines to complete packages for users who prefer a 'black box' approach.
Wyeth Wasserman (Karolinska Institute, Sweden) presented work on enhancing regulatory analysis using familial binding profiles of transcription factors (TFs).Finding control signals by using genome-wide expression profiling followed by sensitive pattern discovery techniques to look for shared over-represented sequences in the control regions of co-regulated genes has been successfully applied to yeast, but has not been so successful for metazoan genomes.This new approach is based on the shared familial binding characteristics of TFs.The group developed a new algorithm for pairwise comparison of binding profiles and used this to align the profiles of well-known TF families to build family models for 11 major structural classes.They were able to use these models to predict the structural class of TFs acting via regulatory elements, and to enhance the detection of binding sites in metazoan promoter sequences.The approach is also less affected by the problems associated with analysing longer sequences.
There were also presentations by Peter Lee (McGill University, Canada), Y. Ramanathan (International Center for Public Health, USA), Xiaokang Pan (Cold Spring Harbor Laboratory, USA) and Jennifer Bryan (University of British Columbia, Canada).
Integrative biology
The last session of the conference included a range of speakers involved with integrating informatics and annotation to obtain functional insights from genomic data.The session opened with Kim Pruitt (NCBI, USA), who described projects concerned with integrating sequence data with functional information extracted from PubMed.The Ref-Seq project analyses transcripts through an automatic pipeline, followed by manual curation, to produce a high quality non-redundant resource for the genomic community.LocusLink GeneRIF incorporates functional data from PubMed abstracts into LocusLink.GeneRIFs can also be submitted by external users to aid the public annotation effort.
The new Sanger Institute Gene Resources project introduced by Jennifer Ashurst and Gareth Howell (Sanger Institute, UK) combines manual gene curation of individual chromosomes with experimental validation of the putative gene set, alongside extension of partial genes.Preliminary results from a pilot study of chromosome 20 annotation examined a total of 675 genes, 279 of which required experimental validation.Of these genes, 20% were confirmed with experimental evidence from cDNA pools and a further 20% of predictions had their structures changed when additional sequences were obtained.
Simon Twigger (Medical College of Wisconsin, USA) and Fredrik Ståhl (Göteborg University, Sweden) described the two official rat databases, both of which are involved in distributing rat gene nomenclature.The Rat Genome database (RGD) based in Wisconsin, uses the generic genome browser (http://www.gmod.org) to display quantitative trait loci (QTL), mouse and human comparative analysis, unigene data and microarray data, all mapped onto the genomic sequence.This enables the user to make connections from disease to QTL to gene.RatMap, which originates from Göteborg, concentrates on collecting and curating information about rat genes from literature sources and other databases.Over 1000 orthologues between mouse, rat and human genes have been curated and the database contains over 6000 new rat genes.
Producing structured vocabulary to describe biological annotations is a major goal for all model organism databases.Judith Blake (MGI, Jackson Laboratory, USA) described further extensions to the Gene Ontology (GO) project.Over 18 000 mouse genes have been curated using the primary literature and these can be queried using the standard ontology vocabulary.GO now includes the mouse anatomical dictionary and phenotype classification, which enables standardized annotation of gene expression and QTL analysis, and detailed description of experimental mouse mutants.
Imre Vastrik (EBI, UK) described the Genome Knowledgebase (GK), which utilizes information derived from the GO project.The GK project aims to capture all available information involved in cellular processes.These processes are broken down into two classes: Events and Physical Entities.Events consist of a series factors describing a process, which could include location, catalysts, inputs and outputs, etc.Alternatively, Physical Entities can be related to sequence accession numbers, GO identifiers for biological function, or biochemical activities.This should enable users to navigate easily through data involved in a particular process, e.g.DNA replication, and find all the genes, proteins and compounds involved in every step of that process.
There were also presentations by Junji Hashimoto (University of Tokyo, Japan) and David Block (Genomics Institute of the Novartis Research Foundation, USA).
Keynote speaker
In his keynote speech, Sydney Brenner presented his view for 'the way ahead'.He feels that, while they have contributed much, bioinformatic approaches will not find everything that we want to know and that we cannot get all of the answers from the genome sequence.He proposed that research should now focus on cells, rather than the genome, with the aim of reconstructing pathways and understanding systems.He talked about his vision of making a map of every cell (commenting that histology studies indicate that there are ∼200 cell types in the human body) in terms of noncontingent entities, i.e. not those that are only expressed when cells are stimulated or stressed.He stated that the map would 'need to be accurate and complete; all databases should be like that', and commented that having standards would be important for the project.In answer to those who have responded that it is too complex, he argued that it is very rare for a protein to work alone in a cell, and suggested that the project is tackled as one component at a time (such as a ribosome, which would include ∼100 entities).The components would then become nodes in a giant graph that will be assembled.He has called his idea the 'instantiation program', where one instantiation is the expression of one form of a protein in a cell type, and the expression of a different form of the protein in the same, or a different, cell is another instantiation.The idea is then to take a cell and identify each instantiation of each protein in that cell.His group and others have shown that comparisons with Fugu genomic sequence can be used to find human and mouse promoters (which are what causes instantiation) as regions that have been conserved over time.Some Fugu promoters have been shown to work in the mouse, but he wants to also prove that they are necessary, and sufficient, for regulation.He is confident that all of his suggestions are possible, and expects the program to work by 2020, assuming that a large international project can be assembled. | 4,824.4 | 2003-10-01T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Pyramidal core-shell quantum dot under applied electric and magnetic fields
We have theoretically investigated the electronic states in a core/shell pyramidal quantum dot with GaAs core embedded in AlGaAs matrix. This system has a quite similar recent experimental realization through a cone/shell structure [Phys. Status Solidi-RRL 13, 1800245 (2018)]. The research has been performed within the effective mass approximation taking into account position-dependent effective masses and the presence of external electric and magnetic fields. For the numerical solution of the resulting three-dimensional partial differential equation we have used a finite element method. A detailed study of the conduction band states wave functions and their associated energy levels is presented, with the analysis of the effect of the geometry and the external probes. The calculation of the non-permanent electric polarization via the off-diagonal intraband dipole moment matrix elements allows to consider the related optical response by evaluating the coefficients of light absorption and relative refractive index changes, under different applied magnetic field configurations.
the structure, giving rise to a spatially direct exciton and that through the applied electric field it is possible to polarize the system giving rise to a spatially indirect exciton with changes in lifetime ranging from nanoseconds up to milliseconds. The analysis of the probability distributions shows the evolution between QD and quantum ring induced by the electric field. To date there are no further known developments of this type of cone/shell novel structure nor about similar pyramidal/shell structures. Taking into account the high degree of development that pyramidal QDs have had, we consider that the implementation in the laboratory of a pyramid/shell QD to be viable without much effort. Therefore, using the work from Heyn et al. 32 as a departing point, we have assumed the theoretical investigation of the pyramid/shell QDs as the subject of this research. We will go further and include the effects of a static magnetic field parallel to the vertically applied electric field. We shall focus our attention on the electronic structure, the wave function symmetries, and the intra-band optical absorption. The possible electric-field-induced appearance of indirect excitonic complexes, related to effective spatial separation of electron and hole states is briefly discussed as well. The article has the following organization: The theoretical framework is presented in section II. The section III contains the results and discussion. Finally, in the section IV we outline the conclusions. theoretical framework Figure 1 shows the 3D projection of the structure while a schematic view of the pyramidal core-shell quantum dot (PCSQD) is shown in Fig. 2, with θ labeling the vertex angle, and h i the height of each pyramid. The center of gravity of the PCSQD is assumed to be at z = 0. So, our problem is to study the energy states and their corresponding wave functions for an electron confined in a pyramidal structure like the one shown in Fig. 1 and subjected to the effects of stationary electric or magnetic fields, both applied in the z-axis, parallel to the symmetric axis of the pyramid. Within the framework of the effective mass, the Hamiltonian for this problem, in Cartesian coordinates, takes the form: where e is the electron charge, ⁎ m w b , is the effective mass (b means the barrier region or innermost and outermost pyramid and w means the well region or the pyramid in the center), and V(x, y, z) is the confinement potential for the PCSQD which is V 0 in the innermost and outermost pyramid, zero for the pyramid in the center, and ∞ outside the PCSQD.
The particular gauge chosen to describe the magnetic field in the system implies the conditions www.nature.com/scientificreports www.nature.com/scientificreports/ for the magnetic vector potential, where → B properly represents the field. The expanded form of the Hamiltonian (Eq. (1)) gives Using the expression of the magnetic potential Eq. (2) the final form of the Hamiltonian Eq. (3) is: The energies and wavefunctions of the bound states can be obtained by solving the Schrödinger equation: i ii The eigenvalues and eigenstates (Eq. (5)) are calculated with the software COMSOL-Multiphysics 33 , which uses a FEM to solve the partial differential equation numerically. A complete description of the COMSOL-Multiphysics licensed software that includes the foundation of the finite element method, the construction of meshes, the discretization of the differential equations, the methods to optimize the processes, the construction of geometries and the convergence criteria can be found in 34,35 . Since Ψ i (x, y, z) is finite, the Dirichlet boundary condition implies that any of its values outside the PCSQD are equal to zero, i.e. the wave functions are zero at the interfaces between the outermost pyramid and the infinite potential region (see Fig. 2). For layered structures such as the one in the current study, the Schrödinger equation interface accounts for the discontinuity in the effective mass by implementing the BenDaniel-Duke boundary conditions.
One of the optical coefficients to be evaluated in this work are the light absorption one, which derives from the imaginary part of the dielectric susceptibility. So, we have: and owing to take into account possible damping effects associated with intraband transitions induced by photon absorption, the Dirac delta term is usually substituted by a Lorentzian term, thanks to the well known relation f i fi f i in which Γ fi (=10 meV in this work) accounts for the corresponding damping rates. In the former expressions, ω represents the incident photon frequency, c is the speed of light in the vacuum, n r is the static value of the refractive index, and 0 is the vacuum permittivity. The quantities E f and E i are, respectively, the energy of the final state and the energy of the initial state of the light-induced intraband transition. Since we are assuming to work in the very low temperature case, the electron density per unit volume is taken to be 2/V, where V represents the PCSQD volume and the 2 indicates the possible spin contributions. This has to do with the situation in which a single electron would be excited towards the conduction band at low T. In this work the electron density was taken as 3 \10 m 22 3 is the unit vector representing the polarization of the -homogeneously intense-incident light (for instance, if the light is circularly polarized in the xy-plane, then ξ → = → ± → e ie / 2 / 2 1 2 , where → e 1 and → e 2 are the unit vectors along the x-and y-direction, respectively).
The general expression for the electric dipole moment matrix element, µ ξ f i , , is the following: in which e is the electron charge and → r is the vector position. In an analogous way, the expression for the coefficient of relative change of the refractive index comes from the real part of the dielectric susceptibility. Its final form reads: 2 fi In Eqs. (6) and (9), the summation is carried out over all possible allowed inter-state transitions.
Results and Discussion
As stated in the previous section, the present study makes use of a FEM to solve the eigenvalues differential equations. In particular, a self-adapting mesh has been used that includes tetrahedra in the volume region, triangles on the surfaces, edge elements at the intersection between two planes, and vertex elements at the intersection between three planes. For a pyramid with h 1 = 5 nm, h 2 = 20 nm, h 3 = 30 nm, and θ = π/2, the used parameters are: 86365 tetrahedra, 11242 triangles, 564 edge elements, and 15 vertex elements, which guarantees a convergence of 0.1 meV for the fifteen lowest states that have been calculated. In this paper we report results for the thirteen lowest energy states. In Fig. 3 the energies of the thirteen lowest confined electron states in a GaAs-Ga 0.7 Al 0.3 As PCSQD are depicted as functions of the innermost pyramid height. Calculations correspond to the situation in which the electric and magnetic field intensities are equal to zero, θ = π/2, h 2 = 20 nm, and h 3 = 30 nm. It is observed that all energy levels have a growing tendency as long as h 1 augments. This is due to the progressive decrease in the volume of the GaAs layer where the electron is confined.
For h 1 = 5 nm the levels (2, 3), (7,8), and (10,11) appear to be degenerate, while for h 1 > 7.4 nm, the degenerate levels are (2, 3), (7,8), and (9,10). This degeneracy comes from the square symmetry of the base of the pyramid, with respect to an axis that passes through its center of gravity and the upper vertex (see Fig. 1). From Fig. 4 -where the projections of the wave functions of the first thirteen confined states onto the xy-plane (with z = 0) and onto the xz-plane (with y = 0) are shown-, it is possible to observe that, for example, the degenerate states Ψ 2 and Ψ 3 have p-like symmetry. The states Ψ 1 and Ψ 5 exhibit s-like symmetry and the state Ψ 4 displays www.nature.com/scientificreports www.nature.com/scientificreports/ d-like symmetry. The states Ψ 2 and Ψ 3 appear rotated in the xy-plane due to an indeterminacy of the phase, which is typical to the used numerical method. By introducing a very small asymmetry in the dimensions of the base, clearly the Ψ 2 and Ψ 3 states would be oriented along the x and y perpendicular axes. In Fig. 4 the color scale is defined with green corresponding to zero, red to a maximum positive value, and the negative maximum for the blue. For Ψ 1 in the xy-plane, with h 1 = 5 nm, one notices that the wave function has finite values in the center of the structure (the wave function penetrates the center region), indicating the possibility to find the electron around the gravity center of the PCSQD, whereas for h 1 = 15 nm, the electron will completely confine inside the GaAs layer. When comparing the behavior of Ψ 1 in the xz-plane, it can be seen that for h 1 = 5 nm the electron can be found both in the lateral regions and in the base of the GaAs pyramid, while for h 1 = 15 nm, the probability density concentrates mainly towards the side walls of the central pyramid. Finally, note that the value zero of Ψ 4 along the xz-plane, for both h 1 = 5 nm and h 1 = 15 nm, is consistent with the null value of the wave functions in the xy-plane along the y = 0 line. It is interesting to note from Fig. 4 that the Ψ 7 and Ψ 8 states have exactly the same symmetries as the Ψ 2 and Ψ 3 states and that, like the first two excited states, in the entire range of calculated h 1 -values, Ψ 7 and Ψ 8 are degenerate states. Here it should be noted that Ψ 7 and Ψ 8 double the number of Ψ 2 and Ψ 3 antinodes, which is in accordance with their higher energy values. At h 1 = 7.35 nm an accidental degeneration appears with three states that have the same energy. From Fig. 4(a) it is observed that for h 1 = 5 nm, the calculation of Ψ + Ψ 2 leads to a probability density very similar to that obtained with Ψ 9 2 , while for h 1 = 15 nm, in Fig. 4(b), the calculation of Ψ + Ψ 9 2 10 2 perfectly approximates Ψ 11 2 . This is in agreement with the change of symmetries that is observed in h 1 = 7.35 nm.
Furthermore, the inset in Fig. 3 shows the evolution of the three lowest confined electron states in a PCSQD as h 1 approaches h 2 = 20 nm. That is, as the inner GaAs pyramid thickness approaches zero. Note that for h 1 = 19 nm, the ground state energy reaches the value of the potential barrier (262 meV) and from there the wave functions overflow to the Ga 0.7 Al 0.3 As region. This can be visualized in the fourth column of Fig. 5 where the, z = 0, xy-projections of the wave function for the ground state are presented as the thickness of the GaAs layer tends to zero. In that case, the electron is, actually, confined in a pyramid of Ga 0.7 Al 0.3 As of height h 3 with infinite external potential barriers. For 18.8 nm < h 1 < 19 nm, only the ground state is confined within the GaAs region. Going from h 1 = 18 nm towards h 1 = 19.6 nm it is observed how the system evolves from a 2D-confinement in the GaAs region to a 3D-one in the Ga 0.7 Al 0.3 As structure. Figure 6 shows the nonzero transition matrix elements (dipole moment divided by the electron charge, ) between the ground state (Ψ i , i = 1) and the first twelve excited states (Ψ j , j = 2, ..., 13) in a GaAs-Ga 0.7 Al 0.3 As PCSQD as functions of the structure's innermost height (h 1 ), for zero magnetic and electric fields, keeping fixed the other structure dimensions. In Fig. 6(a) the results are for circular polarization of the incoming light in the xy-plane while in Fig. 6(b) they correspond to linear polarization along the z-axis. From Fig. 6(a) we realize that nonzero off-diagonal matrix elements are present only for j = 2 and j = 3. The reason why, , comes from the fact that when calculating M x 1,4 , Ψ 1 is an even function with respect to xz-plane with y = 0 (Ψ 1 (x, −y, z) = Ψ 1 (x, y, z)), while with respect to the same plane, Ψ 4 is an odd function (Ψ 4 ; see first row of Fig. 4(a,b)]. When calculating M y 1,4 , Ψ 1 is an even function with respect to yz-plane with x = 0 (Ψ 1 (−x, y, z) = Ψ 1 (x, y, z)), whilst with respect to the same plane, Ψ 4 is an odd function (Ψ 4 ; see first row of Fig. 4(a,b)]. Because the wave functions Ψ 1 and Ψ 5 are even with respect to the xz-plane with y = 0 and the yz-plane with x = 0, it is obtained that 5 1 ,5 and, consequently, = ± M 0 x iy 1,5 [see first row of Fig. 4(a,b)]. The symmetry arguments used to justify the null values of the matrix elements for the transitions Ψ 1 → Ψ 4 and Ψ 1 → Ψ 5 are the same arguments that can be applied to justify the non-zero values of the transitions Ψ 1 → Ψ 2 and Ψ 1 → Ψ 3 . To discuss the reasons why the z-polarization induces or suppresses certain transitions, the symmetry properties of the wave functions are useful as well, taking into account, basically, the projections on the xy-plane; given that, because of the height of the pyramids, all the excited states under consideration have only one or two antinodes along the z-direction. In the case of a single node for the excited state, it appears displaced along the z-direction with respect to the ground state and when two nodes appear, clearly the corresponding wave function has opposite symmetry, along the z-direction, to that of the ground state. The increasing character of the matrix elements for circular polarization, shown in Fig. 6(a), results from the fact that, as the height of the innermost pyramid (h 1 ) increases, the region where there is the highest probability of finding the electron moves away from the origin and thereby increases the overlap between the wave functions (see comparatively Ψ 1 and Ψ 2 in the first Figure 5. Pictorial view of the wave function projections (onto z = 0 and y = 0 planes) for the ground state in a GaAs-Ga 0.7 Al 0.3 As pyramidal core-shell quantum dot, when h 1 → h 2 . The setup of the structure is as in Fig. 4. The green color corresponds to a zero value whereas the red one is associated to the positive maxima. (2020) 10:8961 | https://doi.org/10.1038/s41598-020-65442-x www.nature.com/scientificreports www.nature.com/scientificreports/ rows of Fig. 4(a,b)). A similar behavior occurs for the matrix element M z 1,5 in Fig. 6(b) (see comparatively Ψ 1 and Ψ 5 in the second rows of Fig. 4(a,b)).
In Fig. 7 we are presenting the energies of the first thirteen bounded states for an electron confined in a GaAs-Ga 0.7 Al 0.3 As PCSQD as a function of the applied electric field strength. The results are for zero magnetic field and fixed dimensions of the structure. From the figure it is possible to observe some features that can be highlighted, such as: (i) throughout the whole range of the electric field intensity, the Ψ 2 and Ψ 3 states are doubly degenerate, the same as Ψ 7 and Ψ 8 states; (ii) for electric field strengths smaller than 65 kV/cm, the Ψ 10 and Ψ 11 states are degenerate and, for that specific value of the electric field they exchange symmetry with the Ψ 9 state, giving rise to the doubly degenerated Ψ 9 and Ψ 10 states, for field values greater than 65 kV/cm; (iii) for F = −5.82 kV/cm an accidental degeneracy appears between Ψ 4 and Ψ 5 states, which is transferred to states Ψ 5 and Ψ 6 at F = 20.5 kV/ cm; (iv) for F = 65.16 kV/cm a threefold degeneracy appears between Ψ 9 , Ψ 10 , and Ψ 11 states; and, finally, (v) for F > 50 kV/cm, the behavior of the lowest eight states is linear and decreasing, thus showing a saturation effect with the electric field. It is important to note that the negatively oriented electric fields push the electronic states towards the top vertex of the pyramid, while the positive fields push them toward the bottom plane of the pyramid www.nature.com/scientificreports www.nature.com/scientificreports/ (see Fig. 1). Bearing in mind that at the top vertex of the pyramid the electronic states interact with four planes while at the bottom of the pyramid the interaction is with only one plane, this explains the reason why the energy curves are more sensitive to the electric field in the F > 0 regime. Examining, for example, the ground state, it is clearly noted that, for a finite value of the field, the energy curve is not symmetric with respect to F = 0. This is due, as already said, to the fact that the number of planes with which the particle interacts goes from four to one when the field changes from negative to positive values. The decreasing nature of this state with |F| is explained by the displacement towards lower energies of the bottom of the potential well related with the superimposition of the linear potential from the field with the confining potential of the structure. Figure 8 shows the projections, on the z = 0 and y = 0 planes, of the first thirteen wave functions for an electron confined in a PCSQD with fixed values of the geometry, zero magnetic field, and considering two values of the applied electric field. In 8(a) the electric field pushes the carriers towards the apical region of the pyramid whereas in 8(b) these are pushed towards the pyramid base (see Figs. 1 and 2). Some of the main characteristics observed from the figure are the following: (i) For F = −100 kV/cm and F = +100 kV/cm, both the ground state and the first two excited states preserve their symmetries; something that is consistent with the absence of anticrossings between these states in Fig. 7, (ii) the states Ψ 2 and Ψ 3 are degenerate with p-like symmetry, (iii) when going from F = −100 kV/cm to F = +100 kV/cm the Ψ 5 and Ψ 6 states go to occupy the positions of the Ψ 4 and Ψ 5 , respectively, and the Ψ 4 state occupies the position of the Ψ 6 one, consistently with the Fig. 7. It can be noticed that, at F = 20 kV/cm, Ψ 4 exchanges symmetry with Ψ 6 , (iv) For F = −100 kV/cm and F = +100 kV/ cm, the numerical method used introduces a phase of ±π/4 for Ψ 2 and Ψ 3 states in the z = 0 plane. This phase is also present in the Ψ 10 and Ψ 11 states at F = −100 kV/cm and Ψ 12 and Ψ 13 states at F = +100 kV/cm, (v) when comparing Ψ 1 , Ψ 2 , and Ψ 3 , in Fig. 8(a,b), it is clearly seen how, in the first case, the states are displaced towards the pyramid apex while in the second case they are directed towards the pyramid basal plane, (vi) the presence of only one antinode in the z-direction of the Ψ 1 , Ψ 2 , and Ψ 3 states -given the odd symmetry of Ψ 2 and Ψ 3 with respect to a ±π/4 rotated plane =, ensures that over the entire range of applied electrical fields, there is a non-zero value of the matrix elements for xy-circularly polarized incident radiation, as will be seen below, and (vii) in general, for all excited states, the energy is higher at F = −100 kV/cm with respect to F = +100 kV/cm, due to the greater interaction with the lateral planes at the pyramid apex. Note that the electric field implies a remarkable change of the wave function characteristics. It can be affirmed that for negative electric field strengths, the electronic probability is distributed in a 3D-region whereas for sufficiently high positive electric fields, the spatial distribution of the states primarily locates nearby the pyramid basal plane. With the electric field, the system evolves from a three-dimensional quantum dot (for negative fields) to a two-dimensional quantum dot (for positive fields).
When going from negative to positive electric field values, the superposition between Ψ 1 and Ψ 2 (Ψ 1 and Ψ 3 ) states increases along with the increase in the spatial extent of the states. This justifies the ever increasing character of transition matrix elements ± M x iy 1,2 and ± M x iy 1,3 in Fig. 9(a). For ≅ F 100 kV/cm, a saturation effect of these matrix elements is observed due to the lateral potential barriers influence on the WFs. For F = 66 kV/cm, it is observed that the transitions Ψ 1 → Ψ 10 , Ψ 11 are transformed into Ψ 1 → Ψ 9 , Ψ 10 , which is in agreement with the crossing observed in Fig. 7 for such electric field value, at E = 96 meV. For circular polarization (see Fig. 9(a)) and F = −14 kV/cm only the Ψ 1 → Ψ 2 , Ψ 3 transitions are present, the other transitions are suppressed, this despite the fact that the symmetries in each z-plane are preserved with the electric field, but they change as the plane moves in that direction giving rise to contributions that cancel each other out. In Fig. 9(b), for z-polarized incident light, it is observed, for example, that the Ψ 1 → Ψ 4 transition is transformed into the Ψ 1 → Ψ 5 transition, and this finally becomes the Ψ 1 → Ψ 6 transition. This behavior is in agreement with the observed crossings between the Ψ 4 , Ψ 5 , and Ψ 6 states in Fig. 7 for F = −6.4 kV/cm and F = 19.5 kV/cm. This situation is also evident for other permitted transitions either with circular or linear incident polarized light, as shown in the two panels of Fig. 9. The increase of M z 1,4 in the negative range of applied electrical fields is due to the fact that initially, for F = −100 kV/cm, the www.nature.com/scientificreports www.nature.com/scientificreports/ maximum probability of both states is located at the apex of the pyramid; as F grows towards zero, the Ψ 1 state extends over the entire central pyramid while Ψ 4 remains almost static at the apex of the pyramid (note that the Ψ 4 state has two antinodes in the z-direction, which guarantees the non-null value of the matrix element). The curve reaches a maximum at F = −23 kV/cm where precisely the ground state has its maximum spatial distribution. For F > 0, where the character of the transition is Ψ 1 → Ψ 5 and then Ψ 1 → Ψ 6 , the decreasing behavior of M z 1,5 and M z 1,6 is due to the fact that the ground state is compressed towards the base of the pyramid and the excited state in question undergoes a progressive displacement towards the base of the pyramid as the field grows. Similar analysis, based on the distributions of the wave functions and their symmetries, explain the behavior of the other matrix elements.
In Figs. 10-12, we present the study of the applied magnetic field effects on the electronic states in a GaAs-Ga 0.7 Al 0.3 As PCSQD. The magnetic field is applied in the z-direction, which coincides with the symmetry axis of the heterostructure. This guarantees that the symmetry of the states in the different planes where z = const. is preserved. In Fig. 10, the energies for the first thirteen confined states are reported, in Fig. 11 the figures correspond www.nature.com/scientificreports www.nature.com/scientificreports/ to results proportional to the dipole matrix elements considering circular and linear polarization for the incident radiation. Finally, Fig. 12 contains the projections of the wave functions on the z = 0 and y = 0 planes.
In Fig. 10 it is observed that the first relevant effect of the magnetic field is the breakdown of degeneracy for all reported states. Besides, much of the corresponding off-diagonal dipole moment matrix elements appear to be different to zero, as functions of B, as can be readily noticed from Fig. 11, and will be discussed below.
To interpret this situation, we resort to the wave functions and probability densities depicted in Fig. 12. Note that in Fig. 12(a), at zero magnetic fields, the states Ψ 2 and Ψ 3 (which correspond to real wave functions) have the same configuration of nodes and antinodes and are characterized by being rotated with respect to each other at an angle of 90°, taking the symmetry axis as the axis of rotation. This, as previously analyzed, explains the degeneration of the states. The same situation is valid for the Ψ 7 and Ψ 8 states of Fig. 12(a).
When the magnetic field is turned on (B = 30 T), one may observe that the wave functions become complex, with real and imaginary components, as represented in Fig. 12(b,c). Analyzing the Ψ 1 and Ψ 4 states (which at zero magnetic field correspond to the Ψ 2 and Ψ 3 states), it is observed that both the real and imaginary part of both states are displaced towards the base of the pyramid. It is also appreciated that while for Ψ 1 the real part of the wave function is always positive, in the case of Ψ 4 there are three regions of maximum positive and three regions of maximum negative contributions. In the case of the imaginary parts, the projections in the z = 0 plane show a positive maximum and a negative maximum for the Ψ 1 state while for Ψ 4 there are three positive and three negative maxima. The combination of the real and imaginary parts, which corresponds to the probability density (as shown in Fig. 12(d)), leads to the fact that the Ψ 1 state (of lower energy) is located in the region near the axis of the pyramid with a wide volumetric distribution of the probability density. In the same manner, for the Ψ 4 state (of larger energy), the electron tends to concentrate in a thin layer near the base of the pyramid, with well-defined maxima near the vertices of the square cross section. A similar situation is exhibited by the Ψ 7 and Ψ 8 states which, when the magnetic field is turned on until B = 30 T, evolve to become the Ψ 10 and Ψ 6 states, respectively.
Then, a second point to highlight in Fig. 10 is the presence of ground state oscillations as the magnetic field increases. Note from Fig. 11 that, for B = 30 T, the ground state (Ψ 1 ) has a real part whose symmetry coincides with that of the ground state (Ψ 1 ) at B = 0 and that the imaginary part of Ψ 1 at B = 30 T has the same p-like symmetry of Ψ 2 at B = 0. This explains the change in symmetry presented by the ground state at B = 21.2 T. As a third aspect, note also the presence of anticrossings between states which are induced by the magnetic field effects. Near B = 15 T, an anticrossing appears between the states that have been labeled as Ψ 3 and Ψ 7 at zero magnetic field. At B = 21 T there is another anticrossing, this time between Ψ 5 and Ψ 9 . Finally, Fig. 10 shows multiple accidental degeneracies. For example, the ground state presents accidental degeneracy at B = 21 T. What is most relevant to our investigation is that all these crossings or anticrossings between states are reflected in changes in the symmetry of the wave functions and, consequently, in changes in the selection rules for optical transitions between states.
This can be clearly seen in Fig. 11, where the squared absolute expected values of the dipole matrix elements with ξ = x ± iy (for circular polarization) and ξ = z (for linear polarization) are presented for transitions between the ground state and the first twelve excited states and between the first excited state and the next eleven excited states. Unlike the cases discussed in Figs. 6 and 9, here it has been necessary to include transitions from the first excited state given the crossing between Ψ 1 and Ψ 2 at B = 21 T. The complex character, with real and imaginary parts, of the wave functions, explains the absolutely different response presented by the system to light with right and left circular polarization, as seen from Fig. 11(a,b). Note that each line in the three panels of this figure is composed of transitions between multiple different states. Each open symbol indicates the change between states involved in transitions. Each open symbol appears for magnetic field and energy values that correspond to the www.nature.com/scientificreports www.nature.com/scientificreports/ crossings between states in Fig. 10. Looking at Fig. 11(c) for example, one may observe that, under field conditions, essentially four well-defined transitions appear while at B = 30 T only two transitions will be noticeable. This evidences a remarkable change in selection rules as the magnetic field grows. In the case of Fig. 11(a), at B = 0 there are four well-defined transitions that evolve into four others, but between different energy states.
The optical coefficients. In Fig. 13, the light absorption (a) and relative refraction index change (b) coefficients appear plotted as functions of the z-polarized incident photon energy and the applied magnetic field. The calculations considered the situation with zero applied electric field and kept constant the geometry and dimensions of the structure. Note that at B = 0, the peak of greater amplitude, both in the case of the absorption coefficient and the relative refractive index changes, is given for the 1 → 5 transition. This is consistent with the content of Fig. 11(c) where the most significant value of M z i j , at B = 0 is, precisely, given for the 1 → 5 transition. Besides, For B = 30 T, in Fig. 13(a) two peaks with approximately the same intensity are observed. This is despite the fact that in Fig. 11(c) the 1 → 9 transition has a matrix element smaller than that of the 2 → 7 transition, whose energy is lower than the one corresponding to the 1 → 9 transition.
Taking into account that the magnitude of α(ω) is proportional to the product | | E M fi z fi 2 , at B = 30 T the 2 → 7 and 1 → 9 transitions are proportional to 398.7 nm 2 meV and 401 nm 2 meV, respectively. Additionally, it can be seen from Fig. 13(a) that the 1 → 5 and 1 → 9 transitions, at B = 0 evolve to the 2 → 5 and 2 → 7 transitions at B = 30 T, which is consistent with the anti-crossing taking place near B = 20 T with E 65 = 80 meV between the Ψ 5 and Ψ 6 states.
On the other hand, when observing Fig. 13(b), one may see that the relative refractive index changes peak amplitudes exactly follow the behavior presented by those corresponding to the optical absorption coefficient in Fig. 13(a). This comes from the fact that for a particular i → j transition, the coefficient of relative refraction index change is an odd function with respect to the transition energy, E fi = E f − E i , the same at which the absorption coefficient shows the resonant peak structure. Also, it is clear that the Δn/n r coefficient has a maximum and a minimum localized at E p = E fi − ℏΓ and E p = E fi + ℏΓ, respectively. Additionally, taking into account that the magnitude of the two resonant peaks of Δn/n r are proportional to M z i j , 2 , the reason why the peaks of the 2 → 7 transition are significantly greater than the peaks of 1 → 9 at B = 30 T is explained.
conclusions
We have performed the investigation of electron states in core-shell pyramidal quantum dots considering the effect of externally applied electric and magnetic fields to the structure. The results of the calculation included modifications of the system size and geometry as well. Accordingly, we present a detailed discussion about the properties of energies and wave functions under different configurations, making emphasis in those related with the symmetry of states and how they are modified by the application of the external probes, showing both crossings and anticrossings in their evolution as functions of the field strengths. Regarding this, the study finds that a number of inter-state transitions can become forbidden, and the presence of an external probe, with its associated degeneracy breaking, activates some of them.
The information about the electronic structure allows to evaluate the coefficients of light absorption and relative refractive index change associated to allowed transitions between the lowest confined states. We comment | 8,164.8 | 2020-06-02T00:00:00.000 | [
"Physics"
] |
Endothelin@25 – new agonists, antagonists, inhibitors and emerging research frontiers: IUPHAR Review 12
Since the discovery of endothelin (ET)-1 in 1988, the main components of the signalling pathway have become established, comprising three structurally similar endogenous 21-amino acid peptides, ET-1, ET-2 and ET-3, that activate two GPCRs, ETA and ETB. Our aim in this review is to highlight the recent progress in ET research. The ET-like domain peptide, corresponding to prepro-ET-193–166, has been proposed to be co-synthesized and released with ET-1, to modulate the actions of the peptide. ET-1 remains the most potent vasoconstrictor in the human cardiovascular system with a particularly long-lasting action. To date, the major therapeutic strategy to block the unwanted actions of ET in disease, principally in pulmonary arterial hypertension, has been to use antagonists that are selective for the ETA receptor (ambrisentan) or that block both receptor subtypes (bosentan). Macitentan represents the next generation of antagonists, being more potent than bosentan, with longer receptor occupancy and it is converted to an active metabolite; properties contributing to greater pharmacodynamic and pharmacokinetic efficacy. A second strategy is now being more widely tested in clinical trials and uses combined inhibitors of ET-converting enzyme and neutral endopeptidase such as SLV306 (daglutril). A third strategy based on activating the ETB receptor, has led to the renaissance of the modified peptide agonist IRL1620 as a clinical candidate in delivering anti-tumour drugs and as a pharmacological tool to investigate experimental pathophysiological conditions. Finally, we discuss biased signalling, epigenetic regulation and targeting with monoclonal antibodies as prospective new areas for ET research.
Accepted 25 July 2014
This is the 12th in a series of reviews written by committees of experts of the Nomenclature Committee of the International Union of Basic and Clinical Pharmacology (NC-IUPHAR). A listing of all articles in the series and the Nomenclature Reports from IUPHAR published in Pharmacological Reviews can be found at http://www .GuideToPharmacology.org. This website, created in a collaboration between the British Pharmacological Society (BPS) and the International Union of Basic and Clinical Pharmacology (IUPHAR), is intended to become a 'one-stop shop' source of quantitative information on drug targets and the prescription medicines and experimental drugs that act on them. We hope that the Guide to Pharmacology will be useful for researchers and students in pharmacology and drug discovery and provide the general public with accurate information on the basic science underlying drug action.
Since the discovery of endothelin (ET)-1 in 1988, the main components of the signalling pathway have become established, comprising three structurally similar endogenous 21-amino acid peptides, ET-1, ET-2 and ET-3, that activate two GPCRs, ETA and ETB. Our aim in this review is to highlight the recent progress in ET research. The ET-like domain peptide, corresponding to prepro- , has been proposed to be co-synthesized and released with ET-1, to modulate the actions of the peptide. ET-1 remains the most potent vasoconstrictor in the human cardiovascular system with a particularly long-lasting action. To date, the major therapeutic strategy to block the unwanted actions of ET in disease, principally in pulmonary arterial hypertension, has been to use antagonists that are selective for the ETA receptor (ambrisentan) or that block both receptor subtypes (bosentan). Macitentan represents the next generation of antagonists, being more potent than bosentan, with longer receptor occupancy and it is converted to an active metabolite; properties contributing to greater pharmacodynamic and pharmacokinetic efficacy. A second strategy is now being more widely tested in clinical trials and uses combined inhibitors of ET-converting enzyme and neutral endopeptidase such as SLV306 (daglutril). A third strategy based on activating the ETB receptor, has led to the renaissance of the modified peptide agonist IRL1620 as a clinical candidate in delivering anti-tumour drugs and as a pharmacological tool to investigate experimental pathophysiological conditions. Finally, we discuss biased signalling, epigenetic regulation and targeting with monoclonal antibodies as prospective new areas for ET research.
Introduction
Since the discovery of endothelin (ET)-1 in 1988 (Yanagisawa et al., 1988;Inoue et al., 1989) the components of the ET signalling pathway have become established, comprising three structurally similar endogenous 21-amino acid peptides, ET-1, ET-2 and ET-3, that activate two GPCRs, ETA (Arai et al., 1990) and ETB (Sakurai et al., 1990). In humans, ET-2 differs from ET-1 by only two amino acids, whereas ET-3 differs by six amino acids representing more substantial changes. ET-3 is the only isoform that can distinguish between the two receptor subtypes, having a similar potency at the ETA receptor as ET-1 and ET-2, but much lower affinity than these isoforms for the ETB receptor ( Figure 1). Structur-ally, ETs are unusual among the mammalian peptides in possessing two disulphide bridges. This feature is shared by the sarafotoxins, a family of peptides that were isolated from snake venom in the same year as the discovery of ET-1 (Takasaki et al., 1988), and that provided the first selective agonist at the ETB receptor, sarafotoxin S6C (William et al., 1991).
A number of features of the ET signalling pathway are unusual compared with other peptidergic systems and these continue to intrigue investigators, with over a thousand ET-related papers still published each year. ET-1 is the most abundant isoform in the human cardiovascular system, predominantly released from endothelial cells to cause potent and unusually long-lasting vasoconstriction that may persist for many hours. ET-1 is a key mediator in regulating vascular function in the majority of organs systems balanced by opposing vasodilators, particularly NO, prostacyclin and endothelium-derived hyperpolarizing factor. Endothelial cell dysfunction occurs in pathophysiological conditions such as pulmonary arterial hypertension (PAH) and is associated with loss of these dilators and increased synthesis of ET. The consequence of this is vasoconstriction, proliferation of many different cell types particularly vascular smooth muscle, fibrosis and inflammation; processes associated with vascular remodelling. In disease, the deleterious actions of ET in the vasculature are mainly mediated by the ETA receptor, whereas activation of ETB receptors results in many of the beneficial effects of the peptide that frequently act as a regulatory counterbalance (Davenport and Maguire, 2006). The formation of the disulphide bridge in the ET peptides blocks the N-terminal amino acid conferring resistance to enzymic degradation in plasma. Internalization by ETB scavenging receptors is therefore particularly important for termination of the ET signal in health and disease.
The major therapeutic strategy ( Figure 1) to block the unwanted actions of ET in disease has been to use antagonists of ETA receptors or both receptor subtypes (Palmer, 2009) with the first clinical application being bosentan in PAH (Rubin et al., 2002). More recently, a second strategy has started to be more widely tested in clinical trials using inhibitors of ET-converting enzymes 1 (ECE-1; Xu et al., 1994) and 2 (ECE-2; Emoto and Yanagisawa, 1995), the major biosynthetic pathway of ET ( Figure 1) at least in the human vasculature (Russell and Davenport, 1999a,b). A third emerging strategy based on biosimilar agonists at the ETB receptor (molecules similar, but not identical to the endogenous ligand) has led to the renaissance of IRL1620 as a clinical candidate in delivering anti-tumour drugs and in other pathophysiological conditions such as cerebral ischaemia.
Evidence for a new ET peptide: the ET-like domain peptide (ELDP)
The ELDP has been recently identified as a peptide corresponding to prepro- ET-193-166 (Yuzgulen et al., 2013) immediately adjacent to the gene sequence encoding big ET-1. The 74-amino acid peptide has been detected by HPLC and specific double recognition site immunoassays in conditioned media from two cell lines, endothelial (EA.hy 926) and epithelial (A549), as well as from primary cell cultures of human aortic endothelial cells that are known to secrete ET-1. In the aortic endothelial cells, the peptide was co-synthesized and co-released with ET-1. Plasma levels in untreated patients were 6.5 pmol·L −1 , which compares with typical basal levels of immunoreactive ET-1 of 5 pmol·L −1 (Davenport et al., 1990). Levels of ELDP were significantly elevated in patients with heart failure suggesting a potential use as a bio-marker. While no effect was observed on BP in the anaesthetized rat, intriguingly ELDP significantly increased the duration of the pressor response of ET-1 (0.3 nmol·kg −1 , likely to be a submaximal dose). Pretreatment of rat mesenteric arteries with 10 nM ELDP also potentiated a submaximal response of ET-1 by fivefold (Yuzgulen et al., 2013). It is not unexpected that a
Figure 1
Scheme of the biosynthesis of ET peptides and their interaction with receptors. Based on information from the literature including Barton and Yanagisawa (2008), Turner and Tanzawa (1997) and Lee et al. (1999).
second peptide sharing a cleavage site with ET-1 would also be co-released, but it is intriguing that the peptide was able to potentiate ET-1 responses in vitro and in vivo. It is not yet reported, using saturation or competition-binding experiments, whether ELDP binds directly to ET receptors, binds to an allosteric site or whether the peptide modulates ET responses by other mechanisms. Intriguingly, ELDP encompasses the sequence of the putative 'endothelin-like peptide' corresponding to prepro-ET-1109-123 proposed in the original Nature paper by Yanagisawa et al. (1988). Eight out of 15 residues in the corresponding sequence in ET-1 are identical and the four Cys residues are perfectly conserved and flanked by dibasic pairs that are recognized by endopeptidase processing enzymes, to yield a cleaved peptide. However, a synthetic peptide corresponding to this sequence was devoid of agonist or antagonist activity against ET-1, in vascular preparations (Cade et al., 1990).
Global knockout of ET-2 gene reveals distinct phenotype compared with ET-1/ET A and ET-3/ET B gene deletions
ET-1-deficient homozygous mice die at birth of respiratory failure, which is secondary to severe craniofacial and cardiovascular abnormalities. ETA receptor and ECE-1 knockout mice have similar morphological abnormalities (Kurihara et al., 1994;Clouthier et al., 1998;Yanagisawa et al., 1998). The phenotype is similar to a spectrum of human conditions, CATCH 22 (cardiac anomaly, abnormal face, thymic hypoplasia, cleft palate, hypocalcaemia, chromosome 22 deletions) and established the importance of the ETA/ET-1 signalling system for cardiovascular and craniofacial development. Gene deletions for ET-3 and ETB receptors exhibit a different and non-overlapping phenotype to ET-1/ETAdeficient animals. They are viable at birth and survive for up to 8 weeks, but display aganglionic megacolon, as a result of absence of ganglion neurones, together with a disorder of the pigment in their coats Hosoda et al., 1994). In these mice, enteric nervous system precursors and neural crest-derived epidermal melanoblasts fail to colonize the intestine and skin. This phenotype resembles Hirschsprung's disease in man.
Deleting genes encoding all the key molecules, ET-1, ET-3, ETA, ETB, ECE-1 and ECE-2 has been accomplished in mice generating important information about their effect on phenotype. The deletion of the gene for ET-2 has now been reported. The physiological role of ET-2 has been puzzling. It had been assumed that the actions of ET-2 would be similar, if released, to the more widely distributed and abundant ET-1. Current antagonists block both ET-2 and ET-1 with the same potency and are not yet able to distinguish the actions of these peptides.
A key advance in the field was the generation by Chang et al. (2013) of a global ET-2 gene knockout mouse, which surprisingly exhibited a distinct phenotype to global ET-1 or ET-3 gene deletions. These mice showed severe growth retardation, internal starvation characterized by hypoglycaemia, ketonaemia and increased levels of starvation-induced genes. Mice were profoundly hypothermic and the median lifespan could be significantly extended by housing in a warm environment. The intestine was morphologically and functionally normal, which was unexpected as murine ET-2 (see Ling et al., 2013), also known as vasoactive intestinal contractor, is present throughout the gastrointestinal tract suggesting, in this tissue at least, in the absence of ET-2, ET-1 continues to mediate signalling. In agreement, intestinal epitheliumspecific ET-2 knockout mice showed no abnormalities in growth and survival. In marked contrast, dramatic changes were observed in lung morphology and function. Mice had breathing difficulties after the first week exhibiting enlarged air spaces with substantial simplification of lung alveolar structure, larger lung capacities leading to abnormally elevated carbon dioxide (hypercapnia) and deficiency of oxygen (hypoxaemia) in the blood. Hypothermia and lung dysfunction might not be specific, but may be due to a secondary effect of internal starvation because of ET-2 deficiency. However, it is possible that these studies identify an important function for ET-2 in the pulmonary system. The authors showed that mRNA encoding ET-2 was only present in epithelial cells whereas receptor mRNA was mainly present in mesenchyme, consistent with a paracrine function for ET-2 in the lung.
Pharmacological significance: is ET-2 the inducible isoform?
The dramatic effects on the lung suggest a crucial role for ET-2 at birth, at least in mice. The lungs, and potentially the heart, remain major therapeutic targets for ET antagonists in humans in the treatment of PAH. In rodents, ET-2 was less widely distributed than ET-1, mainly found in heart, lung, ovary, stomach and all regions of the intestine (de la Monte et al., 1995;Takizawa et al., 2005). ET-2 expression in human tissue was similar, being present in the human heart (Plumpton et al., 1996b), lung (Marciniak et al., 1992), kidney (Karet and Davenport, 1996), vasculature, (Howard et al., 1992) intestine and ovaries (Palanisamy et al., 2006), but has not been investigated in pathophysiological tissue. In humans, alternatively spliced mRNA variants encoding ET-2 have been detected with a specific pattern of distribution in various tissues. Some of these variants contain sites for the post-transcriptional processing of preproET-2 into mature ET-2, which may be altered in a tissue-specific manner . The best established model of spatial and temporal ET-2 signalling is in the ovary, a highly vascular tissue, which undergoes cyclic changes as follicles grow, rupture and transform into corpora lutea and eggs are periodically released (Ko et al., 2012). In rats, low levels of ET-1 are constitutively expressed throughout the ovulatory cycle, whereas ET-2 is induced transiently at much higher concentrations during the period of ovulation to luteal phases (Ko et al., 2006). ET-2 is expressed in the granulosa cells of periovulatory follicles, but not during other stages of follicular development. In mice, induced superovulation results in a dramatic increase in ET-2 mRNA expression (Palanisamy et al., 2006). ET-2 expression surged in response to gonadotropin and quickly declined by 13 h, which coincided with the time of follicular rupture. Crucially, both ET receptor subtypes are present and their ratio does not seem to change. Thus, the ET-2 gene appears to be switched on only when increased levels of ET are required, with ET-2-mediated contraction being the final signal facilitating ovulation (Ling et al., 2013).
ET-1 signalling is well established in neural crest migration. In the developing mouse retina, constitutive overexpression of ET-2 affects vascular development by inhibiting endothelial cell migration across the retinal surface and subsequent endothelial cell invasion into the retina, an action mediated by ETA receptors. Interestingly, over-expression is spatially localized as it has no obvious action on vascular structures in brain or skin . Constitutive over-expression of ET-2 signalling also protected photoreceptors from light damage (Braunger et al., 2013). Similarly, Bramall et al. (2013) found expression of ET-2 mRNA was greatly increased in the photoreceptors of mouse models of inherited photoreceptor degeneration and, using the global ET-2 knockout mice, showed increased ET-2 expression was protective of the mutant photoreceptors.
Case for re-evaluating the role of ET-2
Targeting the ET-2 gene in mice provides compelling evidence that, while both ET-1 and ET-2 can coexist in the same tissue compartments, there is a critical, but distinct ET-2 pathway. A key role has now been established for ET-2 in ovarian physiology. This may be accomplished at the level of gene expression, but differences may also exist in peptide synthesis by ECEs and chymase, which may allow the two ET peptide pathways to be distinguished pharmacologically and become separate drug targets. Additionally, pharmacological differences have been identified, for example ET-2 dissociates from receptors much more rapidly than ET-1 and higher affinity has been reported, for example in the brain (Ling et al., 2013). Detailed studies comparing rat mesenteric resistance and basilar arteries demonstrated that ET-1 and ET-2 initiate and maintain vasoconstriction by different downstream mechanisms raising the prospect of 'biased signalling' mediated by two structurally different agonists activating the same receptor (Compeer et al., 2013).
Potential new therapeutic strategies exploiting ET B receptor agonists
The pharmacological rationale for this strategy is that ET-1, tonically released from endothelial cells, also interacts with endothelial cell ETB receptors. The importance of this counter-regulatory pathway has been underestimated to date. Endothelial cells line the vasculature of every organ and tissue in the body that receives blood supply. Although the cells represent ∼1% of the weight of the vessel wall, they have a combined mass comparable with some endocrine glands. Crucially, ET-1 feeding back onto endothelial receptors to release NO not only limits ETA-mediated vasoconstriction by stimulation of vascular cGMP, but also limits further ET-1 release. Thus in the vasculature, NO and other dilators are crucial in balancing the ET system, but these may be reduced or absent in pathophysiological conditions.
ET-1 +/− heterozygous mice developed elevated BP and mild hypertension, rather than the fall in BP that might have been expected. Partial deletion of the gene allows survival and produced lower levels of ET-1 in plasma and lung tissue than wild type (Kurihara et al., 1994). These results suggest that ET-1 has an essential physiological role in cardiovascular homeostasis. Low levels promote vasodilatation whereas higher and pathophysiological concentrations of ET-1 increase BP and total peripheral vascular resistance. While ET A receptor selective antagonists such as BQ123 (Ihara et al., 1992) cause the expected vasodilatation in humans (Haynes and Webb, 1994), the ETB receptor selective antagonist BQ788 (Ishikawa et al., 1994) caused systemic vasoconstriction in healthy volunteers, showing that the main consequence of activation of endothelial ETB receptors by tonically secreted ET-1 was the physiological basal release of NO (Love et al., 2000). In agreement, initial vasodilatation can be detected in the human forearm vascular bed following infusion of low concentrations of ET-1 whereas higher doses caused sustained vasoconstriction (Kiowski et al., 1991). A contribution to vasoconstriction may also be the result of occupancy by ET-1 of the clearance ETB receptors causing an ETA-mediated vasoconstriction.
ET B agonists in chemotherapy: IRL1620
ET-1 acting on ETA receptors has been proposed to stimulate cell proliferation, migration, invasion, osteogenesis and angiogenesis in several cancers. New vessels forming in tumours are characterized by high densities of ETA receptors in smooth muscle, for example in glioblastoma multiforme in the brain (Harland et al., 1998). Conversely, ETB receptors may oppose tumour progression by promoting apoptosis and clearing ET-1 (Bagnato et al., 2011;Rosanò et al., 2013b). The strategy of stimulating ETB receptors to cause transient vasodilatation is being developed to increase the penetration of cytotoxic anti-tumour agents into tumours and to minimize the concentration in healthy tissue. IRL1620 was originally developed as a tool compound (Takai et al., 1992). The N-terminus has an N-succinyl modification, which is likely to reduce metabolism by non-specific peptidases, but it is not orally active and requires injection. Despite these unpromising pharmacokinetic features, it is being used in vivo and has emerged as a possible clinical candidate in improving the delivery of drugs to tumours. IRL1620 infused into rats improved the efficacy of doxorubicin and 5-flurouracil by significantly increasing the amount of drug in tumours in rat models of prostrate and breast cancer. In addition, radiation-induced reduction in tumour volume was enhanced, suggesting IRL1620 can significantly increase the efficacy of radiotherapy in the treatment of solid tumours. The results suggest that for a given dose of drug, the efficacy in reducing the tumour could be improved (Gulati and Rai, 2004;Rajeshkumar et al., 2005a,b;Lenaz et al., 2006;Rai et al., 2006;Gulati et al., 2012). A phase I trial to determine the safety, tolerability, pharmacokinetics and pharmacodynamics of IRL1620 (known as SPI-1620 licensed by Spectrum Pharmaceuticals, Technology Drive Irvine, CA, USA) in patients with recurrent progressive carcinoma has been successfully completed and shown to selectively and transiently increase tumour blood flow http://www.cancer.gov/clinicaltrials). A phase II trial was initiated in 2013 to determine the effectiveness of SPI-1620 in combination with docetaxel in patients with advanced biliary cancer (http://clinicaltrials.gov/ct2/show/NCT01773785) and in combination with docetaxel compared with docetaxel alone for patients with non small-cell lung cancer after failure of platinum-based chemotherapy (http://clinicaltrials.gov/ show/NCT01741155).
ET B agonists in neuroprotection
The human brain contains the highest density of ET receptors, with the ETB receptor subtype comprising about 90%, in areas such as cerebral cortex (Harland et al., 1998). Binding and functional studies have demonstrated glia mainly express ETB receptors whereas ETA receptors are localized mainly on neurones (Morton and Davenport, 1992). Smooth muscle cells from large arteries and small intracerebral vessels only express ETA receptors (Adner et al., 1994;Harland et al., 1995;Pierre and Davenport, 1999) with endothelial cell ETB receptors mediating relaxation (Lucas et al., 1996). The small pial arteries and arterioles penetrating into the brain play a major role in the maintenance of cerebral blood flow (autoregulation). These vessels are particularly sensitive to ET-1 compared with peripheral vessels and the peptide has been a long-standing candidate in the genesis or maintenance of cerebrovascular disorders such as delayed vasospasm associated with subarachnoid haemorrhage or stroke. ET-1 does not cross the blood-brain barrier from the plasma, but may do so when compromised by subarachnoid haemorrhage, stroke or head injury. Strategies for targeting cerebrovascular disease have focused previously on the use of ET receptor antagonists, firstly to block vascular receptors mediating cerebrovasospasm that may be responsible for delayed cerebral ischaemia seen after aneurysmal subarachnoid haemorrhage and could contribute to ischaemic core volume in stroke. Secondly, to block neural receptors that mediate increases in intracellular free calcium (Morton and Davenport, 1992) and initiate the pathophysiological processes leading to neuronal death.
A new emerging strategy is to use ETB receptor agonists such as IRL1620 to provide vasodilatation and neuroprotection. The peptide reduced neurological damage following permanent middle cerebral artery occlusion in rats, a model of focal ischaemic stroke. Animals received i.v. injections of IRL1620 after the occlusion, which dramatically reduced infarct volume (by more than 80% in the acute and 70% in the chronic study), prevented cerebral oedema, reduced oxidative stress markers and improved all neurological and motor function for up to 7 days (Leonard et al., 2011;2012;Leonard and Gulati, 2013). Rats treated with the amyloid peptide Aβ1-40 administered into the intracerebral vessels display increased markers of oxidative stress in the brain. IRL1620 significantly reduced oxidative stress and importantly the cognitive impairment (Briyal et al., 2014). As discussed later, a reduction in ECE activity is associated with accumulation of amyloid β-peptide and neurotoxicity early in progression of Alzheimer's disease (Eckman et al., 2001;Pacheco-Quinto and Eckman, 2013). These results are limited to disease models in a single species, and it is unclear whether the molecular mechanisms would translate to humans, but taken together, they suggest that an ETB receptor agonist might offer a new therapeutic strategy in Alzheimer's disease and provide neuroprotection following cerebral ischaemia in conditions such as stroke.
No evidence for further ET B receptor subtypes
Previous studies have suggested that ETB receptors could be further subdivided into ETB1 present on endothelial cells and ETB2 on smooth muscle cells. Studies continue to be published with this misleading nomenclature, but current evi-dence only supports the existence of two subtypes, ET A and ETB, according to NC-IUPHAR nomenclature (Davenport, 2002;Alexander et al., 2013a,b). Firstly, Mizuguchi et al. (1997) demonstrated unequivocally that in ETB receptor knockout mice, both the direct constrictor and indirect vasodilator responses to the ETB agonist sarafotoxin S6C were abolished. Selective deletion of endothelial ETB receptors in mice (demonstrated by autoradiography to leave unaltered ETB receptors expressed by other cell types) impaired, as expected, the clearance of an i.v. bolus of labelled ET-1 compared with controls (Bagnall et al., 2006;Kelland et al., 2010). Secondly, Flynn et al. (1998) were unable to distinguish pharmacologically, in extensive competition-binding experiments, between ETB receptors expressed by human isolated endothelial and smooth muscle cells in culture. In concordance, saturation-binding assays in human tissue always found ETB radiolabelled ligands bound with a single affinity and Hill slopes close to unity with no suggestion of further subtypes (Molenaar et al., 1992;Nambi et al., 1994) or in competition-binding versus radiolabelled ET-1 in human native (Peter and Davenport, 1995;Russell and Davenport, 1996) or recombinant ETB receptors (Nambi et al., 1994;Reynolds et al., 1995). Clozel and Gray (1995) showed that endothelial and smooth muscle ETB receptors cannot be distinguished functionally.
Do ETs interact with any other GPCRs?
The virtually complete sequencing of the human genome has allowed the identification of all of the human gene sequences that could potentially encode GPCRs that are currently classified as 'orphan' to indicate that their endogenous ligand is not yet known (Foord et al., 2005;Davenport et al., 2013). In this catalogue, the most closely related to the ETA and ETB receptor subtypes are the orphan receptors GPR37 (also know as ET receptor type B-like or Parkin-associated ET receptor-like receptor) and its related receptor, GPR37L1. A recent highthroughput screen tested ∼10 000 biologically active compounds for binding to 82 remaining orphan GPCRs. None of the ∼20 ET peptides tested at high concentration (including all three mature isoform and their corresponding big ET precursors, C-terminal metabolites, BQ123 and the ETB receptor agonist BQ3020) activated any of the expressed receptors, including GPR37 or GPR37L, supporting the established concept of ETs binding to only two receptor subtypes. Two orphan neuropeptides, prosaptide and prosaposin, have recently been proposed as cognate ligands for GPR37 and GPR37L (Meyer et al., 2013).
Clinical application of ET antagonists
Bosentan, ambrisentan and withdrawal of sitaxentan PAH is a progressive condition with no cure and has a major impact on the ability to lead a normal life. It is an orphan disease (∼100 000 patients in US and Europe). PAH involves constriction of pulmonary arteries and is characterized by high BP in the lungs, ultimately leading to right heart failure and death. A number of pathways have been implicated in the development of PAH including bone morphogenetic proteins, prostacyclin and ET-1. Restoring the imbalance between constriction and vasodilatation of blood vessels is the basis for current medical therapies, although the cause of death is right heart failure. Although ETA receptors are significantly increased in the right ventricle of patients with PAH (Kuc et al., 2014) and in the left ventricle of patients with heart failure (Zolk et al., 1999), surprisingly, ET receptor antagonists have clinical efficacy in the former, but not the latter group (Kohan et al., 2012).
Bosentan (Tracleer, Ro47-0203) was the first ET receptor antagonist to be introduced into the clinic for the treatment of PAH (Rubin et al., 2002) and, as an orally acting agent, at the time represented a major advance over existing therapies such as prostacyclin analogues. Bosentan is classified as a mixed ETA/ETB receptor antagonist blocking both receptors (Figures 2 and 3). The second antagonist to enter the clinic in 2007 was ambrisentan (Letairis, Volibris, LU208075, Figures 2 and 3), which was reported to display some ETA receptor selectivity (Vatter and Seifert, 2006) followed by the most highly selective ETA receptor antagonist sitaxentan (Thelin, TBC11251) (Barst et al., 2004). While hepatotoxicity is a known side effect of ET antagonists, it is usually reversible and related to dose. Unfortunately, cases of idiosyncratic hepatitis resulting in acute liver failure leading to death have been reported with sitaxentan and the compound was withdrawn in 2010 (Don et al., 2012).
Next generation of ET antagonists: macitentan
Despite the current use of ET receptor antagonists and drugs targeting the two other principal pathways-that of NO with PDE5 inhibitors and that involving prostacyclin (PGI2)meta-analysis of PAH trials shows existing therapies only moderately increased the most widely used objective evaluation of functional exercise capacity (6 min walk distance) by 11%. The prognosis for patients with PAH remains poor with ∼15% mortality within 1 year. There remains an urgent need for new efficacious treatments that has lead to the development of macitentan.
Macitentan (Opsumit, ACT-064992, Figures 2 and 3) represents the next generation of orally active ET receptor antagonists and was developed by modifying the structure of bosentan to improve efficacy and tolerability (Bolli et al., 2012). Macitentan is described as a dual antagonist that blocks both ETA and ETB receptors and it inhibited [ 125 I]-ET-1 binding to human recombinant ETA receptors with an IC50 of 0.2nM and to ETB receptors with an IC50 of 391 nM. On the basis of these results, macitentan displays about 800-fold
Figure 2
Structures of ET receptor antagonists in clinical use bosentan, ambrisentan and macitentan. The structures of the NEP/ECE inhibitor pro-drug SLV306 and its active metabolite are also shown. BJP Endothelin@25 selectivity. A phase III clinical trial was successfully completed in 2012 (Pulido et al., 2013), and the compound gained approval from the US FDA in 2013 for the treatment of PAH. Macitentan is metabolized by the cytochrome P450 system, predominantly CYP3A4 and to a lesser extent the CYP2C19 iso-enzyme. Unlike other antagonists currently in use, one of the metabolites of macitentan, ACT-132577 (Figure 2), is pharmacologically active. Although it has a lower potency than the parent compound ACT-132577 reaches higher plasma concentrations, with a longer half-life of about 48 h (Iglarz et al., 2008;Sidharta et al., 2011;2013a,b). These factors are likely to contribute to improved activity of macitentan compared with bosentan. While in vitro studies suggested macitentan was likely to interact with other drugs (Weiss et al., 2013), other observed pharmacokinetic benefits included fewer interactions with other drugs at clinically used concentrations, no requirement to alter doses in patients with renal or hepatic impairment, improved hepatic safety and reduced oedema/fluid retention compared with bosentan. Key differences were also identified in the pharmacodynamic parameters. For example, in calcium release assays macitentan was more potent (KB = 0.1 nM) than bosentan (KB = 1.1 nM) and had a significantly longer receptor occupancy (17 min compared with 70 s) (Iglarz et al., 2008;Bruderer et al., 2012a,b;Gatfield et al., 2012). The authors suggested that the macitentan binding site differed slightly from the bosentan binding site and that this difference in interaction with amino acids in the receptor contributed to the slow dissociation of macitentan from the receptor, particularly leading to insurmountable antagonism. A number of clinical trials are actively recruiting (Patel and McKeage, 2014) including the use of macitentan for the treat-ment of digital ulcers in patients with systemic sclerosis, Eisenmenger's syndrome and, perhaps the most challenging, in patients with brain tumours (glioblastoma).
Compounds interacting with ET-1 synthesis and metabolism
Members of the neprilysin (NEP)-like family of zinc metalloendopeptidases play key roles in the ET pathway (Turner and Murphy, 1996;Turner et al., 2001). Neutral endopeptidase (NEP) is a membrane-bound thermolysin-like zinc metalloendopeptidase, which is particularly abundant in human kidneys. The enzyme metabolizes a number of peptides including enkephalins, tachykinins, natriuretic peptides as well as the ETs (Turner and Tanzawa, 1997). Inactivation of ET-1 is via a two-stage process, opening of the Ser 5 -Leu 6 bond, followed by cleavage at the amino side of Ile 19 resulting in an inactive peptide, which is inhibited by phosphoramidon (Skolovsky et al., 1990). Pharmacological intervention in the pathway is challenging because NEP-like enzymes also include the synthetic enzymes ECE-1, ECE-2 and KELL. The ECEs are also inhibited by phosphoramidon and ECE inhibitors currently in clinical trials have significant NEP inhibitory activity and it seems counter-intuitive to inhibit the degradative pathway. However, in practice, inactivation of ET-1 is thought to be mainly via binding and internalization of the ETB receptor and ET-1 is essentially stable in plasma. Binding to ETB receptors, particularly in those organs such as the lung expressing high densities of the subtype, are critical for inactivation of the peptide. After internalization of the ligand-
Figure 3
Selectivity of ET receptor antagonists for ETA versus ETB receptors shown on the vertical axis as reported by the companies that discovered the compounds. Selectivity was mainly determined by measuring affinity constants in separate competition assays against [ 125 I]-ET-1 using human recombinant ETA versus ETB receptors and may not reflect selectivity measured in clinically relevant native tissues. Bosentan, ambrisentan and macitentan are currently approved for clinical use and are highlighted. receptor complex to the lysosome, ET-1 is thought to be degraded, like other peptides, by cathepsin A. In support, cathepsin A knockout mice showed reduced ET-1 degradation and significantly increased arterial BP. In humans, genetic defects of cathepsin A include hypertension and cardiomyopathies (Seyrantepe et al., 2008).
ECE-1
It is now well established that ET is synthesized in a three-step process, with pre-pro-ET-1 initially cleaved by a signal peptidase to proET-1, which is in turn cleaved by a furin enzyme to an inactive precursor big ET-1 (Figure 1). Although low MW inhibitors of furins have been reported, furins cleave a number of other proteins to mature or active forms and therefore are not an easy tractable drug target for selectively reducing ET-1, without altering other pathways. Targeting the ECE enzymes responsible for transformation of big ET-1 to the mature, biologically active ET-1 has been more promising (Xu et al., 1994;Turner and Murphy, 1996). In humans there are four isoforms, ECE-1a-d, derived from a single gene by the action of different promoters. Structurally, they differ only in the amino acid sequence of the extreme N-terminus. ECE-1 localizes to the small secretory vesicles of the constitutive pathway from where ET-1 is continuously released to maintain normal vascular tone. Unusually for vasoactive peptides, ET-1 is also synthesized by ECE-1 and stored in specialized Weibel-Palade bodies within endothelial cells until its release following an external physiological or pathophysiological stimulus (the regulated pathway) to produce further vasoconstriction (Russell et al., 1998a,b;Russell and Davenport, 1999b).
In addition to intracellular endothelial cell ECE, the enzyme is also present on vascular smooth muscle, efficiently converts big ET-1 in human vessels in vitro and is up-regulated in atherosclerosis (Maguire et al., 1997;Maguire and Davenport, 1998). Given the larger volume of the smooth muscle compared with the single layer of endothelium, smooth muscle ECE may be a more important source of ET-1 in pathophysiological conditions.
ECE-2
ET-1 is also synthesized by a second membrane-bound metalloprotease, ECE-2 (Emoto and Yanagisawa, 1995;Yanagisawa et al., 2000;Lorenzo et al., 2001) with ∼60% sequence similarity to ECE-1. It is distinguishable from ECE-1 by having an optimum pH of 5.5 for activity. In human endothelial cells, ECE-2 was localized to the acidified environment of vesicles of the secretory pathway, but unlike ECE-1 it is not found in storage granules (Russell and Davenport, 1999b). Four isoforms exist, differing in their N-terminus: ECE-2a-1 and ECE-2a-2 are expressed predominantly in peripheral tissues and ECE-2b-1 and ECE-2b-2 in the brain, possibly representing the neuronal isoforms (Ikeda et al., 2002). The physiological importance of this pathway for ET-1 synthesis remains to be determined, as ECE-2 also metabolizes other peptides such as bradykinin. However, the requirement for an acidic pH suggests a role in pathophysiological conditions associated with low pH such as ischaemia. ECE-1/ECE-2 knockout mice display increased developmental defects compared with deletion of ECE-1 or ECE-2.
Alternative, non-ECE synthetic pathway: chymase
ET-1 can also be synthesized indirectly by chymase, a serine protease present in mast cells. Big ET-1 is converted to ET-11-31 by cleaving the Tyr 31 -Gly 32 bond (Figure 1), which in turn is converted to the mature peptide via Trp 21 -Val 22 bond (Fecteau et al., 2005;D'Orleans-Juste et al., 2008). The existence of an alternative pathway was originally predicted when ET-1 and ET-2 were detected in embryos of the ECE-1/ECE-2 doubleknockout mouse (Yanagisawa et al., 2000).
The importance of this alternative pathway remains unclear, but importantly ET-11-31 was equipotent with big ET-1 in causing vasoconstriction in human isolated vessels, including coronary arteries, and this was associated with the appearance of measurable levels of ET-1 in the bathing medium. ET-11-31 displayed no selectivity between ETA and ETB receptors in human heart and vasoconstriction was fully blocked by ETA receptor selective antagonists, reflecting the predominance of the ETA receptor on vascular smooth muscle (Maguire et al., 2001;Maguire and Davenport, 2004). The precise physiological role of mast cells within human blood vessels is unclear, but following degranulation, which may occur under pathophysiological conditions, the mast cell chymase is associated with interstitial spaces with the potential to convert circulating big ET-1 and provide a further source of ET-1. Mast cell expression is increased in cardiovascular disease, for example in atherosclerotic lesions. It is therefore possible that the contribution of this pathway within the vasculature, leading to over-expression of ET-1, may be underestimated particularly in conditions of endothelial malfunction where opposing levels of endogenous vasodilators may be reduced. It is unclear whether under conditions of NEP/ECE inhibition the rising levels of big ET-1 would favour increased conversion by the serine protease pathway, thus increasing the pressor effect via ETA receptors or whether excretion of unmetabolized big ET-1 by the kidney would be sufficient to remove the elevated levels of precursors (Johnström et al., 2010).
KELL and ET-3 synthesis
Although big ET-3 is converted by ECE-1 to ET-3, owing to difference in the C-terminus the efficiency is much less than for ET-1. In contrast, big ET-3 is reported to be efficiently converted by KELL (Lee et al., 1999). KELL is a membranebound glycoprotein expressed in human erythrocytes and one of the major antigens; it is also related to mammalian NEP-like enzymes including ECE-1 and ECE-2 (Turner and Tanzawa, 1997). If KELL is the main synthetic pathway for ET-3, a possible benefit of inhibiting ECE would be to increase the ratio of ET-3 to ET-1, which could then differentially produce beneficial vasodilatation via the ETB receptor, but this speculative hypothesis has not been tested.
Pharmacological inhibition of ECE by research compounds
A combination of phosphoramidon and thiorphan has been widely used to identify ECE activity. This is based on the finding that the conversion of big ET-1 to ET-1 is inhibited by phosphoramidon, but not by thiorphan, and has been shown both in vitro and in vivo. Importantly for evaluating the sig-BJP Endothelin@25 nificance of animal models, both compounds have also been used in clinical studies to characterize big ET-1 conversion (see Webb, 1995;Plumpton et al., 1996a;Hand et al., 1999). Low MW, non-peptide ECE inhibitors have been developed and one that has been widely used in vitro and in vivo animal models and is commercially available is CGS26303 (De Lombaert et al., 1994). CGS26303 inhibited conversion of all three big ETs in human isolated blood vessels but, importantly, did not interfere with the interaction of mature peptides with ET receptors (Yap et al., 2000). Although primarily an NEP inhibitor, SOL1, a more recent combined NEP/ECE non-peptide inhibitor with modest inhibition of ECE-1 in vitro, was remarkably potent in vivo, fully blocking the big ET-1-induced rise in BP at a dose of 10 μmol·kg −1 (Nelissen et al., 2012).
A disadvantage of using phosphoramidon is that it is not selective for ECE. An alternative tool compound is PD159790, which inhibits ECE-1 with an IC50 value of 3 μM; at this concentration the compound is selective for ECE-1 over NEP (Ahn et al., 1998). PD159790 has been shown experimentally in HUVECs to inhibit conversion of big ET-1 at pH 6.9, optimum for ECE-1, but did not affect big ET-1 conversion to the mature peptide at pH 5.4, optimum for ECE-2 (Russell and Davenport, 1999a). The compound did not inhibit the further metabolism of ET-11-31, the chymase product of big ET-1 (Maguire et al., 2001) and can be used to distinguish between the three different pathways for ET synthesis. While the mature peptide is located in intracellular Weibel-Palade bodies or secretory vesicles within endothelial cells and a proportion of big ET-1 is converted to ET-1 intracellularly, it is not reported whether ECE inhibitors can cross the plasma membrane to access these intracellular sites. The main effects of these inhibitors may be on external ECE. In agreement with this proposal, the SLV306 metabolite KC-12615 (see later) effectively prevented conversion of exogenous big ET-1 in human vasculature (Seed et al., 2012).
Emerging NEP/ECE inhibitors
Selective inhibitors of ECE have not progressed into clinical applications. SLV306 (daglutril, Figure 2) is an orally active, mixed enzyme inhibitor of both ECE and NEP. It is a pro-drug being converted in vivo to the active metabolite, KC-12615. This latter molecule has a pharmacological profile similar to phosphoramidon, inhibiting NEP in the nanomolar range, but with more modest inhibition in the micromolar range for ECE (Meil et al., 1998;Jeng et al., 2002). The therapeutic basis is that while inhibition of NEP alone increased plasma concentrations of atrial natriuretic factor (ANP) to cause vasodilatation, NEP inhibitors are ineffective as anti-hypertensives, probably because NEP also degrades vasoconstrictor peptides including ET. A combined ECE/NEP inhibitor would be predicted to reduce the systemic conversion of big ET-1 to the mature peptide and increase dilator peptides such as ANP. SLV306 is well tolerated with few or none of the side effects such as increases in liver function, oedema, observed with ET receptor antagonists (Dickstein et al., 2004;Parvanova et al., 2013). A potential disadvantage is big ET-1 might still be converted to ET-1 by an alternative pathway such as chymase. However, in animal models with normal renal function, this did not occur and big ET-1 labelled with the positron emitter 18 F was rapidly accumulated unchanged in the kidney follow-ing inhibition of NEP/ECE, with no evidence of conversion by another pathway (Johnström et al., 2010).
The effect of a combined NEP/ECE inhibitor has been tested in volunteers in a randomized, double-blind trial. Following oral administration of three increasing doses of SLV306 (to reach an average target concentration of 75, 300, 1200 ng·mL −1 of the active metabolite KC-12615), big ET-1 was infused into 13 male volunteers at a rate of 8 and 12 pmol·kg −1 ·min −1 (20 min each). At the two highest concentrations tested, SLV306 dose-dependently attenuated the rise in BP after big ET-1 infusion. There was a significant increase in circulating big ET-1 levels compared with placebo, indicating that SLV306 was inhibiting an increasing proportion of endogenous ECE activity. Importantly, plasma ANP concentrations also significantly increased, consistent with systemic NEP inhibition (Seed et al., 2012).
SLV306 in animal models and patients with type 2 diabetes and nephropathy
Diabetes causes activation of the renal ET system, which leads to progressive renal damage by cell proliferation and interstitial inflammation. Inhibitors of the renin-angiotensin system are widely used in treatment for hypertensive patients with type 2 diabetes, but are less effective in the advanced stages of diabetic renal disease. Studies in an animal model suggested that SLV306 had a similar efficacy to the angiotensin converting enzyme (ACE) inhibitor captopril in reducing proteinuria and preventing nephrosclerosis (Thone-Reinke et al., 2004). In this study, rats were treated with streptozotocin for twenty weeks and the effects of SLV306 (30 mg·kg −1 per day) compared with those of captopril (10 mg·kg −1 per day). SLV306 significantly decreased renal interstitial matrix content as well as protein and albumin excretion in diabetic rats, independent of BP and was as effective as captopril. These results suggested SLV306 treatment on top of blocking the reninangiotensin system might have an additional benefit in reducing BP and improving renal function. Parvanova et al. (2013) tested the efficacy of SLV306 in 45 patients with type 2 diabetes mellitus who had albuminuria and were already receiving the angiotensin receptor antagonist losartan, together with up to two additional antihypertensive drugs, in a randomized, crossover, double-blind, placebo-controlled trial. Although 8 weeks of treatment with SLV306 together with losartan did not significantly alter urinary albumin excretion or renal haemodynamic measures, the authors showed for the first time that the combination decreased ambulatory BP (particularly for systolic hypertension) in this patient group that are often resistant to treatment. There was a small, but significant increase in plasma big ET-1, consistent with ECE inhibition, but surprisingly not in pro-ANP. Increases in the natriuretic peptides was measured in healthy volunteers by Seed et al. (2012). Interestingly, the effect of SLV306 in this study on BP was higher at night (10 versus 12 mmHg). This is of potential importance as increased hypertension at night is a strong cardiovascular risk factor in this patient population. The molecular mechanism is not yet known, as plasma levels of big ET-1 were not reported separately for daytime versus night. The study was comparatively short and did not reveal significant changes in albumin excretion as predicted from animal studies. Long-term trials are required to determine whether the observed lowering of BP by SLV306 will translate into longer-term renal and cardio-protection.
SLV306 and congestive heart failure
The effect of three single oral doses of SLV306 was tested in patients with congestive heart failure who underwent rightsided heart catheterization in a randomized, double-blind, placebo-controlled design (Dickstein et al., 2004). Pulmonary pressures and right atrial pressure decreased significantly in all SLV306 dose groups with the maximum decrease occurring at 6-8 h. Despite plasma levels of the drug increasing with dose, there was no clear dose-response relationship, which may have been the result of the comparatively small numbers (18)(19)(20) in the study.
Insight in NEP/ECE inhibition from animal models
The efficacy of inhibiting NEP/ECE in animal models associated with increases in the ET signalling pathway has provided clues to future clinical applications. The development of nephropathy in diabetes is associated with a poor outcome, eventually leading to end-stage renal disease. In patients with diabetes, urinary excretion of protein and albumin rises and is associated with increased risk of cardiovascular disease. In diabetic rats, SLV306 decreased renal matrix protein content, protein and albumin excretion. The magnitude of these effects was comparable to those of ACE inhibition and independent of BP (Thone-Reinke et al., 2004). Currently, there are few drugs for the treatment of chronic renal failure. SLV338, a NEP/ECE inhibitor, abolished renal tissue damage (interstitial fibrosis, glomerulosclerosis, renal arterial remodelling) in rat models of both acute kidney failure as well as chronic renal failure. The compound preserved kidney function and reduced mortality . In spontaneously hypertensive stroke-prone rats, SLV338 significantly improved survival in comparison with the vehicletreated group in a BP-independent manner and could offer a new therapeutic approach for primary stroke prevention and improvement of mortality (Wengenmayer et al., 2011). SLV338 was also tested for cardiac protection in rat model of experimental renovascular hypertension (two-kidney, oneclip). SLV338 prevented cardiac remodelling to the same extent as losartan, but in a BP-independent manner. This effect was at least partly mediated via suppression of cardiac TGF-β1 expression .
ET has been proposed to be a mediator in toxic liver injury. However, while SLV338 largely prevented the activation of the ET system it did not prevent D-galactosamineinduced acute liver injury in rats. The authors speculated that SLV388 should be tested in a less severe model of liver injury, as very severe intoxication might not be relevantly amenable to pharmacological interventions .
ECE-1 and amyloid deposition
The strategy in the cardiovascular and renal systems has been to inhibit ECE-1 activity. However, evidence is emerging that ECE-1 may function in the brain as a novel enzyme degrading amyloid β-peptides at several sites. Deposition of amyloid in the brain in Alzheimer's disease is determined not only by its production, but also by its catabolism. ECE-1 inhibition produces, in addition to extracellular accumulation, accumulation of intracellular amyloid β-peptides within endosomal/ lysosomal and autophagic vesicles and an intracellular pool, is partly regulated by ECE activity at the sites of production. Reduction in ECE activity leads to accumulation of amyloid β-peptide, which is associated with neurotoxicity early in progression of Alzheimer's disease (Eckman et al., 2001;Pacheco-Quinto and Eckman, 2013). The clearance of Aβ 1-40 in mice was almost completely inhibited by phosphoramidon as well as insulin indicating that human Aβ1-40 was degraded, at least in part, by a phosphoramidon-sensitive pathway, implicating both ECE and NEP (Ito et al., 2013).
To date, these investigations have comprised in vitro or in vivo rodent studies It is not yet clear whether enhancing ECE-1 activity is a potential drug target in Alzheimer's disease rather than inhibiting ECE-1, as in the periphery. ECE-like immunoreactivity has been localized to afferent and efferent fibres of neurones and neuronal cell bodies of mixed morphology in human brain (Giaid et al., 1991). Drugs increasing ECE activity such as enzyme enhancers or recombinant ECE would have to cross the blood-brain barrier, and it is not clear what effect this would have on ET signalling in the periphery.
What are the new ET drug targets in the future?
Epigentics Epigenetics can be defined as heritable changes in phenotype through mechanisms other than changes in DNA sequence. Epigenetic changes will therefore be preserved when cells divide and affect normal development and disease progression. Processes mediating epigenetic regulation include DNA methylation and histone modification, which involves posttranslational covalent modification of histone proteins by a range of writers, erasers and readers. This in turn modulates the ability of associated DNA to be transcribed. The histone code is read by specific families of proteins such as the bromodomains. These are of pharmacological significance because of the recent discovery of low MW inhibitors, which selectively modulate gene expression (Prinjha et al., 2012).
Epigenetic regulation is of particular importance in the ET pathway with transcription being the primary level of ET-1 regulation of the gene EDN1 by histone modifications and DNA methylation (Welch et al., 2013). Silencing of the EDNRB gene by DNA methylation during development of tumours results in the down-regulation of the receptor. As a result, promotion of apoptosis via the ETB receptor is reduced or lost, suggesting the ETB receptor could be a target for epigenetic drugs or ETB agonists where ET may be the cause of some tumour types, including melanomas and oligodendrogliomas (Bagnato et al., 2011). Intriguingly, epigenetic inactivation of ET-2 and ET-3 mRNA and protein was found in rat and human colon tumours and cancer cell lines, as a result of hypermethylation of EDN2 and EDN3 genes. Restoring expression of ET-2 and ET-3 in human cells significantly attenuated the migration and invasion of human colon cancer cells (Wang et al., 2013). As ET-3 displays high affinity for the ETB receptor, forced expression of ET-3 might BJP Endothelin@25 antagonize the actions of ET-1 mediated through ETA receptors. Such a mechanism would be consistent with proposed beneficial effects of the ETB receptor agonist, IRL-1620, in cancer.
Life before birth -is ET a critical pathway?
Maternal malnutrition and uteroplacental vascular insufficiency causes foetal growth restriction or intrauterine growth retardation. Low birthweight is linked to the later development of cardiovascular disease and hypertension. Maternal treatment with dexamethasone increased ET-1 constrictor responses and ETA receptor expression in placental arteries from the foetus (Docherty et al., 2001;Kutzler et al., 2003). Maternal nutrient restriction increased the histone acetylation and hypoxia inducible factor-1α (HIF-1α) binding levels in the ET-1 gene promoter of pulmonary vein endothelial cells (PVECs) in intrauterine growth restriction (IUGR) newborn rats, and continued up to 6 weeks after birth. These epigenetic changes could result in an IUGR rat being highly sensitive to hypoxia later in life, causing more significant PAH or pulmonary vascular remodelling. Recently, Xu et al. (2013) have shown that restricting the diets of pregnant rats so that they were undernourished increased the histone acetylation and HIF-1α binding levels in the proximal promoter region of ET-1, up-regulating the expression of ET-1, and this continued for 6 weeks after birth of the offspring. The authors speculate that that this intrauterine growth retardation could cause varying degrees of PAH later in life.
Generally, increased levels of histone acetylation are associated with increased transcriptional activity, whereas decreased levels of acetylation are correlated with suppressed gene expression. These data show that the open chromatin domains marked by histone H3 and H3K9/18 acetylation at the proximal promoter of ET-1 in IUGR rats are essential for transcription. Up-regulated ET-1 protein expression in PVEC from IUGR hypoxia rats is closely associated with the presence of increased acetylated H3 histones.
Biased signalling in the ET pathway
Pharmacology is undergoing a revolution in understanding the mechanism of 'biased signalling' via GPCRs. It was originally thought that ligands binding to a receptor would equally activate the G-protein pathway to produce a physiological response such as vasoconstriction (such as ET-1 acting on an ETA receptor) as well as activating the β-arrestin pathway, which eventually leads to desensitization, receptor internalization and 'silencing' of the pathway. It is now clear that some ligands are biased to one pathway over the other and secondly, rather than silencing, β-arrestin can activate alternative signalling pathways, some of which may be pathophysiological leading to longer-term signalling responses such as migration and proliferation.
Both ET receptor subtypes follow an β-arrestin and dynamin/clathrin-dependent mechanism of internalization, but it has been established that ETA receptors are recycled to the plasma membrane for further signalling while ETB receptors are targeted to lysosomes and degraded (Bremnes et al., 2000). In epithelial ovarian cancer, activation of ET-1/ETA receptor signalling is linked to many tumour-promoting processes including proliferation, angiogenesis, invasion and metastasis. NF-κB is an important signalling molecule in immunity, inflammation and cancer and β-arrestin is required for ET-1-induced NF-κB activation (Cianfrocca et al., 2014). ET-1 promoted podocyte migration via ET A receptors and increased β-arrestin-1, sustaining renal injury, a pathogenetic pathway that can affect podocyte phenotype in proliferative glomerular disorders (Buelli et al., 2014). β-Arrestin-1 has also been found to be a nuclear transcriptional regulator of ET-1-induced β-catenin signalling, an important mechanism for controlling cell division and progression of epithelial ovarian cancer and necessary for epigenetic modification, such as histone acetylation, and gene expression (Rosanò et al., 2009;. In addition, these effects are blocked by ET receptor antagonists and support a role for ETA-mediated/ β-arrestin-1 facilitating inter-protein interaction in invasive and metastatic behaviour of ovarian cancer.
Biased ET ligands?
Agonists that are biased towards β-arrestin signalling for parathyroid hormone and angiotensin AT1 receptors have been identified. G-protein pathway-selective agonists have been identified for nicotinic acid (nomenclature revised by NC-IUPHAR to hydroxycarboxylic acid) and μ-opioid receptors (Luttrell, 2014). The race is now to determine whether such strategies can be exploited therapeutically.
Do biased ligands (ligands that binding to the same receptor but activate different signalling pathways) exist for ET receptors? The study by Compeer et al. (2013) already mentioned suggested ET-1 and ET-2 initiate and maintain vasoconstriction by different downstream mechanisms. Biased signalling can be identified by comparing the affinities of ligands in β-arrestin recruitment assays with a G-proteinmediated response such as vasoconstriction . In this study the rank order of potency for β-arrestin recruitment at the ETA (ET-1 ≥ ET-2 > > ET-3) and ETB (ET-1 = ET-2 = ET-3) receptors was as expected and there was no obvious major differences in potency of ETs when comparing with G-protein-mediated constrictor assays in human vessels. However, at the ETA receptor sarafotoxin S6b was a partial agonist in β-arrestin recruitment, but a full agonist in causing constriction, suggesting the possibility of biased ligands. Such a bias could have been selected for during evolution by prolonging the effects of envenomation of the mammalian prey. While bosentan displays no selectivity for ETA over ETB receptors in radioligand binding and G-protein functional assays, unexpectedly, it was significantly more effective an inhibitor of β-arrestin recruitment mediated by ETA, compared with the ETB receptors . The result for bosentan is intriguing as many of the detrimental actions of ET-1, particularly in cancer, may use the β-arrestin pathway, and this suggests the potential to block a deleterious pathway while preserving activation of a beneficial pathway.
Both mixed ETA/ETB and ETA selective receptor antagonists have become established in the treatment of PAH, while NEP/ECE inhibitors such as SLV306 have promise as an alternative to receptor blockade and IRL1620 and other ETB receptor ligands have potential in improving cancer therapy.
All of these are approaches that have exploited low MW compounds. Over 50 therapeutic monoclonal antibodies have been approved for clinical use, but none yet against a GPCR target, emphasizing the technical challenge. Endomab-B1, a monoclonal antibody has recently been reported to bind with subnanomolar affinity for the ETB receptor, competed with ET-1 binding with greater efficacy than BQ788, and functions as an antagonist to block the ET-1induced IP3-calcium signalling pathway (Allard et al., 2013). Whether this antibody has clinical applications remains to be discovered. | 13,088.6 | 2014-11-24T00:00:00.000 | [
"Chemistry",
"Medicine"
] |
Roadmap for double hypernuclei spectroscopy at PANDA
Hypernuclear Physics is currently attracting renewed attention. Thanks to the use of stored p beams, copious production of double Λ hypernuclei is expected at the Panda experiment which will enable high precision γ–spectroscopy of such nuclei for the first time. In the present work we have studied the population of particle stable, excited states in double hypernuclei after the capture of a Ξ within a statistical decay model. In order to check the feasibility of producing and performing γ–spectroscopy of double hypernuclei at PANDA, an event generator based on these calculations has been implemented in the PANDA simulation framework P R.
Introduction
The Panda experiment [2] which is planned at the international Facility for Antiproton and Ion Research FAIR in Darmstadt aims at the high resolution γ-ray spectroscopy of double hypernuclei ( [1,3]).
For that, excited states of Ξ hypernucle will be used as a gateway to form double Λ hypernuclei.The production of low momentum Ξ − hyperons and their capture in atomic levels is therefore essential for the experiment.At PANDA the reactions p + p → Ξ − Ξ + and p + n → Ξ − Ξ 0 followed by re-scattering of the Ξ − within the primary target nucleus will be employed.After stopping the Ξ − in an external secondary target, the formed Ξ hypernuclei will be converted into double Λ hypernuclei.The associated Ξ + will undergo scattering or (in most cases) annihilation inside the residual nucleus and can be used as a tag for the reaction.
Because of this multi-stage process (see Fig. 1), spectroscopic studies, based on the analysis of two-body reaction kinematics like in single hypernuclei reactions, cannot be performed and spectroscopic information on double hypernuclei can only be obtained via their decay products: γ-rays emitted via the sequential decay of an excited double hypernucleus provide detailed information on the level structure.-Once the ground-state is reached, pions and protons from the mesonic or non-mesonic weak decays can be used to tag the reaction.-A complete detection of the decay products from the excited residue is in principle possible, though a resolution comparable to nuclear emulsion would be required.In addition, except for the case of very light hypernuclei also neutral particles are emitted which usually escape the detection.As a consequence, the determination of the ground-state mass of double hypernua e-mail<EMAIL_ADDRESS>is limited to those light nuclei which decay exclusively into charged particles.
Therefore, a unique identification of the double hypernuclei can only be reached via the emitted γ-rays from excited, particle stable states [3].In the present work we explore the feasibility to perform γ-spectroscopy of double hypernuclei at the planned PANDA experiment.
Population of Excited States in Double Hypernuclei
In order to limit the number of possible transitions and thus to increase the possible signal to background ratio, we focus in the following on light nuclei with mass numbers A 0 ≤ 13, where even a relatively small excitation energy may be comparable to their binding energy.In the following we therefore assume that the principal mechanism of de-excitation is the explosive decay of the excited nucleus into several smaller clusters.To describe this break-up process and in order to estimate the population of individual excited states in double hypernuclei after the conversion of the Ξ − , we have developed a statistical decay model which is reminiscent of the Fermi break-up model( [4,5]).We assume that the nucleus decays simultaneously into cold or slightly excited fragments( [6]).In the case of conventional nuclear fragments, we adopt their experimental masses in ground states, and take into account their particle-stable excited states.For single hypernuclei, we use the experimental masses and all known excited states.For double hypernuclei we apply theoretically predicted masses and excited states( [7,8]).
In the model we consider all possible break-up channels, which satisfy the mass number, hyperon number (i.e.strangeness), charge, energy and momenta conservations, and take into account the competition between these channels.Since the excitation energy of the initially produced double hypernuclei is not exactly known, we performed the calculations as a function of the binding energy of the captured Ξ − .Calculations were performed for several stable secondary targets ( 9 Be, 10 B, 11 B, 12 C, and 13 C) which lead to the production of excited states in double hypernuclei.ex.s.Λ + SHP g.s.
Λ + Λ Fig. 2. Predicted relative yield for ground states (g.s.) and excited states (ex.s.) in double (DH), single (SH), and twin hypernuclei (TH) as a function of the Ξ binding energy for a secondary 12 C target.
Fig. 2 shows as an example the production of ground (g.s.) and excited (ex.s.) states of conventional nuclear fragments as well as single (SHP), twin (THP) and double (DHP) hypernuclei in case of a 12 C target as a function of the assumed Ξ − binding energy.According to these calculations excited states in double hypernuclei (triangles) are produced with significant probability.Fig. 3 shows the population of the different accessible double hypernuclei For the 12 C target, excited states in 11 ΛΛ Be, 10 ΛΛ Be and 9 ΛΛ Li dominate over a wide range of the assumed Ξ − binding energy.
Experimental Integration and Simulation
The hypernuclei study will make use of the modular structure of PANDA.Removing the backward end-cap calorimeter will allow to add a dedicated nuclear target station and the required additional detectors for γ spectroscopy (see 4 and ref. ([3]))close to the entrance of PANDA.While the detection of anti-hyperons and low momentum K + can be ensured by the universal detector and its PID system, a specific target system and a γ-detector are additional components required for the hypernuclear studies.
The two-step production mechanism and a devoted experimental setup has been implemented into the simulation package PR as well as the PANDA setup.
-A primary carbon target at the entrance to the central tracking detector of PANDA.-A small secondary active sandwich target composed of silicon detectors and 9 Be, 10,11 B or 12,13 C absorbers to slow down and stop the Ξ − and to identify the weak decay products.-To detect the γ-rays from the excited double hypernuclei an array of 15 n-type Germanium triple Clusterarrays will be added.To maximize the detection efficiency the γ-detectors must be arranged as close as possible to the target at backward axial angles.
Spectroscopic studies of double hypernuclei
For the first step, namely the reaction p + p → Ξ − Ξ + we have employed an event generator ( [9]) which is based on an Intra Nuclear Cascade model and which takes as a main ingredient the rescattering of the antihyperons and hyperons in the target nucleus into account.From 50505 produced events which contained a Ξ − with a laboratory momentum less than 500 MeV/c, 7396 hyperons are stopped within the secondary target.
In the next step the excited particle stable states of double hypernuclei as well as excited states of conventional nuclei and single hypernuclei produced during the decay process de-excite via γ-ray emission.For the high resolution spectroscopy of excited hypernuclear states a Germanium γ-array detector( [10]) has also been implemented in the standard PANDA framework PR ( [3]).Fig. 5 shows the total energy spectrum summed over all germanium detectors for all events where a Ξ − has been stopped in the secondary carbon target.Note, that the size of the bins (50keV) in this plot is significantly larger than the resolution of the germanium detectors expected even for high data rates at normal conditions (3.4 keV at 110 kHz ( [11])).Several peaks seen in the spectrum around 1, 1.68 and 3 MeV are associated with γ-transitions in various hypernuclei.However, for a clear assignment of these lines obviously additional experimental information will be needed.
Weak decays of Hypernuclei
For the light hypernuclei to be studied in the initial phase of the planned experiments the non-mesonic and mesonic decays are of similar importance.In the following we will focus on the case of two subsequent mesonic weak decays of the produced double and single hypernuclei.For the light nuclei discussed below this amounts to about 10% of the total decay probability.Since the momenta of the two pions are strongly correlated their coincident measurement provides an effective method to tag the production of a double hypernucleus.Moreover, the momenta of the two pions are a fingerprint of the hypernucleus respective its binding energy.
The upper part of Fig. 6 shows the momentum correlation of all negative pion candidates from the secondary 12 C target.The various bumps correspond to different double or twin hypernuclei.The good separation of the different double or twin hypernuclei provides an efficient selection criterion for their decays.
The lower part of Fig. 6 shows the γ-ray spectrum gated on each of four regions indicated in the two-dimensional scatter plot.In the plots (a) and (d) the 1.684 MeV 1 2 + and the 2.86 MeV 2 + states of 11 ΛΛ Be and 10 ΛΛ Be, respectively, can clearly be identified.Because of the limited statistics in the present simulations and the decreasing photopeak efficiency at high photon energies, the strongly populated high lying states in 9 ΛΛ Li at 4.55 and 5.96 MeV cannot be identified in (b).The two dominant peaks seen in part (c) result from the decays of excited single hyperfragments produced in the Ξ − +C → 4 Λ H + 9 Λ Be reaction, i.e. 4 Λ H at an excitation energy of 1.08 MeV ( [12,13]) and 9 Λ Be at an excitation energy of 3.029 and 3.060 MeV ( [14,15]) are also well identified ( See ref. [3] for more details).The spectrum shown in Fig. 6 corresponds to a running time at PANDA of the order of two weeks.It is also important to realize that gating on double non-mesonic weak decays or on mixed weak decays may significantly improve the final rate.
Background
Particles produced simultaneously with the double hypernuclei do not significantly disturb the γ-ray detection.
The main limitation is the load of the Cluster-array by the high particles rate from uncorrelated background reactions.The pp → Ξ − Ξ cross section of 2µb is about a factor 2500 smaller than the inelastic pp cross section of 50mb at 3 GeV/c.The total energy spectra in the crystal has been obtained summing up event by event the energy contributions of the particles impinging on the Ge array.Background reactions have been calculated by using the UrQMD+SMM( [16]) event Generator.For the present analysis 10000 p + 12 C interactions at 3 GeV/c were generated( [17]).The total energy spectra resulting from the background simulation have been filtered by using the same technique as it was done for the signal events and by applying identical cuts.For 11 ΛΛ Be as well as 10 ΛΛ Be only one single event survived the cuts.Both of these events had an energy deposition in the germanium detector exceeding 10 MeV significantly.
Several further improvements of the background suppression are expected by exploring the topology of the sequential weak decays.This includes the analysis of tracks not pointing to the primary target, multiplicity jumps in the detector planes and the energy deposition in the secondary target.Furthermore kaons detected in the central detector of PANDA at forward angles can be used to tag the Ξ production.
This research is part of the EU integrated infrastructure initiative Hadron-Physics Project under contract number RII3-CT-2004-506078.We acknowledge financial support from the Bundesministerium für Bildung und Forschung (bmb+f) under contract number 06MZ225I.
Fig. 1 .
Fig. 1.Various steps of the double hypernucleus production in PANDA.
Fig. 3 .
Fig.3.Production probability of ground and excited states of accessible double hypernuclei after the capture of a Ξ − in a12 C nucleus and the Ξ − conversion into two Λ hyperons.Excited states in11 ΛΛ Be, 10 ΛΛ Be and 9 ΛΛ Li dominate over a wide range of the Ξ − binding energy.
Fig. 5 .
Fig.5.Total γ-ray spectrum resulting from the decay of double hypernuclei produced in a12 C target and detected in the germanium array and before additional cuts. | 2,783.2 | 2010-01-01T00:00:00.000 | [
"Physics"
] |
Active Learning for Imbalanced Ordinal Regression
Ordinal regression (OR), also called ordinal classification, is a special multi-classification designed for problems with ordered classes. Imbalanced data hinders the performance of classification algorithms, especially for OR algorithms, as imbalanced class distributions often arise in OR problems. In this article, we address an active learning based solution for imbalanced OR problem. We propose an active learning algorithm for OR (AL-OR) to select the most informative samples from unlabeled samples, mark them and add them to the training set. Based on AL-OR, we put forward an improved active learning for imbalanced OR (IAL-IOR), which further adjust the sampling strategy of AL-OR dynamically to make the training data as valuable and balanced as possible. Recall rate for multi-classification and new mean absolute error are designed to measure the performance of the algorithms. To the best of our knowledge, our algorithm is the first algorithm for imbalanced OR in algorithm level. The experimental results show that the proposed algorithms have faster convergence and much better generalization ability than the classical methods and the state-of-the-art methods under the evaluation measurements for imbalance problems. In addition, we also proved the effectiveness of our algorithms by statistical analysis.
I. INTRODUCTION
Multi-classification is an important task in machine learning. As a special case of multi-classification, OR is designed to solve the problems with ordered labels. For example, a teacher always rates his/her students by giving grades (A, B, C, D, F) on their performance [1]. When we use common multi-classification algorithms to predict the grades, the order information of the grades is obviously ignored. OR algorithms are designed to make full use of the order among the labels. OR problems usually appear in many research areas, such as medical research, age predict, brain computer interface, credit rating, econometric model, face recognition, facial beauty assessment, image classification, wind speed prediction, social sciences, text classification, and more [2].
Class imbalanced problem is a big challenge for classification algorithms. In this problem, class with more samples is majority class and class with fewer samples is minority class. It is important to predict the minority classes correctly, for the The associate editor coordinating the review of this manuscript and approving it for publication was Sabah Mohammed . minority classes often represent the unusual cases to which we should pay special attention, such as, high rate in credit rating, high speed in wind speed prediction and so on. Due to the designed principles, most machine learning algorithms optimize the overall classification accuracy, while sacrificing the accuracy on minority classes. Therefore it is necessary to design some methods to improve the classification accuracy on minority classes without jeopardizing the accuracy of the majority classes severely [3].
Imbalanced problem has been widely studied for standard classification problems (binary and multi-class classification). Data level methods balance the skew distributions on the dataset by using data pre-processing, such as undersampling, over-sampling. In under-sampling algorithms, the sample size of majority classes will be dislodged to reach the desired rates of different classes. In over-sampling algorithms, such as SMOTE (Synthetic Minority Over Sampling Technique) [4], MDO (Mahalanobis Distance-based Oversampling technique) [5], the sample size of minority classes will increase by generating new samples to reach the desired rates. When imbalanced data occurs in OR, new methods should be designed to tackle the peculiarities. Currently, the Graph-Based Over-sampling can generate synthetic samples by considering the distribution of minority class data and the order of samples. During over-sampling it captures the structure of the data by constructing a sample graph and considers the paths which contain the ordinal constraints of the data. Besides, new samples are generated near the boundary of two adjacent classes to soften the ordinal structure of the samples [6]. The Cluster-Based Weighted Over-sampling clusters minority classes at first, and then oversamples them based on their distance, and finally sorts the classes [7]. Synthetic Minority oversampling technique to deal exclusively with imbalanced Ordinal Regression (SMOR) is a direction-aware oversampling algorithm [8], and it can effectively avoid wrong synthetic samples generation by considering the rank of the classes. SMOR computes the selection weight of being used to generate synthetic samples for each candidate generation direction. However, they are both designed in data level for OR. For under-sampling, it may dismiss valuable samples which are decisive for building classifiers. For over-sampling, it increases the probability of overfitting by replicating the minority classes samples.
Instead of focusing on modifying the training set in order to combat class skew, other methods aims at modifying the classifier learning procedure itself, such as cost-sensitive methods [9], ensemble learning algorithms [10], active learning [11], [11]- [14], and one-class learning [15]. Among them, active learning can counteract the harmful effects of learning under imbalanced classes by selecting the most useful samples for the classifier [11]. Since active learning is implemented on a random set of training populations rather than the entire training dataset, it can reduce the computational complexity of dealing with large imbalanced datasets [14]. Moreover, active learning provides a progressive sampling strategy which makes it is possible to adjust a sampling strategy dynamically by observing some indicators of the ease of the different classes [13], and gradually improve the performance of the classifier by selecting the samples that have the most learning value for the current classifier each time. The time complexity of active learning mainly comes from the time it takes to find the best samples from unlabeled samples and retrain. The time complexity of searching most informative sample from unlabeled data has been solved by small pools and early stopping, and the time complexity of retraining has been solved by incremental learning [12]. Active learning not only inherits the advantage of reducing computational complexity (by small pools, early stopping and incremental learning) of under-sampling, but also avoids the disadvantage that under-sampling may delete useful information. It's noteworthy that learning difficulties of the different classes that make up a learning task may be different (in terms of the number of instances required). Therefore, we need sampling strategies that can be adjusted dynamically by observing different learnability indicators. Such sampling strategies can be achieved by active learning [13], which can select the most useful samples for OR classifier by these sampling strategies. Moreover, as shown in Figure 1, we can also reasonably assume the samples inside the different boundaries of OR are balanced [12].
In this article, we propose an active learning method to deal with the imbalanced OR in algorithm level. First, we transform OR to an extended binary classification [16], [17], so that ordinal regression can be achieved by SVM (Support Vector Machine). Then we design a sampling strategy of active learning for OR, and then we adjust the sampling strategy dynamically to get a more valuable and balanced training set from imbalanced data. Finally, we propose new evaluation methods specifically for imbalanced OR to prove the efficiency of our algorithm.
The main contributions of this article are summarized as follows.
1) To the best of our knowledge, our algorithm is the first algorithm for imbalanced OR in algorithm level. 2) We put forward a sampling strategy for OR (AL-OR), and an improved active learning method for imbalanced OR (IAL-IOR). 3) We propose improved evaluation methods to evaluate the performance of imbalanced ordinal regression. We organize the rest of the paper as follows. In section II, we give a brief review on the related works. In section III, we present the transformation of OR to an extended binary classification model, and its SVM based solution.
In section IV, we put forward a sampling strategy for OR and an active learning with balanced sampling process and novel evaluation methods for imbalanced OR. In section V, we carry out our experiments on a variety of datasets and the experimental results are discussed. Finally, in section VI, we give some conclusions.
II. RELATED WORK
In this section, we give a brief overview on ordinal regression and active learning.
A. ORDINAL REGRESSION
Many real world applications present the label ordinal structure and ordinal regression has increased the number of methods and algorithms developed over the last years [2]. VOLUME 8, 2020 Over the past decade, a number of noteworthy research advances have been made in supervised learning of ordinal regression [17]- [19]. Since support vector machines (SVMs) have gained profound interest because of good generalization performance [16], there are several support vector OR (SVOR) formulations proposed to tackle OR problems. Shashua and Levin [20] proposed fixed-margin-based formulation and sum-of-margin-based formulation by finding multiple parallel hyperplanes. Chu and keerthi [19] improved the fixed-margin-based formulation by explicitly and implicitly keeping ordinal inequalities on the thresholds. Cardoso and Costa [21] proposed a data replication method and mapped it into SVM by using the fixed-margin-based formulation implicitly. Li and Lin [16] proposed a reduction framework from ordinal regression to binary classification based on extended examples. This framework allows to design a ordinal regression model based on a binary classification and derive new generalization bounds for ordinal regression from known bounds for binary classification. Moreover, it unifies many existing ordinal regression algorithms. In our paper, we build the ordinal regression by using this reduction framework and SVM. Recently, [22] presented a new Kernel Extreme Learning Machine for ordinal regression (KELMOR) by exploiting a quadratic cost-sensitive encoding scheme to deal with the efficiency of OR in the big data scenario. Reference [23] proposed a novel ordinal regression model, which is named as nonparallel support vector ordinal regression (NPSVOR), a set of possible nonparallel hyperplanes are constructed independently. However, a small number of literatures have considered the imbalanced ordinal regression [24].
Reference [6] creates synthetic samples by considering the distribution and ordering of minority data. The main assumption of this method is that when resampling samples in ordinal regression problem, the ordering of the classes should be considered, and the ordering is generally represented by a latent manifold. To take advantage of this collector, it captures the structure of the data by constructing a pattern-based graph, and considers paths preserve data order constraints when oversampling. In addition, new samples are created at the boundaries between adjacent classes to smooth the ordinal nature of the dataset.
Reference [7] aims to address the imbalanced ordinal regression by clustering the minority classes and over-sampling them based on their distance firstly, and then ordering the relationship with the samples of other classes. The final size of an oversampling cluster depends on its complexity and initial size in order to generate more synthetic instances for more complex and smaller clusters and fewer instances for more complex and larger clusters. An improved agglomerative hierarchical clustering algorithm is proposed to reduce the occurrence of superimposed composite samples during oversampling. Moreover, a new measurement method is proposed to quantify the balance between the complexity of the cluster and the initial size.
B. ACTIVE LEARNING
Active learning as a standard machine learning problem, has been extensively studied in many research filed. Based on different sample strategies, they can be grouped into these categories [25]: 1) uncertainty sampling where an active learner queries the samples about which it is least certain how to label. 2) Query-By-Committee which involves maintaining a committee, and the most informative query is considered to be the samples about which they most disagree. 3) Expected Model Change where an active learner queries the samples that would impart the greatest change to the current model. Moreover, more and more studies focus on the active learning for imbalanced data [11]- [14], [26].
Reference [12] assumes that samples inside the boundaries are balanced, and active learning is used to choose samples in the boundaries so that the learner has a more balanced training set. Completely active learning is used to solve the imbalanced problem and the experimental results show that active learning implements fast solutions with competitive prediction performance in imbalanced classification. Meanwhile, it assumes that samples inside the boundaries are balanced, as shown in Figure 1, and active learning is used to choose samples in the boundaries so that the learner has a more balanced training set.
A co-selecting method is proposed in [26] which uses twofeature-subspace classifiers to choose the balanced samples by adjusting a sampling strategy dynamically from imbalanced sentiment data. Experiments of four domains demonstrate great potential and effectiveness of the approach for imbalanced sentiment classification.
Reference [27] analyses the effect of resampling techniques used in active learning for word sense disambiguation. It's worth noting that the technique does not requires architecture or learning algorithms modification, which makes them very easy to use and extend to other areas. Experimental results show that under-sampling causes negative effects on active learning, but over-sampling is a relatively good choice.
Reference [14] proposes an ensemble-based active learning algorithm to tackle the medical diagnosis imbalance problem. The artificial data is created according to the distribution of the training set to make the ensemble diverse, and the random subspace re-sampling method is used to reduce the data dimension. In selecting member classifiers based on misclassification cost estimation, the minority class is assigned with higher weights for misclassification costs, while each testing sample has a variable penalty factor to induce the ensemble to correct current error. Experimental results show that compared with other ensemble methods, the proposed method has best performance, and needs less labeled samples.
III. SVM BASED OR SOLUTION
In this section, we first transform OR to an extended binary classification problem, and then give a SVM based solution.
A. OR AS AN EXTENDED BINARY CLASSIFICATION
The problems that OR handles can be described as follows: when an input vector x is given, we can get a label y, where x ∈ X ⊆ R d and y ∈ Y = {C 1 , C 2 , . . . , C K }, i.e., x is a sample of a d-dimensional input space, and y is one of K different labels, where C 1 < C 2 < . . . < C K . Suppose that OR is a threshold model, then a K ordinal classes problem has K − 1 ordered thresholds: θ 1 < θ 2 < . . . < θ K −1 [28]. Thus, a sample x is considered as class C i when the predictive function h(x) = w T x − b falls between θ i−1 and θ i , where w ∈ R d and b is a offset, and θ 0 = −∞ and θ K = ∞ are typically assumed. For example, the output of a class label C 3 should fall between θ 2 and θ 3 .
OR as an extended binary classification can be formed as: where e k ∈ R K −1 denotes a (K − 1)-dimensional vector whose kth element is 1 and the rest of the elements are 0, the function I [·] is an indicator function which will return 1 if the inside condition holds, otherwise zero will be returned.
The weight vector can be used to predict y k i such that wx k i = (w, −θ)x k i = w T x i − θ k . Therefore, the θ k can be calculated by feature extension. Finally, the label of each OR sample can be predicted as: where g(
B. SVM BASED SOLUTION FOR OR
Given the original OR dataset L = {(x 1 , y 1 ), . . . , (x L , y L )}, we can extend the datasets L into the corresponding dataset of binary classification and y k i ∈ 1, −1. Thus, we can minimize the structural risk function by [29] and [16] as the following primal problem: where φ denotes kernel function, C is a positive number and the ξ k i denotes slack variables that allow x i to have some error at the kth boundary. The kernel function in Equation (4), Algorithm 1 AL-OR Input: Labeled data L and Unlabeled data U Output: The OR model 1: for i < N do 2: Learn an OR classifier using current L 3: Use the classifier to predict the unlabeled data U 4: Use sampling strategy as Equation (8) shows to select informative samples for manual annotation 5: Move the informative samples from U to L 6: end for 7: return An OR classifier makes the decision function become nonlinear by virtue of the kernel trick [30].
By introducing lagrangian α k i and µ k i , the dual form of the minimization problem in Equation (4) becomes: is the resultant kernel evaluation of x k 1 i and x k 2 j . In this way, the theoretical rigor of SVM is inherited, and moreover, typical caching and optimization techniques such as SMO [31], [32] can also be used in OR [28].
IV. IMPROVED ACTIVE LEARNING FOR IMBALANCED OR
In this section, we first put forward a sampling strategy of active learning for OR. Then we design a balanced active learning method for imbalanced OR. Finally, we introduce two improved evaluation indicators for imbalanced OR. The general flow of the algorithm is shown in Figure 2.
A. ACTIVE LEARNING FOR OR
It is easy to collect large number of unlabeled data in many real-world applications, so that effective pool-based active learning, as shown in Algorithm 1, becomes more and more important [33]. The most critical step of pool-based active learning is how to evaluate informative samples.
The most commonly used technique in active learning focuses on selecting samples from the area of uncertainty (the area closest to the prediction decision boundary of the current model), and many exiting popular techniques are specializations of uncertainty selecting, including query-bycommittee-based approaches [34]- [36]. Therefore, the most frequently used active learning strategy in SVM is to check the distance of each unlabeled sample to the hyperplane, by which the most informative samples are decided for the VOLUME 8, 2020 learner [37]. We can get the parameter of ordinal regression as Equation (4) shows: Sample x i will be extended to K − 1 samples by Equation (1), therefore a sample's confidence can be calculated as: As Equation (7) shows, we can calculate the distance from the extended samples x k i to the boundary by | (w, −θ )x k i − b |. The minimum of K − 1 distances represents the distance between the sample x i and the decision surface of the final category. Then, we can get the most informative sample from the unlabeled data: According to Equation (8), we can get our AL-OR algorithm as shown in Algorithm 1.
B. BALANCED ACTIVE LEARNING FOR IMBALANCED OR
The sampling strategy for OR proposed before just find the informative samples from the entire unlabeled data. This may aggravate the imbalance rate in labeled data if the informative samples are always in majority classes. To balance the samples in different classes, once a majority classes sample is chosen by Equation (8), we will choose an informative sample from the least class. Algorithm 2 illustrates the improved active learning for imbalanced OR (IAL-IOR) in detail.
The main difference of our algorithm from the original active learning is step 6-8. If the most informative sample in current unlabeled dataset U belongs to the majority classes in current labeled dataset L, a sample of the minority classes in current L will be added by manually annotating the most informative sample of the same class in U . After many iterations, the samples in labeled dataset L will be more balanced. Learn an OR classifier using current L 3: Use the classifier to predict the unlabeled data U 4: Use sampling strategy as Equation (8) shows to select the most informative sample x in 5: Put the x in into set A 6: if x in belongs to a majority class then 7: Choose the most informative sample from the least class 8: end if 9: Manually annotate the samples in A 10: Move the informative samples in A from U to L 11: end for 12: return A balanced OR classifier
C. EVALUATION METHODS FOR IMBALANCED OR
Making proper comparisons between classification models is a complex and unsolved challenge. This task depends not only on the understanding of errors, but also the nature of the problem itself. To avoid erroneous biases in the assessment, evaluation indicator error rate is designed to evaluate the accuracy of classification algorithms, while for OR, the mean absolute error (MAE) may be a more reasonable metric [38]: where N is the number of samples. Small errors are not as important as large errors in OR. For example, when a student's true grade is A, being predicted as B is more acceptable than being predicted as F. However, error rate and MAE may be deceptive in imbalanced situation. For example, for a given dataset with 10 percent of the samples belong to the minority class and 90 percent of the samples belong to the majority class, if a classifier predicts every sample to be the majority class, it will be evaluated to have an accuracy of 90 percent by error rate. It is obviously that the classifier may be ineffective to the minority class. In research area, other evaluation indicators are used to provide a comprehensive assessment of imbalanced problems, such as Recall. It is defined as the following formula in binary classification: where TP is the number of positive samples correctly predicted and FN is the number of positive samples predicted as negative. Equation (10) can be improved for OR (also can be used in multi-classification) as: where Recall k denotes the recall rate in class k. There are also some well-accepted measures for imbalanced OR, the average mean absolute error (AMAE) [39] and and the maximum mean absolute error (MMAE) [40]: where MAE k is the MAE of class k. The MAE can also be improved for OR as: where N is the size of dataset and N k is the size of class k. N N k denotes the weight of different classes. It is easily to prove that Equation (14) is equivalent to MAE when the dataset is balanced.
V. EXPERIMENTS
In this section, we first introduce the experimental setups, and then present our experimental results and discuss.
A. EXPERIMENTAL SETUP 1) DESIGN OF EXPERIMENTS
As an active learning algorithm, we should verify the availability of our sampling strategy, and as an algorithm for imbalance OR, we should show the generalization of our algorithm.
To show the availability of our sampling strategy, we compare the performance of our algorithms (including AL-OR and IAL-IOR) with the random sampling (randomly select query samples). To show the generalization of our algorithm, we compare the performance of our algorithms with the stateof-the-art imbalance methods (including under-sampling (US) and an over-sampling methods (SMOTE [4])) and recent proposed imbalanced methods (including SMOR [8] and SMOM [41]). The performance of all algorithms will be evaluated by general measures such as accuracy, MAE, and measures especially for imbalanced OR, such as AMAE, MMAE, Recall m and MAE im .
2) IMPLEMENTATION DETAILS
We implement our algorithms and the other comparison experiments in MATLAB. Experiments are run on a 2.4-GHz Intel Xeon machine with 128-GB RAM. The active learning algorithms we designed are based on the OR classifier mentioned in Section 3. To be fair, all the base classifiers of the next comparison algorithms are this OR classifier. This classifier mainly involves two parameters: kernel functions and C. The kernel is Gaussian kernel K (x 1 , 2 ) with k = 0.1, and the C is fixed to 10 respectively. The values for the parameters of SMOR are: k = 2, w = 0.25 and r = 1/4. The nearest neighbor parameter k in SMOTE is set to 5. The parameters of SMOM is set as follows: k1 = 12, k2 = 8, rTh = 5/8, nTh = 10, w1 = 0.2, w2 = 1/2, r1 = 1/3 and r2 = 0.2. The code of SMOR [8] and SMOM [41] is available at the website https://github.com/zhutuanfei/SMOR, and the code of our algorithms is available at the website https://github.com/gjmrookie/active-learning-forimbalanced-OR.
For the experiments to verify the availability of our sampling strategy, we divide the original dataset into a test set and a training set at a ratio of 4: 1, and the samples proportions of different classes in the training and test sets are the same as in the original dataset. In the training set, we set different size of L to ensure only a small amount of data at the beginning of training. Random sampling and AL-OR take 10 samples per generation. For the experiment to verify the generalization of our algorithm, the original dataset is also divided into a test set and a training set as above.
3) DATASETS
The datasets used in our experiments are summarized in TABLE 1. The first three benchmark datasets are used for metric regression problems. Equal-length merging are used to discrete target values into ordinal numbers by dividing the range of target values into a given number of equal-length intervals [42]. The last twelve benchmark datasets are OR datasets from the real world. the test data by running 20 trials. The results show that our algorithms (AL-OR and IAL-IOR) are much better than random sampling, and our algorithms converge early. The performance of AL-OR demonstrates that active learning can deal with the class imbalanced problems. The performance of IAL-IOR is even better, however it is less effective because the samples added per generation is limited. two findings: 1) our algorithms have a similar or even better generalization than OR; 2) the accuracy and MAE can not reflect the accuracy on the minority classes. Our algorithms outperform the OR and the standard imbalanced algorithm on the AMAE, MMAE, Recall m and MAE im , which means our algorithms are better than under-sampling and SMOTE on most datasets. It also means the completely active learning is an efficient method for imbalanced OR.
In addition, we quoted two evaluation methods: Bayes SignTest and Bayes Signrank Test [43], and applied them to our three experimental indexes (accuracy, AMAE and MAE im ) for verification. The Bayes Signrank results are shown in TABLE 5 and the Bayes Signtest results are shown in TABLE 6. The comparison results of each pair of algorithms have three indicators: left, rope and right, where left represents the probability that Classifier 1 is superior to Classifier 2, rope represents the probability that the algorithms are equivalent, and right is the opposite of the first case. The experimental results come from 20 trails on 15 data sets. According to the experimental VOLUME 8, 2020 results, our active learning algorithm designed specifically for imbalanced OR (IAL-IOR) performs the best. Active learning for imbalanced OR (AL-OR) and the compared oversampling algorithms (SMOR, SMOM and SMOTE) have similar generalization performance. Both algorithms are definitely better than undersampling algorithm (US).
From Figure 3 and Figure 4, we can conclude that the proposed algorithms have faster convergence and better generalization ability on Recall m and MAE im . Our algorithms have the generalization ability similar to the classical methods (US, SMOTE) and the recently proposed (SMOM, SMOR) methods in general evaluation (accuracy, MAE) measurements, but have more excellent results under the evaluation measurements (Recall m , MMAE, AMAE, MAE im ) for imbalance problems. In TABLE 5 and TABLE 6, we also proved the effectiveness of our algorithms through statistical analysis.
VI. CONCLUSION
In this article, we put forward a sampling strategy for ordinal regression (AL-OR) and design a improved active learning for imbalanced ordinal regression (IAL-IOR). Firstly, we convert the ordinal regression problem to a binary classification problem. Secondly, we design a sampling strategy for ordinal regression and a balanced active learning based on the sampling strategy. In order to get more reasonable evaluations, we design improved recall Recall m and the mean absolute error MAE im for imbalanced ordinal regression. Moreover, we conduct experiments to compare our algorithms with other algorithms on benchmark datasets. The results show that the proposed AL-OR and IAL-IOR both can be used to deal with the class imbalance problem in OR efficiently. | 6,553.4 | 2020-01-01T00:00:00.000 | [
"Computer Science"
] |
Provenance of the Incipient Passive Margin of NW Laurentia (Neoproterozoic): Detrital Zircon from Continental Slope and Basin Floor Deposits of the Windermere Supergroup, Southern Canadian Cordillera
The origin of the passive margin forming the paleo-Paci fi c western edge of the ancestral North American continent (Laurentia) constrains the breakup of Rodinia and sets the stage for the Phanerozoic evolution of Laurentia. The Windermere Supergroup in the southern Canadian Cordillera records rift-to-drift sedimentation in the form of a prograding continental margin deposited between ~ 730 and 570Ma. New U-Pb detrital zircon analysis from samples of the post-rift deposits shows that the ultimate source area was the shield of NW Laurentia and the near uniformity of age spectra are consistent with a stable continental drainage system. No western sediment source area was detected. Detrital zircon from postrift continental slope deposits are a proxy for ca. 676-656Ma igneous activity in the Windermere basin, likely related to continental breakup, and set a maximum depositional age for slope deposits on the eastern side of the basin at 652 ± 9Ma . These results are consistent with previous interpretations. The St. Mary-Moyie fault zone near the Canada-U.S. border was most likely a major transform boundary separating a rifted continental margin to the north from intracratonic rift basins to the south, resolving north-south variations along western Laurentia in the late Neoproterozoic at approximately 650-600Ma. For Rodinia reconstructions, the conjugate margin to the southern Canadian Cordillera would have a record of rifting between ~ 730 and 650 Ma followed by passive margin sedimentation.
Introduction
The late Neoproterozoic record of the breakup of the Proterozoic supercontinent Rodinia for western Laurentia (ancestral North America) is in the Windermere Supergroup [1][2][3][4][5]; an unconformity bounded, mostly siliciclastic succession that crops out over 4000 km from northern Mexico to the Yukon-Alaska border ( Figure 1). The Windermere Supergroup (WSG) is associated with a still poorly understood late Proterozoic continental margin that was a precursor to the western edge of Laurentia through the Phanerozoic. To resolve this margin, it is important to recognize significant differences in Windermere basin types from north to south: in the southern Canadian Cordillera (SCC), the WSG is a rift-to-drift continental margin succession [2], whereas correlative strata in the western U.S. were deposited within intracratonic rift basins (e.g., [1,4]).
In the SCC, the WSG has at its base rift deposits that are overlain by a thick basin floor to slope to shelf succession, a motif that is interpreted as a prograding continental margin with no western boundary [2]; consistent with limited U-Pb detrital zircon provenance data indicating sediment derivation from the Laurentian craton (e.g., [6,7]). Discussion of a potential western edge to the basin is hypothetical (see [8]), but the Mesoproterozoic Belt Basin should be a proxy for a western sediment source because it contains abundant non-Laurentian 1.6-1.5 Ga detrital zircon derived from a "western craton" (e.g., [9,10]). Previously published detrital zircon data from post-rift strata of the WSG in the SCC do not have these ages, but are limited to older very low-n analyses from the 1990s (e.g., [7,11]), and three samples using higher-n methods [6,12]. Here, we employ modern techniques and a sampling strategy to cover most stratigraphic units in the SCC from syn-rift to postrift basin floor and slope depositional elements spanning eastern and western outcrop areas to test basin-scale sedimentary provenance. Anchored by the new U-Pb detrital zircon dataset, our discussion about the divergent tectonic setting in the Neoproterozoic is limited to the WSG, which is older than ca. 570 Ma in the SCC, and does not address subsequent extensional tectonic events in the latest Neoproterozoic and Cambrian (e.g., [13,14]). (Figure 1). Postorogenic uplift of the Grenville orogen (1.2-1.0 Ga) in eastern Laurentia dispersed 1.5-1.0 Ga detrital zircon across the continent to be deposited in intracontinental basins between~1.1 and 0.72 Ga [15]; although there may also have been a source west of the Canadian Cordillera prior to the breakup of Rodinia (e.g., [11]). These basins form sequence B and are a reservoir of 1.5-1.0 Ga detrital zircon (e.g., [16]). During the breakup of Rodinia, rifting along western Laurentia led to the deposition of sequence C after 0.72 Ga. The WSG represents sequence C, and in terms of detrital zircon, sequence C often comprises sequence A or B signatures, suggesting the recycling of the older orogenic and basin history of western Laurentia (e.g., [12]).
Windermere Depositional System (SCC): Rift Basin to
Continental Margin. The Neoproterozoic WSG of western North America is an unconformity bounded succession that crops out over 4000 km from northern Mexico to the Yukon-Alaska border [17]. In the southern Canadian Cordillera (SCC), the basal unconformity of the Windermere Supergroup is overlain by an up to 2 km thick synrift succession of intercalated siliciclastic sedimentary rocks and local mafic volcanic rocks (e.g., [2]) ( Figure 2) that are younger than ca. 736-728 Ma [18,19]. The volcanic rocks are correlated to the Gataga volcanics in the northern Canadian Cordillera, ca. 696-690 Ma [20], and the broader ca. 700-680 Ma range of rift volcanic rocks in Idaho (e.g., [1,21]). Detrital zircon age peaks recycled into Cambrian strata suggest that rift-related volcanism in the SCC spanned an approximate age range of 700-640 Ma [12,14,22].
The syn-rift package in the SCC is overlain by a 5-7 km postrift succession of siliciclastic with lesser carbonate rocks ( Figure 3). The WSG post-rift succession consists primarily of deep-marine strata that, because of deformation related to the Mesozoic Cordilleran orogeny, crop out extensively over~35,000 km 2 , which if conservatively palinspastically reconstructed represents a turbidite system of at least 80-85,000 km 2 , and therefore, dimensionally consistent with modern passive margin systems like the Mississippi, Congo, and Amazon turbidite systems [23]. In the eastern part of the outcrop belt, upper slope deposits, including submarine canyons filled with coarse clastic sediment flanked by fine-grained continental slope deposits crop out (Arnott and [24]). Toward the northwest, and over distances of hundreds of kilometers, these strata pass into slope and base-of-slope deposits populated locally by thickly developed (up to~200 m thick) leveed-channel complexes, and then sheetlike, sandstone-rich basin-floor strata in the northwest part of the outcrop belt [25]. Above the slope, facies are carbonate and siliciclastic strata that were deposited in shallow marine environments (e.g., [26]) (Figures 2 and 3).
Early studies recognized similar stratal successions of the WSG across the SCC, commonly turbiditic sandstone at the base overlain mostly by mudstone (now phyllitic or schistose) and capped by shallow-marine carbonates, and interpreted a single grit-pelite-carbonate sequence (e.g., [27]). Work by Pell and Simony [28] suggested that this succession consisted of two lithologically similar and southeastwardtapering wedges related to two discrete episodes of crustal extension. Ross (1991; see also discussion in [23]) reasserted that the succession was a single postrift stratal unit and interpreted a progressive upward change from deep basin floor to continental slope to continental shelf sedimentary rocks-a several-kilometer-scale, upward-shoaling trend interpreted to reflect the progradation of passive margin Laurentia into the developing Pacific miogeocline. Owing to the absence of biostratigraphic control and only poor 3 Lithosphere geochronological control, the occurrence of a distinctive marker unit, termed the Old Fort Point Formation (OFP), is central to this interpretation. Originally described by Charlesworth et al. [29] from the Jasper area in the Rocky Mountains of west-central Alberta, the lithologically and geochemically distinctive OFP was shown by Smith et al. [30], and interpreted time equivalent strata in the northern Canadian Cordillera [31], to form a single stratigraphic marker that traces-out the continent-margin clinoform marking the western margin of northern Laurentia, confirming the single wedge interpretation of Ross [2]. Moreover, a Re-Os date of 607:8 ± 4:7 Ma [32] from black shales in the Geike Siding Member provides a direct date for the OFP marker and is the only radiometric date within the postrift sedimentary pile.
A second, but less definitive, age constraint from the postrift succession is provided by Cochrane et al. [33], who suggest that an almost 200-thick mixed carbonatesiliciclastic succession with stable isotope values as low as -6 o / oo in the Isaac Formation represents deposition immediately preceding the onset of the 580 Ma Gaskiers glaciation [34]. More recently, Canfield et al. [35] correlated this same succession with the younger (571-562 Ma) Shuram-Wonoka anomaly-the deepest and longest carbon isotope anomaly in geological history [36]. However, as pointed out below, similar-aged volcanic rocks occur >3 km above this succession and also above the unconformity that caps the Windermere Supergroup, which makes this correlation problematic (see next).
In southeastern British Columbia (B.C.), the WSG is overlain unconformably by an intercalated assemblage of sedimentary and volcanic rocks of the Hamill Group with a U-Pb zircon date of 569:6 ± 5:3 Ma [8]. The upper Windermere Supergroup in the SCC, therefore, is older than ca. 570 Ma. Problematic then is the reported occurrence of a Cloudina-Namacalathus assemblage in carbonate rocks of the Byng Formation in the Jasper area [37,38], which according to Mountjoy [39] immediately underlies the unconformity at the top of the WSG. Although examples of Cloudina have been reported in rocks as old as 620-590 Ma [40,41], the Cloudina-Namacalathus assemblage has only been reported from~550-541 Ma rocks (e.g., [42,43]), which then postdate volcanic rocks in the Hamill Group. The Byng Formation was assigned to the Windermere Supergroup before U-Pb geochronological data from the Hamill Group were available and therefore its assignment to the Windermere Supergroup should be reassessed, with the possibility that the Byng Formation may instead be part of the Hamill Group, and is in turn overlain unconformably by Cambrian rocks of the Gog and McNaughton groups (e.g., [13,14]). This sub-Cambrian unconformity is related to at least a second episode of extensional tectonism, which, with accompanying subsidence, initiated the economically important Western Canada Sedimentary Basin (e.g., [44]).
Transverse faults that are coeval with deposition of the Windermere Supergroup are recognized in the SCC, such as the St Mary-Moyie fault zone near the Canada-U.S. border that demarcates the southern limit of the deep marine outcrop belt [45]. Moreover, profound facies changes within the Windermere Supergroup occur across other transverse faults in the SCC providing further evidence of syndepositional fault activity [46,47]. At a continental scale, the SW-NE faults are interpreted as initial rift and then continental margin segments with transform boundaries that separated upper and lower divergent plate boundaries (e.g., [1]).
Detrital Zircon U-PB Methods and Results
Locations for 19 detrital zircon samples are shown in Figure 1, and the sample list is shown in Figure 3: Schematic of the Windermere Supergroup in the southern Canadian Cordillera as a mixed siliciclastic-carbonate rift-to-drift continental margin system [23], showing the depositional context for sampled stratigraphy. are described in Matthews and Guest [48] and are expanded upon in the Supplementary methods and results file (available here). Preferred ages are 206 Pb/ 238 U for dates <1500 Ma and 207 Pb/ 206 Pb for dates >1500 Ma. Measurements with <5% probability of concordance were filtered from the dataset. A total of 5697 U-Pb analyses of detrital zircon grains yielded 2096 dates that passed our filtering criteria (Table DR1). Poor analytical efficiency (average n = 110) is the result of widespread recent Pb-loss (Table S1, Figures S1-2) that affected most of the samples (see supplementary information for further discussion (available here)), and it is for this reason that we used the restrictive <5% probability of concordance filter. U-Pb detrital zircon age distributions generally group into two sample sets that correspond to the syn-rift and post-rift subdivisions of Ross [2] (Figure 4).
Syn-Rift Provenance.
The syn-rift sample set has a detrital zircon signature with prominent 1.5-1.0 Ga age probability that is a characteristic of sequence B strata and also consistent with 1.4-1.0 Ga detrital zircon from equivalent syn-rift deposits of the Windermere Supergroup associated with the Gataga volcanics to the north ( [20]; Figure 1). Considering the restricted detrital zircon age signature in conjunction with the depositional setting and the many coarse and angular clasts derived from the underlying strata [45] suggests that sedimentary provenance was probably local. The local source is likely from erosion of the unconformity bounded strata at the top of the Purcell Supergroup in the SCC that Root [51] considers to be part of Proterozoic sequence B. Such lower Neoproterozoic strata would correlate to those above the Belt Supergroup in the northwestern U.S., such as the Buffalo Hump Formation, that are dominated by Mesoproterozoic detrital zircon [9,52,53].
Post-Rift Continental Margin Provenance. Post-rift samples from all stratigraphic units hosting basin floor and continental slope deposits (Kaza, Miette, and Horsethief
Creek groups and the Isaac Formation) (Figures 2, 3, and 4), at all sampling locations in Figure 1, produced the bimodal "archetype" detrital zircon signature that is so characteristic of northwestern Laurentia (e.g., [12]) (Figure 4). These results are compared to detrital zircon spectra of the Belt-Purcell Supergroup because the Windermere Supergroup unconformably overlies the Belt-Purcell Supergroup so sedimentary recycling of detrital zircon should be considered, and the Belt-Purcell Supergroup is a proxy for sampling of basement rocks by a large sedimentary system [9,10,54,55]: (1) a since-displaced western craton (1.6-1.5 Ga), (2) Laurentian basement from the western U.S. (<1.7 Ga), and (3) the Laurentian craton mainly from western Canada (prominent 1.95-1.75 Ga). The southern and northwestern areas of the Belt Basin have detrital zircon age ranges that are younger than those in the post-rift Windermere (Figure 4), eliminating the southern Belt Basin, the western US, and the western craton as source areas. Samples from the northeastern Belt Basin with U-Pb detrital zircon age probabilities that overlap with the postrift Windermere sample set identify a potential proximate source of recycled zircon and further support provenance from 1.9-1.8 Ga orogens of the northwestern Laurentian craton (Figure 1). In summary, the main provenance for the postrift continental margin system is (1) recycling of the potential eastern extent of the Belt Basin over the craton, since eroded, and (2) ultimately the Laurentian craton of western Canada.
In addition to the typical bimodal distribution of the postrift assemblage, one sample from slope channel deposits of the Miette Group at the Jasper locality (Figure 1; 17-JSP-1) yielded a significant number (26) of Neoproterozoic grains with a mean age of 665:8 ± 9:7 Ma (Figure 3). Prior to this result, grains of this age had only been reported from Cambrian strata and were inferred to have been recycled from the Windermere Supergroup [12,22]. This sample confirms a pulse of igneous activity in the Windermere Basin between ca. 676 and 656 Ma and indicates that submarine fan sedimentation is younger than the MDA of 652:2 ± 8:9 Ma.
3.3. Basin Tectonics. The extreme magnitude of subsidence to plunge a continental setting to depths forming a large basin-floor and slope turbidite system, overlain by a topset of shelf deposits, is the hallmark for initiation of a passive continental margin (e.g., [56]). In terms of reconstructing the Neoproterozoic-early Paleozoic passive margin of western Laurentia, the Windermere Basin is one of the few places where the scale of subsidence that occurs after breakup is well displayed, even so it was underlain by continental crust, and has been translated eastward by over 100 km during the Cretaceous (e.g., [46]). The syn-rift Irene volcanics remain undated but likely correlate with ca. 700-680 Ma volcanic rocks reported elsewhere (see above). A later pulse of magmatism at~670-640 Ma is reported from the western US and northwestern Canada [57,58]. Data here confirm that slope channel deposits locally contain sediment from a ca. 676-656 Ma igneous source and are younger than 652:2 ± 8:9 Ma, and we speculate that the timing of breakup and rapid continental margin subsidence was approximately 650 Ma. Post-rift subsidence resulted in the deposition of submarine fans, with at least one matching the scale of [9,54]; and Yukon-Tanana terrane basement, Piercey and Colpron [59], Dusel-Bacon et al. [60]. 6 Lithosphere present-day passive margin turbidite systems [23], and then the fan-slope-shelf system prograded to build much of the continental shelf prior to 570 Ma. The southern limit of this well preserved continentmargin wedge coincides with a transverse structural zone that separated segments of the western North American passive margin [1,46]. Lis and Price [45] recorded 9 km of Windermere strata, including boulder conglomerate with Purcell Supergroup clasts, on the north side of the approximately east-west-trending St. Mary Fault in southeast B.C. (Figure 1), but Lower Cambrian overlying Purcell Supergroup strata on its south side. South of the fault zone, in the northwestern U.S., strata of the Windermere Supergroup were deposited within rift basins and continental breakup either occurred after deposition of the Windermere Supergroup or the rifted margin lay farther to the west (see discussion, [1,4]). Accordingly, the St. Mary-Moyie fault zone could mark the southern edge of the newly formed continental margin associated with the Windermere Supergroup in the SCC, or the margin simply stepped westward to the south.
To the north of the SCC, continental margin strata of the Windermere Supergroup extend into northern BC and the southern Yukon (Figure 1). The basement of Yukon Tanana terrane includes the Snowcap Assemblage with 2.0-1.8 Ga detrital zircon typical of the Laurentian craton [59], and in Alaska the equivalent metasedimentary basement has the same detrital zircon character [60] (Figure 4). Those studies suggest that the continental margin associated with the Windermere Supergroup may have extended northward.
Conclusion
Previous work has shown that the Windermere Supergroup in the southern Canadian Cordillera has the scale and depositional architecture of a passive continental margin, for which the basal rift deposits are younger than~730 Ma, and topset shelf facies are older than 570 Ma. Our U-Pb detrital zircon analysis indicates that the detrital zircon provenance of two syn-rift samples is interpreted as local recycling from remnants of Neoproterozoic Sequence B strata. Detrital zircon samples of the post-rift assemblage of basin-floor and slope facies (17) yielded 1724 U-Pb ages that are dominated by 1950-1750 Ma age probability peaks suggesting that (1) there was no sediment input from the western Belt Basin or a western craton; (2) areas far to the south of the Canada-U.S. border are unlikely source areas; and (3) the two main options for sources of detrital zircon in the Windermere Supergroup are the since eroded eastern extent of the Belt Basin as a recycled source, and the Laurentian craton of western Canada as the ultimate source. A near depositional age fraction confirms a pulse of igneous activity in the Windermere Basin at ca. 676-656 Ma and indicates that postrift slope deposition is younger than 652 ± 9 Ma. These new detrital zircon data are consistent with an incipient passive margin setting for the Windermere Supergroup in the SCC. To reconcile different tectonic settings across the Canada-U.S. border during deposition of the Windermere, the St. Mary-Moyie fault zone was likely part of a major transform boundary separating the southern edge of the rifted continental margin in western Canada from intracratonic rift basins in the western U.S.
Data Availability
The supporting data are submitted to the GSA data repository with this manuscript.
Conflicts of Interest
The authors declare that they have no conflict of interest.
Supplementary Materials
Supplementary materials include a data table (DR1) and a supplementary U-Pb geochronology methods and results file. Figure S1: condordia diagram for sample 17-MQ-1. Figure S2: discordance versus U concentration for rejected measurements. Table S1: upper and lower intercepts of binned measurements from 17-MQ-1. Figure S3: photomicrograph of zircon grains that yield discordant dates. Figure S4: maximum depositional age plot and concordia diagram. | 4,526.2 | 2021-10-28T00:00:00.000 | [
"Geology"
] |
Design and Performance of a Compact Air-Breathing Jet Hybrid-Electric Engine Coupled With Solid Oxide Fuel Cells
A compact air-breathing jet hybrid-electric engine coupled with solid oxide fuel cells (SOFC) is proposed to develop the propulsion system with high power-weight ratios and specific thrust. The heat exchanger for preheating air is integrated with nozzles. Therefore, the exhaust in the nozzle expands during the heat exchange with compressed air. The nozzle inlet temperature is obviously improved. SOFCs can directly utilize the fuel of liquid natural gas after being heated. The performance parameters of the engine are acquired according to the built thermodynamic and mass models. The main conclusions are as follows. 1) The specific thrust of the engine is improved by 20.25% compared with that of the traditional jet engine. As pressure ratios rise, the specific thrust increases up to 1.7 kN/(kg·s−1). Meanwhile, the nozzle inlet temperature decreases. However, the temperature increases for the traditional combustion engine. 2) The power-weight ratio of the engine is superior to that of internal combustion engines and inferior to that of turbine engines when the power density of SOFC would be assumed to be that predicted for 2030. 3) The total pressure recovery coefficients of SOFCs, combustors, and preheaters have an obvious influence on the specific thrust of the engine, and the power-weight ratio of the engine is strongly affected by the power density of SOFCs.
INTRODUCTION
Combustion engines in aviation sectors are partly responsible for air pollution and carbon dioxide (CO 2 ) warming impacts Schafer et al., 2019). Widespread electrification of vehicles can contribute to mitigating the damage caused by the power systems (Needell et al., 2016). Fuel cells are advanced and highly efficient energy conversion equipment and can reduce greenhouse gas emissions (Baldi et al., 2019). Newman (Newman, 2015) concluded that proton exchange membrane fuel cells (PEMFC) and solid oxide fuel cells (SOFC) are the only two feasible energy source devices for aerospace applications when the weight and power of fuel cell systems are taken into account. The power density of PEMFCs is improved to a large degree recently, which is beneficial to the propulsion system. However, the production, transportation, and storage of hydrogen are not easy, and noble metal catalyst is needed for the PEMFCs. SOFC can be fueled by traditional hydrocarbon fuel (Chen et al., 2018) and integrated with gas turbines to improve thermal efficiency and power density (Fernandes et al., 2018).
The power density of SOFC gas turbine hybrid systems is small, compared with that of the traditional combustion engines (Collins and McLarty, 2020). Therefore, the system was proposed to apply to unmanned aerial vehicles (UAV), commuter airplanes, and distributed propulsion airplanes. This type of aircraft is sensitive to emission and specific fuel consumption. The advantage of the propulsion system in thermal efficiency can be shown when the weight of the oil load is further higher than that of the power system, which means that the endurance of aircraft is long. (Himansu et al., 2006) first proposed that SOFC gas turbine hybrid systems can serve as core engines of UAVs with high altitude long endurance (HALE) aerospace missions. (Aguiar et al., 2008) showed that the generation efficiency of the hybrid system would be improved by using three fuel cell stacks instead of one stack. Further study found that the preheating requirement for cold atmosphere air and liquid hydrogen is huge when flight altitude is high up to 15-22 km (Tarroja et al., 2009). Commuter airplanes are promising in the civil sector. (Stoia et al., 2016) revealed that the SOFC gas turbine hybrid system is suitable to provide power for all-electric aircraft. It has comparative advantages over internal combustion engines in emission, noise, efficiency etc., even though the power-weight ratio of the hybrid system is low as 300 W/kg. also pointed out that the hybrid system can achieve efficiency in excess of 60% by configuring a hot recycle blower. Moreover, (Woodham et al., 2018) completed the safety analysis for the hybrid power system. (Okai et al., 2012;Okai et al., 2015) built an analytical model of a SOFC gas turbine hybrid power system for a blended wing body distributed propulsion aircraft. The authors showed that weight reduction would be key technology if the engine is expected to come into service. Moreover, the weight problems will be mitigated if the SOFC gas turbine hybrid core is fueled by multi-fuel instead of sole hydrogen fuel (Okai et al., 2017). (Valencia et al., 2015) found that the use of SOFC gas turbine hybrid systems fueled by liquid hydrogen could contribute to reducing by 70% thrust specific fuel consumption on the aircraft with distributed propulsors and boundary layer ingestion, but the weight of the aircraft will increase 40%. (Yanovskiy et al., 2013) showed that the aviation engine with SOFCs is promising by improving fuel cell technologies, even though its weight is high. (Chakravarthula and Roberts, 2017) showed that the SOFC hybrid system for a typical commercial flight outperforms the conventional turbogeneration in both endurance and power-weight ratio at cruising altitude. (Papagianni et al., 2019) showed that the SOFC gas turbine hybrid system could provide 12% in fuelsaving under cruise conditions. (Evrin and Dincer, 2020) evaluated an integrated SOFC system for medium airplanes, which has overall energy and exergy efficiencies of 57.53% and 47.18%, respectively. An engine composing of compressors, SOFCs, and nozzles for high altitude long endurance UAVs was proposed in our previous work (Ji et al., 2019b), which are remarkably different from traditional SOFC gas turbine hybrid systems for aircraft (Ding et al., 2020). The compressor is powered by SOFCs rather than turbines. There are no turbines in the engines. Therefore, the combustion temperature can be further improved. The specific power of the engine is high, but its weight is also huge. Finding a configuration that presents a trade-off between the thrust specific fuel consumption reduction and weight increment is a crucial problem for the engine.
The novelty of this paper is as follows. A compact airbreathing jet hybrid-electric engine coupled with SOFCs fueled by liquefied natural gas is proposed and studied. The main difference between this paper and our previous work (Ji et al., 2020) is system configuration. In this work, an air preheater is integrated with the nozzle. The exhaust in the nozzle expands while exchanges heat with cold compressed air. In addition, a heat-exchanger is integrated with combustors to preheat fuel. In a nutshell, the weight of the engine is decreased, and the nozzle inlet temperature is further increased. The above content is demonstrated in Section System description and cycle analysis. The mass estimation method and thermodynamic models with verification are introduced in Section Mathematics models. Performance analysis is completed in Section Results and discussion.
SYSTEM DESCRIPTION AND CYCLE ANALYSIS
The propulsion system configuration is demonstrated in Section System description. There are some differences in the thermodynamic process between the system and the conventional combustion engine, which are analyzed in Section System description.
System Description
The configuration diagram and detailed process flow diagram of the compact air-breathing jet hybrid-electric engine coupled with SOFCs (HEFC engine) fed by liquefied natural gas are shown in Figure 1. Air from the atmosphere at state ① is compressed by an intake and a compressor in turn. Then, the air is divided into two parts. Some are provided for the SOFC cathode, and others are directly utilized by a combustor at state ③. The air exhaust preheater is integrated into the nozzle to heat air from the compressor at state ②. A fuel exhaust heat exchanger is also integrated into the combustor, which is a common method for protecting the combustor wall of ramjets (Jiang et al., 2018). The air and fuel preheated are respectively provided for SOFC cathode and anode. SOFCs generate electricity and drive the compressor by the motor. Next, SOFC exhaust, part compressed air, and some fresh fuel are mixed and burnt in the combustor. Finally, the combustor exhaust expands and outputs propulsion power in the nozzle. The alone heat exchanger and reformer are designed in our previous system (Ji et al., 2019b), which are removed or integrated with other components in this work.
The fuel exhaust heat exchanger and the air exhaust preheater are specially designed for the HEFC engine, which is demonstrated as follows. Advantages of the fuel exhaust heat exchanger: 1) the wall of combustors can be cooled by fuels. 2) The liquid fuel is converted into steam, which can be directly utilized by SOFCs. 3) Because the combustor is cooled, the limitation combustion temperature is increased. Advantages of Frontiers in Energy Research | www.frontiersin.org February 2021 | Volume 8 | Article 613205 the air exhaust preheater: 1) the wall of nozzles can be cooled by compressed air.
2) The temperature of the air can be improved. There are some differences between the HEFC engine and traditional turbojet engines. For the turbojet engines, turbines are connected to compressors via a shaft (Şöhret, 2018). However, the shaft between turbines and compressors does not exist for the HEFC engine. Efforts have been made to decrease the weight of HEFC engines, compared with a traditional SOFC gas turbine hybrid system for the electric supply (Lv et al., 2016). The water pump, the evaporator, the mixer, the turbine, the reformer, and the fuel compressor are simplified. The power density of SOFC stacks is 0.17 kW/kg, and that of SOFC systems is 0.035 kW/kg, according to (Chick and Rinker, 2010). Therefore, the improvement of the power density for the HEFC engine is possible. Besides, the power density of SOFCs increases with the increase of years. The power density of SOFCs is about 0.263 kW/kg in 2015, and the one predicted in 2030 is 0.684 kW/kg, according to (Valencia et al., 2015). With the increment of SOFC power density, the power-weight ratio of the HEFC engine may be superior to that of ICEs.
Analysis of Thermodynamic Processes
HEFC engines undergo a special thermodynamic process in the nozzle. The exhaust exchanges heat with cold air while expanding, which is shown in Figure 2A. The working fluids will undergo expansion from state four to state eight if it does not exchange heat with cold air. When the heat exchanging occurs, the working fluids undergo expansion from state four to state five. A simple way can also achieve this in Figure 2B where the combustion outlet temperature and pressure ratios are the same as that in Figure 2A. The working fluids from the compressor at state three are heated to state four. Then, the working fluids undergo heat exchange from state four to state eight and expansion from state eight to state five in turn. However, the nozzle inlet temperature is low by this method. The expansion power in Figure 2B is lower than the one in Figure 2A when the combustor exit temperature and pressure ratio in Figure 2A are the same as that in Figure 2B.
MATHEMATICS MODELS
The mathematics model of the HEFC engine is built to measure the performance of the propulsion system. First, the thermodynamics and mass models are presented in this section. Then, the performance criterion and solution methods of the system are demonstrated.
Model Assumptions
(1) The HEFC engine is in steady-state operation.
(3) Gaseous working fluids are considered as ideal gases.
(4) The air contains 21% oxygen and 79% nitrogen. (5) All components are adiabatic. (6) Carbon deposition is not considered for the SOFC with internal reforming. (7) The detailed layout of the fuel exhaust heat exchanger and the air exhaust preheater is not considered. (8) The mass of the fuel exhaust heat exchanger is neglected. (9) The mass of fuel pumps and pipelines is assumed as 10% of the total mass of the HEFC engine. Thermodynamic Models The thermodynamic model of the HEFC engine is built in this section, which includes the air exhaust preheater model, fuel exhaust exchanger model, intake model, compressor model, and fuel cell model.
Air Exhaust Preheater Model
The polytropic process from state four to state five in Figure 2A is re-described in the red zone in Figure 3 with q < 0 and w > 0. n is the polytropic index, and k is the specific heat ratio. w and q are process work and heat. In this polytropic process, the property of the gas meets the equation.
pv n constant.
(1) Therefore, Subscript 1 represents the inlet of the nozzle, and subscript 2 represents the outlet of the nozzle. The p 2 is atmospheric pressure. The process work can be calculated as, The work can be expressed by Eq. 2.
The equation of state can be expressed as, The equation of polytropic process work in Eq. 4 can be simplified by Eq. 5 as, The heat of the polytropic process can be calculated by the first law of thermodynamics.
The ratio of the heat and work in the polytropic process can be expressed as The Eq. 2 also can be expressed as, By combining Eq. 9 and Eq. 5, the polytropic index can be expressed as, The rate of heat exchange is equal to the energy provided for the compressed air. The polytropic index and nozzle outlet temperature can be acquired by combining Eqs 7, 10. The real process in the air exhaust preheater is extremely complicated. However, the polytropic process described by Eqs 1-10 can be achieved by the reasonable arrangement of the preheater and the specific geometrical design of the nozzle, which takes considerable time. In addition, the qualitative conclusion may not be drawn because the analysis of the novel thermal cycle can be limited by complex physical layout models. Therefore, preliminary performance analysis from the perspective of thermodynamics is important. The effects of real physical conditions are considered and can be reflected by total pressure recovery coefficients.
Fuel Exhaust Heat Exchanger Model
The outlet temperature of the fuel exhaust heat exchanger is designed as SOFC anode inlet temperature. The heat transfer rate can be drawn by Eq. 11. The combustion reaction is shown as Eq. 12. Particularly, the oxygen in the combustor can be used up by adding fresh fuel to the combustor. The equivalence ratio of the combustor flow is exactly stoichiometric. The combustion temperature can be calculated according to the energy conservation equation as Eq. 13. The total pressure recovery coefficient of the fuel channel is assumed as 0.92. The total pressure recovery coefficient of combustor ξ comb is assumed as 0.98. The combustion efficiency is assumed as 0.97.
For the air exhaust preheater, the heat transfer rate in the nozzle is equal to the energy provided for the compressed air. The outlet pressure of nozzles is equal to atmospheric pressure. The total pressure recovery coefficient of the air channel ξ hx is assumed as 0.95.
Intake and Compressor Model
The thermal process in an intake is considered as an adiabatic compression process. The total pressure recovery coefficient of the intake is from the practical relation recommended by NASA (Jansen et al., 2017). The outlet parameters of the intake can be drawn by Eqs 18-21. The adiabatic compressor model is assumed in this study (Korakianitis and Wilson, 1992). The outlet parameters of the compressor can be calculated by Eqs 22-25. (Tornabene et al., 2005;Valencia et al., 2015;Cirigliano et al., 2017).
Fuel Cell Model
The lumped mathematical model of SOFCs has been demonstrated in our previous paper (Ji et al., 2020). Fuel internal reforming occurs in the SOFC anode channel, which utilizes the water steam from the electrochemical reaction (Ramírez-Minguela et al., 2018) as Supplementary Equations Table S1, which produces a mass of hydrogen. Hydrogen reacts with oxygen in the three-phase boundary (TPB) as Supplementary Equations S3. The concentrations of hydrogen, water steam, and oxygen in the three-phase boundaries are calculated by porousmedia gas-phase transport models (Aguiar et al., 2004) (Chan et al., 2001), which include ohmic, concentration, and activation polarization as Supplementary Equation S11. Ohmic losses produce because of resistance to conduction of ions and electrons. This voltage drop can be expressed as Supplementary Equation S12. The electrode overpotential losses can be divided into activation and concentration overpotential, which are connected with the electrochemical reactions. When the electrode reaction is hindered by the effects of mass transport, the concentration overpotentials occur (Aguiar et al., 2004). The concentration polarization can be calculated by Supplementary Equation S13, according to Hughes and Dimitri et al. (Hughes, 2011). The kinetics of reactions on the electrode reaction surface is reflected by activation overpotentials, which is usually represented by the non-linear Butler-Volmer equation (Chan et al., 2001). The anode and cathode activation polarization can be derived as Supplementary Equations S14, S15, respectively. In addition, the anode and cathode exchange densities are affected by microstructure and operational conditions (Yonekura et al., 2011) as Supplementary Equations S16, S17. Fuel cell physical parameters can be easily found in the literature (Chan et al., 2001).
The performance of SOFCs is defined in Supplementary materials Supplementary Table S4. The output power of SOFCs is the production of the voltage, current, and fuel cell area as Supplementary Equation S18. The electric efficiency of fuel cells is the ratio of electric power and fuel energy as Supplementary Equation S19. The fuel utilization of SOFCs is equal to the ratio of molar flow rate of hydrogen consumed and the maximum molar flow rate of hydrogen from fuel as Supplementary Equation S20. The total pressure recovery coefficient of SOFCs ξ cell is assumed as 0.97.
Mass Estimation
For a traditional SOFC gas turbine hybrid system, (Tornabene et al., 2005) have built detailed mass models to estimate its performance. They analyzed the effects of thermal parameters on the component mass. The mass of components is in proportion to the mass flow and is affected by pressure. HEFC engines are similar to the SOFC gas turbine hybrid system. The difference between each other is that the compressor is powered by fuel cells for the former. Therefore, the configuration of feed systems of air and fuels for SOFCs between each other are similar. In this study, the mass equations of compressors, air exhaust preheater, and combustors are fitted by the data provided by (Tornabene et al., 2005) which are shown in Table 1 as Eqs 26-28 and 32-43. The mass of SOFCs is determined by power density as Eqs 30, 31 (Valencia et al., 2015). The specific power of planar SOFCs is predicted to be about 0.263 kW/kg in 2015 and 0.6575 kW/kg in 2030. (Cirigliano et al., 2017) fit the mass equation of motors with R 2 of 0.832 as Eq. 29. Apart from the aforementioned components, mixers, splitters, fuel pumps, and pipelines are included in the HEFC engines. The mass sum of these components is assumed as 10% of the total mass of the HEFC engine.
Performance Criterion
The performance of the HEFC engine can be evaluated by thrust weight ratio/power-weight ratio, specific impulse, specific thrust, thermal efficiency, overall efficiency, propulsion efficiency.
The mass flow of fuel injected into the engine is.
The energy of fuel is: The output power of nozzles is w, according to Section Thermodynamic models. Therefore, the outlet velocity of working fluids in the nozzle is: u nozz,out 2 · w m fuel,tota + 1 .
The effective kinetic energy produced by the working fluids is: According to momentum theory, the thrust produced by the HEFC engine is F 1 + m fuel,tota · u nozz,out · u inta,in .
The specific thrust is the ratio of thrust and air flow.
The thrust weight ratio is the ratio of thrust and weight for the HEFC engine. It can be expressed as: The power-weight ratio is the ratio of kinetic energy and quality. It can be expressed as: The specific impulse is a measure of how effectively jet engine uses fuel. It is dimensionally equivalent to the generated thrust divided by the propellant mass flow rate. The thermal efficiency of the HEFC engine is a measure of how efficient the HEFC engine is converting heat to kinetic energy. It can be expressed as: The propulsion efficiency tells us how efficient the HEFC engine is using the kinetic energy generated by the gas generator for propulsion purposes.
The overall efficiency is the production of thermal efficiency and propulsion efficiency.
Solution Methods
The computer flowchart of the HEFC engine is based on the models described in Section Model assumptions-Performance criterion, which is shown in Figure 4. The first part of this computer code contains the HEFC engine's input information, including the component efficiency, altitude, Mach number, pressure ratio, etc. In this work, the inlet temperature of the anode and cathode of SOFCs are constants. After intake and compressor calculations, the mass flow of fuel for reformers is guessed. Then, the SOFC calculation begins. The non-linear reforming, electrochemical equations, and cell's thermal equations are solved simultaneously. The outcomes of SOFC calculation include the SOFC outlet temperature, voltage loss, real voltage, electricity efficiency, etc. If the SOFC power is not equal to the compressor power, the mass flow of fuel for the reformer will be guessed again. In case the convergence conditions of the cycle are fulfilled, the calculation of the combustor, fuel exhaust heat exchanger, nozzle, and air exhaust preheater begins in turn. Finally, the performance parameters of the HEFC engine are output, which includes thrust power ratio, specific impulse, etc.
Model Verification
The purpose of verification is to quantify the error of numerical simulation by the demonstration of convergence for the particular model under consideration (Thacker et al., 2004). SOFCs are a key component in the HEFC engine. The rest of the component models have been widely cited and are without verification assessments. The polarization model of SOFCs has been verified in our previous work (Ji et al., 2019c).
Based on mass models discussed in Section Mass estimation, the calculation results have been validated with that of (Tornabene et al., 2005). The input condition for validating the mass model is shown in Table 2.
There is a small difference between our results and Tornabene et al.'s as shown in Table 3. The code-to-code comparisons as a means of calculation verification are completed.
RESULTS AND DISCUSSION
The performance of the HEFC engine is shown according to the built mathematical model and compared with that of the conventional turbine engine. Then, the effects of pressure ratios on the performance of the engine are demonstrated, and sensitivity analysis is completed.
Performance of the HEFC Engine
Stream data, component mass, performance parameters are shown in Tables 4-6 with zero altitudes and zero velocity. Under the condition, the thrust of the engine is the highest and called installed thrust. The designed compressor pressure ratio is 6. In general, SOFC gas turbine hybrid systems are equipped with a one-stage centripetal compressor. The pressure ratio of the compressor is 2- 6 Thacker et al., 2004. The performance of the HEFC engine is shown as Case A in Table 5. The performance of the traditional turbojet engines is shown as Cases B and C. Case B: The fuel equivalence ratio is always equal to the stoichiometric ratio. Case C: The combustion temperature is constant and equal to 2,000K. The models of intakes, compressors, combustors, and turbines (if any) in Case A are completely the same as those in Cases B and C. In addition, fuels for these engines are liquefied natural gas.
The combustion temperature is 2498K in Case A, and the temperature in Case B is 2379K. The equivalence ratios for the two cases are both stoichiometric ratios. Because the pressure ratio is low, the nozzle outlet temperature is considerably high in Table 4, and the thermal efficiency of the HEFC engine is only 0.254 in Table 5. The thermal efficiency in Case A increases by 9.01% and 6.28%, compared with that in Cases B and C. The energy conversion efficiency of fuel cells is high, which converts the chemical energy of fuel into electricity directly. The loss caused by fuel cells becomes heat energy, which can be utilized by the combustor and nozzle. Therefore, the thermal efficiency in Case A is higher than that in Case B and C. The high nozzle inlet temperature will produce huge thrust/power. The specific thrust of the HEFC engine is 1253 N/(kg.s −1 ). It respectively increases by .
Parameters Symbols Units Value
Pressure ratios of the compressor π -1.83 Air flow of the compressor m comp,a kg/s 1.01 Mass flow injected into the reformer m refo,f kg/s 0.0141 Mass flow injected into the combustor m comb,a kg/s 1.0369 Inlet pressure of the combustor p comb kpa 130 TABLE 3 | Simulation and reference data of the mass model (Tornabene et al., 2005).
Parameters
Tornabene (Tornabene et al., 2005) 4.4% and 20.3%, compared with Cases B and C. The HEFC engine has an obvious advantage over the conventional gas turbine engine in specific power. The specific impulse of the HEFC engine 2,189 s. It is higher than that in Case B and lower than that in Case C. The specific impulse in Case C is highest because the propulsion efficiency in Case C is higher than that in Case A. Table 6 shows that the weight of the HEFC engine is high up to 659.1kg, and Figure 5 shows the component mass distribution of the engine. SOFCs makes up most of the weight of the engine, which is over 50%. The sum of combustors weight and motors weight makes up 20%-30% weight of the engine. Decreasing SOFC weight is the key to decrease the weight of the engine. In addition, improving the power-weight ratio of the motors is meaningful work. The diagram of the weight and power of the several power sources is shown in Figure 6. The pressure ratio changes from two to six for the HEFC engine. Data for internal combustion engines and turbine engines are from Ref (Cirigliano et al., 2017). Obviously, the proposed HEFC engine has an advantage over the internal combustion engines and has a disadvantage over the turbine engines. When progress is made in fuel cells for the future, the power-weight ratio of HEFC engines will increase to a large degree. In addition, the specific thrust of the engine under the highest combustion temperature in this work is about 1.7 kN/(kg/s), which is higher than that in our previous paper of about 1.6 kN/(kg/s) under the same operating conditions (Ji et al., 2019a).
Effects of the Pressure Ratio on the HEFC Engine
Pressure ratios are an important parameter for combustion engines. Figure 7 shows the effects of pressure ratios on the HEFC engines and the traditional turbojet engines (Cases A, B, and C). The combustion temperature decreases with increasing pressure ratios for the HEFC engines. However, the temperature increases for the traditional turbojet engines. For these two engines, two stable flow opening systems are built where the inlet and outlet boundaries are the intake inlet and the combustor outlet. Work or heat is added to the systems. For the HEFC engine, the heat energy is added to the opening system. For the turbojet engine, the mechanical work is added to the opening system. As pressure ratios increase, the compressor power increases. The rate of heat transfer by the air exhaust heat exchanger decreases because of the constant inlet temperature of the SOFC cathode. Therefore, the outlet enthalpy or temperature of these opening systems varies. It is bad for the traditional turbojet engine to increase pressure ratios. The highest combustion temperature is close to 2,700K. The turbine will be malfunction. With increasing pressure ratio, the temperature is about 2,300K for the HEFC engine. The combustion temperature in the HEFC engines is the same as the one in the turbojet engine when the pressure ratio is about 11. Owing to the constant combustion temperature in Case C (2,000K), the specific thrust first increases and then decreases with increasing pressure ratios in Figure 7B. The specific thrust in Case B increases with the increase of the combustion temperature and pressure ratios. In Case A, even though combustion temperature decreases, the specific thrust still increases. The compressor power increases with the increase of pressure ratios and is converted into propulsion power in the nozzle. Thus, the conclusion can be made that the specific power of the HEFC engine increases with pressure ratios. The specific thrust in Case A is superior to that in Case C, which means that the HEFC engine has an advantage over the traditional turbojet engine. In addition, even though the assumption is made that the traditional turbojet engine can be operated at a high temperature up to 2,700K, the HEFC engine still has an obvious advantage over the turbojet engine when the pressure ratio is slightly high. If the pressure ratio is too small, the compressor power is small. The pressure loss in the air exhaust heat exchanger will lead to the performance decline for the HEFC engine. Meanwhile, the conventional turbojet with a few components will have an advantage in the specific thrust.
The specific impulse in Case A and Case B both increases with the increase of specific power under the constant fuel and air flow rate in Figure 7C. In Case C, as the pressure ratio increases, the fuel flow rate decreases. The specific impulse still increases, even though the specific thrust slightly decreases when the pressure ratio is big. It can be seen that the specific impulse in Case C is superior to that in Case A. The reason is that the fuel flow rate in Case C is lower than the one in Case A. The thrust in Case C is also lower than the one in Case A. The specific impulse is the ratio of thrust and fuel flow rate. Therefore, the low flow rate of fuel means that the specific impulse can be high because the variation degree of thrust is lower than the degree of fuel flow rate. The fuel flow rate in Case A is the same as the one in Case B, but the specific impulse in the former is higher than that in the latter. The reason is that thrust in the former is higher than that in the latter. As pressure ratios increases, the advantage increases because the SOFC power increases. In Figure 7D, the thermal efficiency in Case A is higher than that in Cases B or C, which shows that the HEFC engines perform well in the view of thermodynamic cycles. In our previous work (Ji et al., 2019d), the specific impulse of the engine first increases and then decreases where the engine is equipped with the anode and cathode exhaust recirculation. Therefore, the novelty system configuration in this paper can work well under high pressure ratios, which is meaningful.
Sensitivity Analysis
Finding the significant design parameters on the performance of the HEFC engine is important. Therefore, the sensitivity analysis is completed in this section, and the effects of some design parameters on the specific thrust and power-weight ratio of the HEFC engine are investigated and depicted in Figures 8, 9. The variation range for each parameter is about ± 10%.
The total mass flow of fuels is constant with varying fuel utilization because the equivalence ratio is always equal to the stoichiometric ratio. However, this will lead to the change of the fuel flow rate in the SOFC and the one in the combustor. The compressor power and propulsion power are hardly affected. Therefore, the specific thrust is almost not affected by fuel utilization U f in Figure 8. The total pressure recovery coefficients of SOFCs ξ fc , combustor ξ comb and heat exchanger ξ hx have a strong influence on the specific thrust of the HEFC engine. The reason is that the pressure ratio for the nozzle is strongly affected by these coefficients. The decline in the total pressure recovery coefficients will lead to the decline of the pressure ratio for the nozzle. Thus, the thrust decreases. The specific thrust is slightly affected by the transmission efficiency of motors ξ moto and air separation ratio ϕ. SOFC cathode inlet temperature T ca plays an important role in the HEFC engine. An increment of the temperature means that the combustion temperature and thrust both increase. The weight of the motors or SOFCs is extremely sensitive to the compressor power. Figure 9 shows that the power-weight ratio of the HEFC engine is strongly affected by the transmission efficiency of motors ξ moto and power density of SOFCs ψ. The weight of heat exchangers is affected by the total pressure recovery coefficient of ξ hx and air separation ratio ϕ. The weight ratio of the heat exchangers and the HEFC engine is low. Therefore, the power-weight ratio of the engine is not sensitive to these two parameters. The sensitive degrees of the rest of the parameters, such as SOFC cathode inlet temperature on the power-weight ratio, are similar to their sensitive degrees on the specific thrust.
CONCLUSION
In summary, a compact air-breathing jet hybrid-electric engine coupled with SOFCs fueled by liquefied natural gas is proposed and studied. Through simulations, the following key conclusions are drawn.
1) The specific thrust of the HEFC engine is increased by 20.3%, compared with that of the traditional turbojet engine with a combustion temperature of 2000K. Meanwhile, thermal efficiency increased by 6.3%. The weight of SOFCs makes up most of the weight of the engine, which is over 50%. The weighted sum of the combustor and motor makes up 20-30% total weight of the engine. In addition, the specific thrust of the engine under the highest combustion temperature in this work is about 1.7 kN/ (kg/s), which is higher than that in our previous paper of about 1.6 kN/(kg/s) (Ji et al., 2019a).
2) With increasing pressure ratios, the limitation combustion temperature of the traditional turbojet engine increases, but the temperature of the HEFC engine decreases. Even though the assumption is made that the traditional turbojet engine can be operated at a high temperature up to 2,700K, the HEFC engine still has an obvious advantage over the turbojet engine in specific thrust. In addition, the specific impulse increases with the increase of pressure ratios. However, the specific impulse of the engine in our previous configuration first increase and then decrease (Ji et al., 2019d). The novelty system configuration in this paper can work well under high pressure ratios.
3) According to sensitivity analysis, the total pressure recovery coefficients of SOFCs, combustors, and preheaters have a strong influence on the specific thrust of the HEFC engine. The powerweight ratio of the HEFC engine is strongly affected by the transmission efficiency of motors and the power density of SOFCs.
4) The transmission efficiency and power density of motors will increase if superconducting motors can be applied to the engine. However, the motor needs to be cooled generally, and a suitable cold source is essential. Recently, researchers paid more attention to the light materials for SOFC electrodes and electrolytes. The weight of the SOFC will decrease to a large degree if the new materials can be used.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. | 8,170.4 | 2021-02-15T00:00:00.000 | [
"Engineering"
] |
A Static Voltage Security Region for Centralized Wind Power Integration—Part II: Applications
: In Part I of this work, a static voltage security region was introduced to guarantee the safety of wind farm reactive power outputs under both base conditions and N-1 contingency. In this paper, a mathematical representation of the approximate N-1 security region has further studied to provide better coordination among wind farms and help prevent cascading tripping following a single wind farm trip. Besides, the influence of active power on the security region is studied. The proposed methods are demonstrated for N-1 contingency cases in a nine-bus system. The simulations verify that the N-1 security region is a small subset of the security region under base conditions. They also illustrate the fact that if the system is simply operated below the reactive power limits, without coordination among the wind farms, the static voltage is likely to exceed its limit. A two-step optimal adjustment strategy is introduced to shift insecure operating points into the security region under N-1 contingency. Through extensive numerical studies, the effectiveness of the proposed technique is confirmed.
Reactive limit, lower bound and upper bound of wind farm w L a
Linearity index ε i Bus type of wind farm i; ε i ∈ {−1, 0, +1} ξ Bus types of the wind farms; ( ) , ,..., Near points where all wind farm bus types are 1, −1, respectively η i Reactive power operating point of wind farm i η Reactive power operating points of the wind farms
Introduction
Centralized wind power integration in China has been beset by cascading tripping incidents involving wind farms.One of the major reasons for this is the lack of coordinated voltage/reactive power control [1][2][3][4][5][6][7][8][9].A number of techniques have been investigated to maintain the voltage within a specified range and improve the system stability for a single wind farm [10][11][12][13][14][15][16][17].However, in centralized integration of wind power, interdependency among wind farms and cascading tripping events further complicate the voltage control problem.The methods developed for a single wind farm are not applicable, and may even have an adverse effect.A static voltage security region under normal conditions and an online method for describing it were proposed in the first part of this work [1].Furthermore, in order to guarantee that the voltage will remain within limits under both normal operating conditions and wind farm N-1 tripping conditions, N-1 security region is studied in detail in this work.
Besides, it was pointed out in [1] that cascading trips tend to happen very quickly (usually in less than 2 s), rendering an effective response virtually impossible once an incident has begun.Thus, it is much more important to establish preventive control to maintain a reasonable operating status for all the closely coupled wind farms under normal operating conditions, and also to ensure that the wind farms will still be working within acceptable voltage limitations when an N-1 contingency occurs.Note that in this work, an N-1 contingency refers to a single wind farm trip for the sake of convenience.
Therefore, for any wind farm whose reactive power output is within this security region, the corresponding voltage will be within limits.If the operating point is outside the N-1 security region, a preventive adjustment is supposed to be carried out by the automatic voltage control (AVC) system, which necessitates a set of constraints on the wind farm voltages [18][19][20].The problem of how to present such voltage constraints is also considered in this paper.
However, the security region is determined with a specified active power output from the wind farms.In other words, different levels of wind power penetration create different voltage security regions, and thus it is of interest to determine how the security region varies with respect to the active power.In practice, nearly all cascading trip faults have occurred when wind power generation at the wind farms was at a high level.Hence, an analysis of the relationship between the security region and wind power penetration will be of great value.
The remainder of the paper is organized as follows: in Section 2, the security region under N-1 contingency conditions is studied, and an optimal adjustment strategy is proposed to shift insecure operating points into the security region under N-1 contingency.In Section 3, the impact of wind penetration on the security region is examined.A nine-bus system with three wind farms is studied in Section 4, and the security region under N-1 contingency is derived.Numerical results for the optimal adjustment strategy are also presented; these provide an intuitive prospective adjustable voltage range for the AVC with minimum adjustment of the wind farm reactive power outputs.Finally, observations and conclusions are stated in Section 5. [1] In the first part of work, the concept of voltage security region of wind farms could be expressed as a set of constraints limiting the reactive power of each wind farm to maintain its static nodal voltage in the secure range, given the active power generation of each wind farm, which was compared with a sampling-based approach and several different linear approximation techniques.The results showed that the proposed method expected to produce an approximate security region that was very close to the actual one, and could be easily represented in closed mathematical form, while greatly reducing the required computations.
Summary of the First Part of Work
At the same time, it was pointed out in the first part of this work [1] that in order to mitigate the cascading trips, the region should ensure secure operation both under normal operating conditions and N-1 contingencies.It was obvious that normal voltage security region was the basis for N-1 voltage security region to provide better coordination among wind farms and help prevent cascading tripping when a single wind farm was tripped.If an operating point was in the normal security region, but out of the N-1 region, this meant that cascading was probably triggered by the first tripping event.Thus, even if the current operating status was normal, it was not secure enough, and preventive control measures should be carried out according to the proposed N-1 voltage security region.Therefore, we put emphasis on the calculation of normal voltage security region in Part I [1].
N-1 Static Voltage Security Region
Based on the concepts introduced in [1], the static voltage security region when wind farm w is tripped is bounded by the 2m planes 1 where w i L + denotes the i th plane through the near point ξ + when wind farm w is tripped, and w i L − denotes the i th plane through the near point ξ − when wind farm w is tripped.Therefore, the matrices A w+ and A w− of Equations ( 2) and (3) are valid, and the overall N-1 security region can be expressed in terms of 2m(m + 1), such as The matrices will vary in real time according to active wind power generation.
Note that w = 0 denotes normal operating conditions. is the set of m planes belonging to w − .In other words: The linear approximation method (6a) of [1] can be used to determine the center of the security region.Let , η η be the security region when wind farm w is tripped.Then, the N-1 security region can be expressed as . If O a is the center of the security region, it can be written as follows: Here, <max/min>(a,b,c) denotes the operator that extracts a new vector from the vectors (a,b,c), such that each component of the new vector is the maximum/minimum from among the corresponding components of the original vectors.For example, <max>((3,2,1), (1,7,6), (2,5,4)) = (3,7,6).
Similarly, each N-1 contingency can also be assessed according to its area of intersection with normal conditions.The smaller this area, the more insecure the wind farm is.Three proposed assessment indices for each scenario, including both normal conditions and a contingency, are given in Equations ( 5)- (7), where min(a) in Equation ( 6) returns the minimum component of vector a. 6) is negative, the voltage security region does not exist.Otherwise, the index Equation (7) lies within the interval [0, 1], and the contingency is more severe when this index is close to 0:
Minimum-Adjustment Correction Method
When the current operating point is in the normal security region, but not in the N-1 security region, it is desirable to shift the operating point into the N-1 security region with minimum reactive power adjustment.An optimization model is constructed to achieve this goal.If the number of wind farms is m, the optimization model can be written as: where Ω can be further expanded as follows, using TCs: In this optimization model, linear approximation of the security regions is employed.Note that the number of constraints increases quadratically with m.
Two-Step Optimal Adjustment Strategy
Evidently, having the operating point at the center of the N-1 security region may be the best arrangement.After an insecure operating point has been shifted as far as the security region boundary, it can be moved further into the security region, so that the greatest possible margin is maintained between it and the boundary.Figure 1 illustrates a two-step adjustment strategy that will shift an insecure operating point to somewhere inside the N-1 security region.
The first step is the minimum-adjustment correction method.Denote the result of this procedure by O l , which is on the boundary of the security region.In the second step, this new operating point is moved further toward the center of the security region.It is intuitively clear that the nearer the point is to the center of the security region, the greater the aforementioned margin.Accordingly, the operating point is moved from O l to O a , and the area between the two points (defined as the "safe operating range") remains inside the security region because of its convexity.
Figure 2 shows the distribution of voltage in the proposed security region.We know that equipotential lines never intersect one another.For this reason, when the operating point is moved from O l to O a , the corresponding voltage varies monotonically.This provides an intuitive prospective adjustable voltage range for each wind farm, given by: * * ,min ,max where ,min
Impact of Wind Penetration on the Voltage Security Region
The security region is determined with a specified active power output from the wind farms.Therefore, different levels of wind power penetration create different voltage security regions.However, the initial security region is the basis of normal/N-1 security region, so we will put more emphasis on it in the following work.
From the perspective of continuation power flow (CPF), the voltage may initially rise slightly, and then decline to the point of collapse, which is perhaps a different result from the traditional CPF for a load bus.When the nine-bus system is used as an example, the CPF is shown in Figure 3. Since a wind power injection bus can be regarded as a negative load bus, the reverse horizontal coordinate axis is used.If the Thevenin equivalent is used for the point of common coupling (PCC) (Figure 4), when the penetration is low, the impact on the system side is slight, and E th can be regarded as a constant.Thus, an expression for the voltage drop is easily obtained from Equation ( 14), and indicates that the voltage rises slightly with increasing penetration.However, E th cannot remain constant when the penetration is high, since more reactive power is consumed on X th with the transfer of more active power, and more reactive power must be provided to keep the original voltage profile.This is why the security region moves toward the top and right with increasing wind penetration, as shown Figure 5, where initial voltage security regions are plotted for several different levels of wind penetration.
The security voltage region is obtained by the method proposed in [1] based on a modified nine-bus test system from [21].Table 1 lists the PCC voltage and linearity index for different wind penetration levels.With increased wind penetration, the PCC voltage decreases due to increased reactive power losses.The linearity index L a increases as well, indicating increased nonlinearity of the boundaries: ( ) where P w and Q w are the active and reactive power of the wind farm; R th and X th are the Thevenin equivalent parameters, and E th is the equivalent voltage.It is also of interest to know how the area of the security region changes with increased wind penetration.To quantify this, linear approximation of the boundaries is adopted to calculate the area enclosed by Ω S , using the following equation: where φ is the angle between 1 3
T T
and 2 4 T T obtained from: Not only does the initial security region move toward the top and right with increasing wind penetration, but its area (calculated via Equations ( 15) and ( 16), using the coordinates of the four corner points given in Table 2) also shrinks.This is because the voltage tends toward the point of collapse with higher wind power penetration, as Figure 1 indicates.If the voltage collapses, the security region disappears.Therefore, the area of the security region decreases steadily toward the vanishing point with increasing wind power penetration.
N-1 Voltage Security Region Analysis
The system tested in [1] is also used in this section.Three wind farms are considered.Assume that under normal operating conditions, the active power outputs of the wind farms are 140, 130, and 120 MW, respectively.When one wind farm is tripped, the system is lightly loaded, and the charging capacity of the branch between the PCC and the tripped wind farm is still active.Consequently, the bus voltages subsequently increase.
Suppose each wind farm total generation is given as P w1 = [120, 140], P w2 = [100, 120] and P w3 = [80, 100].It can be observed from Figure 6a-c that the voltage magnitude of each wind farm will exceeded the upper operational limit after N-1 contingency due to lower loading on the transmission lines and slow switch-off of the capacitance banks.The spiked voltages led to further tripping of other wind farms by the overvoltage protection system.Although the wind power output is still random after N-1 contingency, it is institutive that lower load will lead to higher spiked voltage magnitude.Therefore, we can choose that the worst case for further consideration, shown in Figure 6d, such that when one wind farm is tripped, the other wind farms' generation reach to their lowest possible generation.For instance, if P w1 is tripped, i.e., P w1 = 0, the worst case is that P w2 = 100 and P w3 = 80.Note that the voltages at buses 1 and 3 do not change because bus 1 is a slack bus and bus 3 is a PV bus.It should be pointed out that the reactive power of PV bus should be also limited to its upper and lower bound, and the bus type is desired to be converted from PV type to PQ type if the reactive power reaches its bound.But in this study, the reactive power doesn't reach the bound so that it leads to a constant value both at normal condition and N-1 contingencies.
Table 3 further compares the wind farm voltages in the case where the connecting capacitance is cut off and the case where the connecting capacitance remains in service.It is clear that when the capacitance is not cut off, the voltage magnitudes at the wind farms will increase sharply.Note: Ci-on (i = 1,2,3) indicates that wind farm i is tripped, but the capacitance at this wind farm remains in service, whereas Ci-off indicates that the capacitance is cut off when the wind farm is tripped.Bold and underlined entries indicate voltage violations.Each ratio entry is the ratio of the voltage variation after a wind farm trip to normal conditions.
The voltage security region under N-1 contingency conditions will be calculated in the followed steps.Note that in the following calculations, capacitances are not cut off when a wind farm is tripped to represent a worst-case scenario.
A
Step 5: N-1 voltage security region The N-1 voltage security region can be expressed as { } , where:
Q
It should be noted that although the N-1 security region shown in Figure 8 is similar in shape to the normal condition security region shown in Figure 7, it is actually a subset of the normal condition security region, and therefore significantly smaller.Coincidentally, in this case, the N-1 security region is entirely within the reactive power limits, whereas the normal condition security region is not.In Figure 8, there are 12 planes belonging to w + and 12 planes belonging to w − .Each type of line represents a different contingency, and the intersection of these planes constitutes the N-1 security region; i.e., the reactive power within this region under normal conditions could guarantee security under both normal operation and an N-1 contingency.
Figure 8. Projection of the N-1 voltage security region on the (Q w1 , Q w2 )-plane.α 0 , α 1 , α 2 , and α 3 represents the planes belonging to A 0 , A 1 , A 2 , and A 3 .Last but not least, the N-1 voltage security region may not exist when wind penetration increases radically.Intuitively, the higher the penetration is, the greater the reactive power required to maintain the voltage in the safety region.If one wind farm is tripped at such a time, the voltage at each wind farm is certain to rise because of the slow switch-off of the capacitance banks, and may not remain inside the N-1 security region.In terms of the voltage security region, the area of the normal security region will decrease steadily with increasing penetration (see Table 2 and Figure 3).A comparison of Figures 7-9 also implies that the N-1 voltage security region may shrink, so that high penetration will shift the normal voltage security region further and further from the N-1 conditions (see Figure 9).Thus, if P w increases from 0 MW, the area of the intersection decreases.The three indices defined in Equations ( 5)− (7) were calculated for different levels of wind power penetration, and the results are listed in Table 4.The following conclusions may be drawn.(i) Under normal conditions, I u decreases with increasing wind power penetration.In particular, I u would decrease further after an N-1 contingency.Thus, if I u < 0, the area of the intersection would vanish, as in No. 3.This index can therefore be used to assess the existence of the voltage security region.(ii) Under normal conditions, I t decreases with increasing wind power penetration.However, I u would increase after an N-1 contingency.This index describes the approximate size of the voltage security region.(iii) I t decreases with increasing wind power penetration, and would further decrease after an N-1 contingency.This index describes the approximate size of the intersection between normal conditions and the contingency.(iv) A wind farm with higher I t has a higher risk of insecurity after tripping.Observe that when the three wind farms have distinct penetration levels (bottom section of Table 4), I u is positive and I t is far from 0 under each condition, so that the voltage security region exists.However, the minimum of I s occurs when w 1 is tripped, and the maximum occurs when w 3 is tripped.Therefore, the insecurity risk of a w 1 trip is greater than that of w 3 .
Curtailment is an effective method for restoring the N-1 security region.Accordingly, it should be implemented, and will be studied in future work.
Two-Step Optimal Adjustment Strategy
Assume that the voltage security region exists, and at a given operating point, the reactive power outputs of the three wind farms are (20,20,10) MVar.From Figure 7, the operating point is secure under normal conditions because it is within the voltage security region.However, when a single wind farm is tripped and the connecting capacitance is still in service, the operating point will be outside the N-1 security region, as Figure 8 shows.The wind farm voltages increase beyond their upper bounds, and thus the operating point moves into the insecure region.An adjustment strategy must be employed to return the operating point to the secure region.
The proposed two-step adjustment strategy is as follows: in the first step, shift the insecure operating point to the N-1 security boundary with minimum reactive power regulation.Then, in the second step, move the operating point toward the center of the security region, so that a security margin is maintained.Of course, minimum adjustment is only one of a number of effective strategies for adjusting an insecure operating point.Depending on the N-1 security region, various adjustments with various objectives could be employed.
Step 1: Shift the insecure operating point to the security region boundary
The optimization model of Section 2 is: where α 0+ , α 1+ , α 2+ , and α 3+ were calculated in Section III.This quadratic programming problem can easily be solved, yielding an optimal objective value of 499.1831.
Step 2: Shift the operating point from the boundary to the interior of the security region Using the two near points calculated in Section III, we can obtain the reactive power range for determining the center of the security region.This range is different for each N-1 contingency condition.The intersection of the ranges is . Then, using Equation (28) of [1], the center of the security region can be calculated as O a = (−0.28,−0.35, −0.5).
The voltage magnitudes at the wind farms before/after adjustment are compared in Table 5.Under normal conditions, the voltage magnitudes are within limits.However, when one of the wind farms is tripped, the voltage magnitudes at some wind farms exceed their upper bounds, indicating that the original operating point is not within the N-1 voltage security region.After the first optimal adjustment step has been taken, the operating point moves to the N-1 voltage security boundary.For instance, when wind farm w1 is tripped, U w2 reaches 1.101 p.u., slightly exceeding the upper bound of 1.1 p.u., due to the error introduced by using linear security region boundary components to approximate the actual nonlinear boundary components.Nevertheless, the corresponding reactive power remains quite close to the security region boundary.Moreover, thanks to the larger security margin obtained in the second step of the center-adjustment strategy, U w2 is lowered to 1.080 p.u. when wind farm w 1 is tripped, which is well under the upper bound 1.1 p.u.Hence, the corrected operating point is completely within the N-1 voltage security region.
At the same time, the adjustable voltage range under normal conditions lies between the results of step 1 and step 2; i.e., U w1 = [0.954,0.985], U w2 = [0.963,0.994], and U w3 = [0.954,0.985].To further illustrate the effectiveness of the adjustable voltage range in the minimum-adjustment and center-adjustment models, 10,000 operating point samples (in the form of reactive power) from the center to the minimum-adjustment point were randomly generated by Monte Carlo simulation and tested.The voltage magnitude distribution was easily obtained from the power flow, and is shown in Figure 10 before and after wind farm tripping.As an interesting example, note that when wind farm i was tripped, the voltage magnitudes of all wind farms increased, but the voltage U i varied less than that of the other wind farms.
Conclusions
Based on the concepts and technique proposed in [1], a number of observations were made.First, simply operating below the reactive power limits does not guarantee that voltages will remain within limits, and hence a voltage security region is a must.Second, higher wind penetration leads to a higher degree of nonlinearity of the security region boundary components.Third, the size of the security region diminishes with increasing wind penetration.
The effect of wind farm tripping was also examined.Wind farm voltages will increase significantly when a wind farm is tripped if the connecting capacitance is not cut off.An optimal adjustment strategy was demonstrated on an insecure operating point outside the N-1 security region.The minimum-adjustment correction model was used to shift the point to the boundary of N-1 security region, and ultimately the adjustable voltage range of each wind farm was obtained under normal conditions.
The proposed voltage security region and adjustment strategy can be used to achieve better coordination among wind farm reactive power controls, and help prevent cascading tripping following a single wind farm trip.However, the N-1 voltage security region shrinks to the vanishing point when wind penetration increases radically.Curtailment is an effective method of restoring the security region, and will be studied in future work.
∇
Tangent plane at a near point Δ Cutting plane through an inner point and a near point Ω Initial static voltage security region Ω VSR Final static voltage security region 1 VSR N − Ω N-1 static voltage security region of m planes belonging to w + and
I
represents the area of the approximate security region for each scenario, while u w I and s w I represent the approximate areas of each N-1 contingency and normal conditions, respectively.If the index Equation (
Figure 1 .
Figure 1.A two-step optimal adjustment strategy for correction.
1
and maximum voltages of wind farm i in the adjustable region.as voltage constraints in future AVC applications.
Figure 2 .
Figure 2. Voltage distribution in the proposed voltage security region.
Figure 3 .
Figure 3. CPF of the PCC in a nine-bus system.
Figure 4 .
Figure 4. Thevenin equivalent of a wind farm.
Figure 5 .
Figure 5.Initial voltage security region Ω S for different wind penetration levels.
Figure 6 .
Figure 6.Bus voltage magnitude under normal condition and N-1 contingencies (a) wind farm 1 is tripped and the other wind power generation is stochastic; (b) wind farm 2 is tripped and the other wind power generation is stochastic; (c) wind farm 3 is tripped and the other wind power generation is stochastic and (d) the worst case of N-1 contingencies.
A A .It is bounded by 24 planes, of which 12 are associated with w + and the other 12 are associated with w − , as shown in Figure8.With reactive power limits taken into account, the final N-1 voltage security region is
Figure 9 .
Figure 9. Projection of the N-1 voltage security region on the (Q w1 , Q w2 )-plane (with higher penetration).
Table 1 .
PCC voltage and linearity index for different wind penetration levels.
Table 2 .
Area of the security region for different wind penetration levels.
Table 3 .
Voltage magnitudes at wind farms under normal and N-1 contingency conditions (with/without capacitance cut-off).
Table 4 .
Three indices for different levels of wind power penetration.
Table 5 .
Comparison of the voltage magnitudes at the wind farms under normal and N-1 contingency conditions before/after adjustment. | 6,195 | 2014-01-22T00:00:00.000 | [
"Computer Science"
] |
Effects of doxofylline as an adjuvant on severe exacerbation and long‐term prognosis for COPD with different clinical subtypes
Abstract Objective This study aimed to investigate the effectiveness of doxofylline as an adjuvant in reducing severe exacerbation for different clinical subtypes of chronic obstructive pulmonary disease (COPD). Methods The clinical trial was an open‐label non‐randomized clinical trial that enrolled patients with COPD. The patients were divided into two groups (doxofylline group[DG] and non‐doxofylline group[NDG]) according to whether the adjuvant was used. Based on the proportion of inflammatory cells present, the patients were divided into neutrophilic, eosinophilic, and mixed granulocytic subtypes. The rates of severe acute exacerbation, use of glucocorticoids, and clinical symptoms were followed up in the first month, the third month, and the sixth month after discharge. Results A total of 155 participants were included in the study. The average age of the participants was 71.2 ± 10.1 years, 52.3% of the patients were male, and 29.7% of the participants had extremely severe cases of COPD. In the third month after discharge the numbers of patients exhibiting severe exacerbation among the neutrophilic subtype were 5 (6.6%) in the DG versus 17 (22.4%) in the NDG (incidence rate ratio[IRR] = 0.4 [95% CI: 0.2–0.9] P = 0.024). In the sixth month after discharge, the numbers were 3 (3.9%) versus 13 (17.1%; IRR = 0.3 [95%; CI: 0.1–0.9], P = 0.045), and those for the eosinophilic subtype were 0 (0.0%) versus 4 (14.8%), P = 0.02. In the eosinophilic subtype, the results for forced expiratory volume in the first second and maximal mid‐expiratory flow were significantly higher in the DG. The mean neutrophil and eosinophil levels were significantly lower than in the NDG among the neutrophilic subtype, and the neutrophil percentage was lower than in the NDG among the eosinophilic subtype. At the six‐month follow‐up, the dose adjustment rates of the neutrophilic and eosinophilic subtypes showed a significant difference (P< 0.05). Conclusions As an adjuvant drug, doxofylline has a good therapeutic effect on patients with the neutrophilic and eosinophilic clinical subtypes of COPD. It can reduce the incidence of severe exacerbation, the use of glucocorticoids, and inflammatory reactions in the long term (when used for a minimum of 3 months).
It can reduce the incidence of severe exacerbation, the use of glucocorticoids, and inflammatory reactions in the long term (when used for a minimum of 3 months).
K E Y W O R D S
COPD, different clinical subtypes, dose adjustment rate of glucocorticoid, doxofylline, severe acute exacerbations
| INTRODUCTION
Chronic obstructive pulmonary disease (COPD) is a common and high-incidence disease affecting the respiratory system. At present, the prevalence rate of COPD among people over 40 years old in China is 13.7%, 1 equating to about 100 million patients. The overall disease burden of COPD is ranked third among acute and chronic diseases, meaning that it represents a heavy burden to the social economy and public health globally. 2 From continuous in-depth research and developments in precision medicines, it has been realized that there are individual differences among patients with COPD in terms of, for example, susceptibility levels, exacerbation numbers, and lung function decline rates. In the past, pulmonary function was the core concern in the diagnosis and treatment of COPD; however, recent studies have found that forced expiratory volume in the first second (FEV 1 ) alone cannot objectively reflect the complexity and heterogeneity of COPD. 3,4 The subtypes of COPD describe the disease attributes (single or multiple) among individual differences in patients, which are closely related to clinical prognosis (symptoms, acute exacerbation, response to treatment, disease progression rate, or time until death). 5,6 It is therefore suggested that subtypes should be taken into account to maximize the risk/benefit ratio of COPD treatment.
Bronchodilators and corticosteroids are the primary drugs used for the treatment of COPD at the acute exacerbation and stable stages, and they can be used independently or in combination. However, some patients have low sensitivity to corticosteroid therapy, meaning that using high doses can lead to decreased sensitivity and adverse reactions such as pneumonia and osteoporosis. 7 Theophylline has been used in the treatment of COPD and asthma since 1937; however, because of the narrow safety treatment window, the GOLD Management Strategy guidelines recommended that it should only be used in patients who do not benefit from other bronchodilators and cannot afford treatment. 8 Doxofylline is a new derivative of methylxanthine: Its pharmacological effect is so different from theophylline that it cannot simply be regarded as modified theophylline. It has no significant effect on any known phosphodiesterase isotype, no significant antagonistic effect on the adenosine receptors, and no direct effect on histone deacetylase, and it interacts with the β 2 -adrenoceptor. [9][10][11] At the same time, combining it with corticosteroids can increase sensitivity and reverse the corticosteroids' drug resistance. 12 In the current context of the disease burden of COPD in China, doxofylline treatments are still widely used. However, at present, there is insufficient evidence regarding the effectiveness of doxofylline as an adjuvant on the deterioration, hospitalization rate, symptom improvement, and prognosis of different clinical subtypes of COPD. There are some studies in China, but the results are different from those in other countries, and the results are controversial. It is not yet clear whether doxofylline can be used as an adjuvant to the standard treatment of COPD, and there is a lack of domestic data on the clinical and economic benefits for patients who are unable to obtain adequate control from other pharmacological categories and have difficulty using medicine that must be inhaled.
The objective of this study was to compare the effectiveness of doxofylline in reducing severe acute exacerbation and improving long-term prognosis in different clinical subtypes of COPD and to explore whether it can be used as an adjuvant for standard COPD treatments. Positive findings would indicate clinical and economic benefits for patients who cannot get enough control from other treatment options and find it difficult to use inhaled medicine and would provide a reference basis for the control of treatment drugs in the stable phase of COPD.
2 | METHODS 2.1 | Study design and oversight G * Power was used to estimate the sample size, effect size f = 1, α = 0.05, 1 À β = 0.8. The estimated required sample size is 73, the lost rate was 5%, and the final sample size is 76. A total of 155 samples met the inclusion and exclusion criteria, and the sample size will continue to increase in the follow-up study.
This clinical trial was an open-label non-randomized clinical trial that enrolled patients with acute exacerbation of COPD in the Department of Respiratory and Critical Diseases at the Central Hospital Affiliated to Shenyang Medical College between September 13, 2019, and July 31, 2020. The final follow-up ended on January 31, 2021. The study comprised a 1-week drug adaptation period followed by a treatment phase. The patients were divided into two groups (doxofylline group [DG]: n = 68; non-doxofylline group [NDG]: n = 87) according to whether doxofylline was used in the treatment plan. After discharge, the DG continued to take doxofylline sustained-release tablets (0.2 g bid, oral for 6 months) alongside inhaled drugs, whereas the NDG only used inhaled drugs. Adherence was assessed by counting the remaining pills at drug returns at the first-, third-, and sixth-month follow-ups. Baseline data were collected by face-to-face assessments conducted within 72 h of admission. The members of the research group, who each received the same rigorous training, were responsible for the follow-ups with chronic disease management. The follow-ups, including medication return and dispensing of new medication, were carried out at the dates of the first, third, and sixth months after discharge.
This study was conducted in line with the Helsinki Statement and approved by the Ethics Committee of Shenyang Medical College Hospital. All patients provided written informed consent before undertaking the study.
| Participants
The inclusion criteria for participants were as follows: (1) aged 40-85 with a predominant respiratory diagnosis of COPD (FEV 1 /forced vital capacity [FVC] ratio of <0.7); (2) able to cooperate to complete the postbronchodilator spirometry; (3) able to understand and independently complete the COPD Assessment Test (CAT), the modified Medical Research Council (mMRC) Questionnaire, and related questionnaires after explanation by the investigators; and (4) willing to voluntarily participate in the study and sign the informed consent form. The exclusion criteria were as follows: (1) participating in other clinical studies; (2) chronic or acute respiratory diseases other than COPD, such as active pulmonary tuberculosis, lung tumor, interstitial pneumonia, and pleural effusion; (3) suffering from primary cardiovascular disease or severe liver, kidney, cerebrovascular, or hematological diseases; (4) suffering from a malignant tumor; (5) unclear consciousness, mental disorder, or neurological history and physical activity disorder; (6) allergic to doxofylline or xanthine derivatives; and (7) unwilling to cooperate, found it difficult to fill out the questionnaire, or unable to communicate at all. The withdrawal criteria were as follows: (1) failed to take medicine regularly as required; (2) serious adverse events occurred and patient should not continue to undergo the trial; and (3) subject asked to withdraw.
| Outcomes
The primary outcome was the number of participantreported severe exacerbations requiring hospital admission during the 6-month treatment period. In addition to exacerbation data, the following secondary outcomes were collected: dose adjustment rate of inhaled drugs containing glucocorticoid; adverse events; clinical symptoms; COPD-related health status (CAT scale, ≤5 being the norm for healthy nonsmokers and >30 indicating a very high effect of COPD on quality of life) 13 ; mMRC dyspnea score (range: 0 [not troubled by breathlessness except on strenuous exercise] to 4 [too breathless to leave the house or breathless when dressing or undressing]) 14 ; inflammatory cells in serum (leukocyte, neutrophil percentage, lymphocyte percentage, eosinophil percentage); and changes in post-bronchodilator spirometry. For all of these outcomes, the stable stage was compared with the sixth month after discharge. The classification for COPD severity was based on GOLD criteria.
| Statistical methods
The analysis was carried out according to the intention-to-treat principle. A per-protocol analysis, excluding participants classed as non-adherent (<80% of doses taken), was performed to measure sensitivity. The primary clinical outcomes of the number of COPD exacerbations for each subgroup were compared using a negative binomial model with an appropriate dispersion parameter to adjust for inter-participant variability. Estimates were adjusted for baseline covariates known to be related to outcome: age, smoking index, GOLD stage, number of exacerbations in the previous 1 year, body mass index, and baseline treatment for COPD. The subgroup analyses were undertaken by adding a treatment  variable interaction term to the model using the primary outcome. Analysis was performed using SPSS version 26.0. The clinical subtypes data of the DG and NDG were analyzed with mean ± SD (normal distribution) or median/interquartile (n, %), tested with the chi-square test. The data with irregular variance and non-normally distributed detailed range (non-normal distribution) were tested by Mann-Whitney U and Kruskal-Wallis. Counting data were described by ratio or composition ratio, tested with the Mann-Whitney U or the Kruskal-Wallis test. A 5% two-sided significance level was used throughout.
| Participant characteristics
A total of 172 participants were included: 82 in the DG and 90 in the NDG. During the 1-week drug adaptation and follow-up period, there were 17 further exclusions because of adverse reactions (n = 5), withdrawal from the study (n = 4), low compliance (n = 3), or loss to follow-up (n = 5). A final total of 155 patients completed the study: 68 to the DG and 87 to the NDG. Participant involvement in the trial is outlined in Figure 1. There were no clinically significant differences in baseline data characteristics between the DG and the NDG ( Table 1). The mean age of the participants was 70.8 ± 10.8 years, 52.3% were male, the mean BMI was 22.6 kg/m 2 , and 56.1% were smokers. The mean age of smoking was 27.0 years, and the smoking index was 600. The mean length of diagnosis was 8 years. The most common complications were hypertension/coronary heart disease (69.7%), diabetes (16.8%), digestive system diseases (11.6%), cerebrovascular diseases (9.7%), and cor pulmonale (7.1%). According to FEV 1 testing, the highest proportion of participants (38.0%) had severe COPD: In total, 29.7% had very severe COPD, 67.8% moderate to severe, and 2.6% mild. The CAT scores indicated that COPD was severely affecting participants' lives (mean [SD] = 22.6 ± 2.7). In terms of treatments, 81.6% of participants were using combination therapies of long-acting muscarinic antagonists + ICS, long-acting β2-agonists There was a clinically significant difference between DG and NDG.
+ ICS, or long-acting muscarinic antagonists + longacting β2-agonists + ICS. The mean leukocyte level of the participants was 8.5 ± 3.5 Â 10^9/L, and the mean percentages of neutrophil, lymphocytes, and eosinophil were 68.5%, 20.2%, and 1.5%, respectively. The participants were divided into three clinical subtypes according The outcomes for the participants in the doxofylline group (DG) and the non-doxofylline group (NDG).
T A B L E 3 Outcomes for participants to doxophylline group and non-doxophylline group, clinical phenotypes, per-protocol population. Table 2). In total, there were 81 exacerbations: 28 in the DG and 53 in the NDG. In the DG, 11 of the exacerbations were in the neutrophilic subtype, five in the eosinophilic, and 12 in the mixed granulocytic. In the NDG, 33 were neutrophilic, five were eosinophilic, and 15 were mixed granulocytic. There was no significant difference between the numbers of acute exacerbations for the neutrophilic, eosinophilic, or mixed granulocytic subtypes in the first month after discharge. In the third month after discharge, there was a clinically significant difference (P = 0.024) in the neutrophilic subtype, with five exacerbations in the DG (6.6%) compared with 17 (22.4%) in the NDG (incidence rate ratio [IRR] = 0.4; 95% CI: 0.2-0.9), but there were no T A B L E 4 Outcomes for participants to doxophylline group and non-doxophylline group, clinical phenotypes, per-protocol population. Table 2 and Figure 2.
For the secondary outcomes of FEV 1 , CAT score, mMRC dyspnea score, and adverse events (COPD-related and overall), there were no significant differences between the DG and the NDG for the three clinical subtypes. In the sixth month after discharge, results for inflammatory cells in the serum and post-bronchodilator spirometry were collected and compared with the stable stage. For the eosinophilic subtype, the FEV 1 and maximal mid-expiratory flow (MEF) levels in the DG were significantly higher than those in the NDG (55.0% vs. 46.5%; 45.0% vs. 34.0%; P > 0.05), but the difference was not statistically significant. The mean neutrophil percentages were significantly different in the neutrophilic subtype (62.9% vs. 66.8%; P = 0.023) and the eosinophilic subtype (55.1% vs. 60.9%; P = 0.017), and the eosinophil percentages were significantly different in the neutrophilic subtype (1.0% vs. 1.8%; P = 0.009). The incidence rates of exacerbations for each clinical subtype at the first, third, and sixth months compared with the baseline are presented in Tables 3 and 4
| Dose adjustment rate of inhaled drugs containing glucocorticoid
One hundred twenty-five participants were treated with inhaled drugs containing glucocorticoid during the 6 months of follow-up, comprising 54 participants in the DG and 71 in the NDG, and there was no significant difference in use of these drugs between the two groups (P > 0.05). During the follow-up period, 42 patients in the DG reduced their dose of inhaled corticosteroids because the disease was well controlled or stable, and T A B L E 5 The dose adjustment rate of inhaled drugs containing glucocorticoid between doxophylline group and non-doxophylline group. However, in the first and third months, there were no significant differences in the dose adjustment rate of inhaled drugs containing corticosteroids between the two groups (P > 0.05). In the sixth month, 25 patients in the DG reported reduced frequency of inhaled glucocorticoid, whereas six participants had increased use. In the NDG, 20 participants had reduced and 19 participants had increased the frequency of their use of inhaled glucocorticoid. The difference between the two groups was statistically significant (P = 0.033). The dose adjustment rates of inhaled drugs containing glucocorticoid for the DG and the NDG are presented in Table 5.
For the neutrophilic subtype, 25 participants in the DG and 35 in the NDG were treated with inhaled drugs containing glucocorticoids. There was no significant difference in the dose adjustment rate between the two groups in the first month (P > 0.05). In the third month, 10 participants had reduced and two had increased their dose in the DG, whereas in the NDG, seven participants had reduced and 17 had increased their dose. There was thus a significant difference between the two groups (P = 0.003). In the sixth month, in the DG, 13 participants had reduced and three had increased their dose, whereas in the NDG, six participants had reduced and nine had increased their dose, representing a significant difference between the two groups (P = 0.016).
For the eosinophilic subtype, 10 participants in the DG and 11 in the NDG were treated with inhaled drugs containing glucocorticoids. There was no significant difference in the dose adjustment rate between the two groups in the first-or third-month follow-ups (P > 0.05). In the sixth month, seven participants had reduced and 0 had increased their dose in the DG, whereas in the NDG, one participant had reduced and four had increased their dose. There was thus a significant difference between the two groups (P = 0.007).
For the mixed granulocytic subtype, 18 participants in the DG and 26 in the NDG were treated with inhaled drugs containing glucocorticoids. However, there were no significant differences in the dose adjustment rate among this subtype during the 6-month follow-up period (P > 0.05). The rates of dose adjustment of inhaled drugs containing glucocorticoid for each of the clinical phenotypes are presented in Table 6.
| DISCUSSION
The occurrence of COPD is determined by both environmental and genetic factors, and it is closely related to chronic inflammation, oxidative stress, imbalance of protease and antiprotease, apoptosis, and so on. The pathogenesis and pathological changes of the condition are complex and heterogeneous. 15 Acute aggravated hospitalization and stable long-term maintenance treatment are the main sources of medical burden, and efficacy, safety, individualization, and high drug prices are the urgent problems to be solved in the treatment of COPD.
In this prospective cohort study, patients with the clinical subtypes of neutrophilic and eosinophilic COPD who were treated with doxofylline in addition to an inhaled bronchodilator were found to be less likely to have severe exacerbations than patients treated with the inhaled bronchodilator only. Graham Devereux previously found that patients with bronchial asthma experienced the greatest benefit after using doxofylline for 6 weeks, whereas patients with COPD observed the greatest benefit from the eighth week. 16 Among the neutrophilic subtype, the mean number of patients with exacerbations was lower in the DG than in the NDG in the third month after discharge (6.6% vs. 22.4%), showing a 15.8% reduction in the risk of severe exacerbations. In the sixth month after discharge, a 13.2% reduction was seen (3.9% vs. 17.1%). Among the eosinophilic subtype, the mean number of patients with exacerbations was lower in the DG than in the NDG in the sixth month after discharge (0.0% vs. 14.8%), showing a 14.8% reduction in the risk of severe exacerbations. According to the current trial, theophylline can reduce the number of severe COPD exacerbations requiring hospital admission, with the most benefit being evident in the subgroup of those patients frequently hospitalized with COPD. 17 This differs from the results of a study published in 2019. 18 It is unknown whether this may be because of the use of a low-dose treatment.
COPD is a condition involving chronic inflammation of the airway that always occurs repeatedly and develops progressively, resulting in the remodeling of the airway structure and the destruction of the alveolar structure. It has been found that the main airway cells involved in the inflammation include activated neutrophils, macrophages, eosinophils, and lymphocytes. 19 Of these, the levels of neutrophils and eosinophils are closely related to acute exacerbation and deterioration of COPD. 20,21 As one of the most important phenotypes of COPD, the inflammatory phenotype is advantageous for assessing acute exacerbation of COPD and evaluating prognosis. The inflammatory response is closely related to inflammatory markers, which can be expressed more accurately. In this study, the mean levels of neutrophils and eosinophils were significantly lower in the DG than in the NDG (62.9% vs. 66.8% and 1.0% vs. 1.8%, respectively) in the neutrophilic clinical subtype, and the neutrophil level was significantly lower in the DG than in the NDG (55.1% vs. 60.9%) in the eosinophilic subtype. These findings are similar to the results presented by Page and Culpitt. 22,23 In contrast, for the mixed granulocytic clinical subtype, there were no significant differences between inflammation cell percentages. Our results support that doxofylline can reduce acute aggravation and deterioration of neutrophilic and eosinophilic clinical subtypes and reduce airway inflammation. Other researchers using doxofylline in the treatment of COPD have found that it can increase the release of IL-10 and exhibit an antiinflammatory and immunomodulatory effect, inhibit the release of inflammatory mediators by mast cells and the oxygen reactive of neutrophils, 24 inhibit the translocation of proinflammatory transcription factor nuclear factor B (NF-κB) into the nucleus and reduce the expression of inflammatory genes, 25 and promote the apoptosis of neutrophils in vitro by reducing the anti-apoptotic protein Bcl-2. 26 Doxofylline can decrease the recruitment of airway inflammatory cells and the release of inflammatory mediators; reduce leukocytes, neutrophils, and eosinophils in a variety of ways; and reduce airway inflammation and hyperresponsiveness in patients with COPD. In a study by Rajanandh and a pharmacological trial in Italy, the use of corticosteroids was shown to be lower in patients who took doxofylline as part of their respiratory disease treatments than those who did not. 27,28 In the current study, 125 of the participants were treated with inhaled drugs containing glucocorticoid during the 6-month follow-up period. In the sixth month, there was a significant difference between the dose adjustment rates of inhaled drugs containing glucocorticoid for the DG and the NDG (20.0% vs. 16.0% reduced, 4.8% vs. 15.2% increased; P = 0.033). In the third month after discharge, there was a significant difference in the adjustment rates among the neutrophilic subtype (16.7% vs. 11.7% reduced, 3.3% vs. 28.3% increased; P = 0.019). In the sixth month, the drug dose adjustment rates were 21.7% vs. 10.0% reduced and 5.0% vs. 15.0% increased (P = 0.016). For the eosinophilic subtype, there was also a significant difference in the sixth month (33.3% vs. 4.8% reduced, 0.0% vs. 19.0% increased; P = 0.019). However, there were no significant differences in dose adjustment rates for the mixed granulocytic clinical subtype during the 6-month follow-up period (P > 0.05). The rate of increasing drug dose in the DG was significantly lower than that in the NDG. This shows that the reported doses of inhaled drugs containing glucocorticoid for the neutrophil and eosinophil subtypes were significantly lower in the DG than that in the NDG, which supports that the use of doxofylline as an adjuvant therapy can reduce the demand for corticosteroids. Ford 29 found that theophylline can reduce the dosage of glucocorticoids and improve corticosteroid resistance in patients with COPD. The combined use of these drugs has anti-inflammatory effects, which can synergistically induce and enhance the responsiveness of steroid hormones to reduce the dosage of corticosteroids. 30 This may be related to the fact that theophylline can activate histone deacetyl 2 in the macrophages of patients with COPD, 28 restoring its activity to normal levels to increase glucocorticoid sensitivity and reverse corticosteroid resistance. At the same time, it works together with glucocorticoids to enhance the transcription of inflammatory cell genes and reduce the synthesis of pro-inflammatory mediators. 31 Pulmonary function is an important and intuitive index for measuring airflow limitation with good repeatability. It has great significance in the diagnosis, severity evaluation, disease progression, prognosis, and treatment response of COPD. 32 In this study, when comparing the changes of pulmonary function between the stable stage and the sixth month after discharge, we found an interesting phenomenon: FEV 1 and MEF levels in the DG for eosinophilic COPD were higher than for the other subtypes, but there were no significant differences between the three clinical subtypes. This suggests that doxofylline can delay the decline of pulmonary function and that the protective effect on pulmonary function in patients with eosinophilic COPD is better than that for the neutrophilic and mixed granulocytic subtypes, which is similar to the findings presented by Lal. 33,34 MEF is mainly determined by the non-force-dependent part of FVC, which can reflect the severity of airway obstruction and indicate the respiratory reserve strength, muscle strength, and dynamic level of a patient. Related studies have found that doxofylline has a direct relaxing effect on bronchial smooth muscle. 35 It can strengthen the contractile force of the ventilator and eliminate ventilator fatigue, 36 promote the movement of airway cilia, enhance the speed of mucociliary transport, and remove airway secretions. 37 Drug safety is one of the core issues in the treatment of COPD. Theophylline is mainly metabolized through the cytochrome P450 microsomal enzyme system of CYP1A2 in the liver. 38 Doxofylline lacks the ability to interfere with the cytochrome enzymes CYP1A2, CYP2E1, and CYP3A4, which prevents the significant interaction of other drugs metabolized in the liver through these pathways to produce a stable serum concentration. 39 Recent pharmacological studies have shown that doxofylline does not directly inhibit any HDAC enzyme or any PDE enzyme subtype, nor does it antagonize any known adenosine receptors. This may explain why the safety of doxofylline has been improved. 40 During the 1-week drug adaptation period at the beginning of the current study, in the DG, acid regurgitation occurred in two patients (2.9%), who each had a history of chronic gastritis, and palpitation occurred in three patients (5.9%) with a history of arrhythmia. This suggests that doxofylline should be used cautiously in patients with chronic gastritis or peptic ulcers and arrhythmia. During the 6-month follow-up period, there were no serious adverse reactions among the DG, indicating that the incidence of adverse reactions is very low and the safety of clinical use is high.
| Advantages and limitations
The design of this study had several advantages. First, unified standards and procedures were adopted and implemented by a respiratory professional attending physician with more than 5 years of working experience and skilled knowledge of how to operate the pulmonary function meter. Second, the data collection was jointly undertaken by two researchers to ensure that the observation forms were filled out in a detailed and objective manner. All the survey data and experimental data collected were input into the Epidata database. After data entry, consistency testing (comparison of differences after double entry of the questionnaire) and reliability testing (quality control of the input REC files) were carried out. Third, the researchers working on the study did not participate in the formulation of treatment plans. For follow-up on the basis of chronic disease management, the participants had a high degree of fit, and the follow-up intervals were designed to mitigate the risk of patients forgetting and thus control the rate of loss of follow-up and improve the compliance of participants.
In terms of limitations, the research was a singlecenter study, so the disease severity distribution and treatment of the study population may not represent the general disease population. In addition, the follow-up period was 6 months, which is relatively short, and it is difficult to decide on causal effects about the changes of the level of the cell types. Finally, in consideration of medical accessibility and the disease treatment burden, the effects of doxofylline on mild and moderate acute exacerbation were not quantified.
| CONCLUSIONS
According to the findings of a 6-month follow-up study, doxofylline, when used as an adjuvant drug, displays a favorable therapeutic effect on COPD patients with neutrophilic and eosinophilic clinical subtypes. It is capable of minimizing severe acute exacerbations, and in the case of neutrophilic subtypes, a consistent reduction in the number of exacerbations was observed after 3 months of use, whereas eosinophilic subtypes displayed a consistent reduction in exacerbations after 6 months, although further long-term studies may be necessary to establish a concrete causal relationship. In addition, doxofylline can reduce reliance on glucocorticoids and promote longterm reduction of inflammation (after at least 3 months of usage). Moreover, the incidence of adverse reactions associated with its use is low, making it a safe choice for clinical treatment.
AUTHOR CONTRIBUTIONS
Mei-Feng Chen: the conception and design of the study; acquisition of data; drafting of the article. Wei He: defining the cases and revising them critically for important intellectual content. De-Sheng Huang: analysis and interpretation of data. Hui Jia, Zhao-Shuang Zhong, Nan Li, Shan-Shan Li: acquisition of data. Shu-yue Xia: revising them critically for important intellectual content; final approval of the version to be submitted.
CONFLICT OF INTEREST STATEMENT
The authors declare that they have no competing interests.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.
ETHICS STATEMENT
This study was conducted in line with the Helsinki Statement and approved by the Ethics Committee of Shenyang Medical College Hospital. All patients provided written informed consent before undertaking the study. | 6,876.8 | 2023-08-10T00:00:00.000 | [
"Medicine",
"Biology"
] |
Safety and Security Concept for Software Updates on Mixed-criticality Systems
—The raising connectivity of critical embedded systems makes them vulnerable to cyber-security attacks that compromise not only privacy but also safety. This results in intricate dependencies between functional safety and security, and higher demands to address both disciplines simultaneously. However, there are still many gaps on the common application of functional safety and cyber-security standards. Over-The-Air (OTA) software updates are a clear example of this challenge. While the installation of regular software upgrades is a crucial cyber-security practice to keep the system up-to-date with the latest security patches, they might involve high re-certification efforts and costs from a safety standpoint. In this paper, a safety and security concept for software updates on mixed-criticality systems is presented. Particularly, a combined safety and security risk assessment on an automotive use case is performed and risk mitigation measures proposed.
I. INTRODUCTION
The rapid evolution of hardware and software in Mixed-Criticality Cyber-Physical Systems (MCCPS) has surpassed the capabilities of current safety-and security-oriented design methodologies. Generally, standards used in the certification process of such systems reflect the state of practice in industry rather than the state of art. As a result, they do not evolve as fast as technology, and they do not provide explicit guidance for next generation architectures yet [1].
Over-the air (OTA) updates are a clear example of this trend, a technology commonly used in consumer electronics market that is now being adopted by critical industries such as the automotive [2], [3]. The benefits of over-the-air updates improve maintenance (e.g., bug fixing, security patching) and give enhanced flexibility to the systems, making it a key technology to stay competitive in the market. However, software updates or modifications in general have a very different treatment and relevance on the safety and security-critical domains [4]. This difference is highly motivated by the asymmetric impact that in-service experience has in both domains [5], [6]: • In the safety domain, product operation hours and history, together with field failure data, are key indicators to gain evidence on the absence of systematic design faults in a product. As a result, confidence on a system increases with its time in service. • In the security domain, on the contrary, new security flaws and weaknesses are disclosed every day and the security trust level decreases over time.
As a consequence, software updates are a required practice according to security standards in order to regularly solve new security vulnerabilities. On the contrary, modifications on safety-critical systems are discouraged and usually limited to unavoidable maintenance activities like solving faults that resulted in incidents or adapting to new or amended safety legislation (e.g., IEC 61508 [7]) and they might involve high re-certification efforts and costs [4], [6]. In addition, the increased connectivity of critical systems result in intricate dependencies between safety and security and security threats and vulnerabilities could jeopardize functional safety. For all these reasons it is increasingly important to simultaneously address safety and security needs from early design stages.
The UP2DATE European project [8], [9] seeks to address the main dependability challenges brought by OTA updates to the critical domain, with special focus on safety, security, availability, maintainability, and the increasing platform complexity of emerging heterogeneous Multiprocessor System on a Chip (MPSoC) devices. This paper presents a safety and security concept of a mixed-criticality software updates enabled system, based on the common application of IEC 61508 [7] and IEC 62443 [10] for functional safety and cybersecurity respectively. To this end, a combined safety security risk assessment methodology is presented. This methodology is then applied to the UP2DATE architecture, a mixedcriticality system enabling OTA updates, presented in [8], [9] and, as a result, safety and security risks are identified. Finally, the safety and security countermeasures that shall be applied to reduce system risks are defined. All this process is followed based on a next generation automotive use-case that combines advanced high-performance functionality with critical functions. This paper is organized as follows: after this introduction, the employed safety and security risk assessment and treatment methodology is presented. After that, the UP2DATE architecture is described and the system concept specified. Following, the safety and security risk assessment is provided. Lastly, related work is presented and conclusions drawn.
II. METHODOLOGY
For the systematic safety and security risk assessment, the well-known ISO 31000 [11] and ISO 27005 [12] standards are considered. This process is aligned with the risk assessment method described by ISO/SAE 21434 [13], which also references ISO 31000. It should be pointed out that the IEC 62443 [10] standard also recommends (among others) the ISO 27005 as basis for risk identification and assessment. Figure 1 shows the followed high-level safety and security risk assessment methodology. Besides, for the detailed risk analysis, the MAGERIT [14] (version 2) risk analysis and management methodology, elaborated by the Spanish National Cryptologic Centre (in Spanish "Centro Criptológico Nacional, CNI") is used. This methodology, which extends and tailors the requirements and processes of ISO 27005 [12], is endorsed and recommended by both national and international cybersecurity agencies, such as the European Union Agency for Cybersecurity (ENISA) and INCIBE (in Spanish "Instituto Nacional de Ciberseguridad"). Table I shows the employed threat catalogue. III. SYSTEM CONCEPT SPECIFICATION Critical system development processes always starts with the the system concept specification aligned with the lifecycles dictated by standards. This section summarizes the UP2DATE architecture [9] that supports safe and secure software updates for both intelligent and resource intensive mixed-criticality systems as well as for legacy control devices. To this end, UP2DATE architecture is characterized by the inclusion of a high-performance mixed-criticality gateway in the system. The aim of this gateway is twofold: • To provision the system with higher computation power.
This allows consolidating in a single powerful computer the growing range of software functions that often present different safety and security implications and reducing in this way the overall number of control units present in the system. In addition, the increased performance allows to handle next generation of autonomous and intelligent systems that often rely on complex algorithms that demand high computation capabilities, as well as the execution of mixed-criticality functions. • To enable the remote update of existing control devices in a secure way. The end-devices are commonly resource constrained (legacy) devices and therefore provide low computation capabilities. In this context, these devices might not be able to execute and enforce the required technical security functions. As compensating measure, these devices are deployed behind a security-aware gateway that manages their remote software updates enforcing defence-in-depth as required by IEC 62443. The gateway has the capability and flexibility to connect and update multiple and diverse end-devices, a solution that is scalable across different existing processors. Therefore, the UP2DATE architecture is comprised of a high-performance gateway that connects to a server and multiple end-devices as depicted in Figure 2 that shows the update cycle explained below. The update-cycle is comprised of 10 steps classified into three main phases [9]: (i) Design and release of updates, (ii) Update deployment phase and (iii) Runtime phase. The cycle starts at the moment at which the new or updated software component is available. Software modularity is adopted as the design principle to facilitate software modification in line with the recommendations of functional safety standards. A precondition is that each critical component is designed and developed according to the safety-security requirements for their target safety integrity and security level. In step 2, the design time checks are performed. After that, the software update is released (step 3). In the update deployment phase, compatibility and integration tests are carried out before authorizing update installation in the device. The update is then transferred from the server to the gateway and the installation accomplished. Just after, the correctness of the new software installation, configuration and dependencies with other software updates is verified. Finally, in the runtime phase, offline and online monitoring services are executed. On the one hand, online monitoring checks that the system meets its specification, and that safety and security metrics are within their safe and secure range at system operation. On the other hand, the offline monitoring service continuously sends data to a remote server for further analysis that serves to detect system malfunction.
IV. CONTEXT ESTABLISHMENT
Prior to the safety and security risk assessment, the context shall be defined, which includes the description for the system or product under test, as well as the circumstances and conditions in which the study is performed. This risk assessment focuses on a gateway and an end-device in the scope of an automotive use case. The system under evaluation, composed by these components, will provide functions such as diagnosis and safety, energy and thermal management, and driver interface among others. In addition, the gateway hosts diverse automotive grade domains such as Advanced Driver Assistance Systems (ADAS), In-vehicle infotainment (IVI), safety co-pilot etc. that are generally compute intensive and therefore need higher performance than that provided by regular automotive safety Electronic Control Units (ECUs).
The end-devices under consideration include safety functions and are therefore compliant with the ISO 26262 standard for Road Vehicle Functional Safety requirements, with the highest Automotive Safety Integrity Level (i.e., ASIL D). The gateway instead, can host both safety and non-safety related functions, following the previously defined mixed-criticality architecture on top of a certified hypervisor, that provides the required separation. In addition, this gateway also includes the update and monitoring middleware for update execution. It should be noted that in the scope of this analysis, the complete automotive case study is considered fail safe, i.e., a safe state can be reached either by the safety functions or diagnostics.
Concerning security, the system does not provide any security capability, except that a Virtual Private Network (VPN) is used for the communication of external entities with the gateway. Besides, the gateway and the end-devices are connected and communicated by a CAN bus. For the analysis, a single end-device is considered. Figure 3 shows a simplified application and deployment of the system. As depicted, an OBD-II connector, providing access to the internal bus is usually installed in the vehicle.
V. SAFETY & SECURITY RISK ASSESSMENT
A risk assessment process is a systematic identification and evaluation process of all risks associated to a given scenario or purpose. The risk assessment processes play an important role in the safety and security management processes, since the identification and qualification of the safety failures, security threats and risks are essential when it comes to the protection of assets and people. This task shall be jointly addressed by all entities involved in the safety and security management process of the system under consideration.
A. Risk Identification
Risk identification is the process of determining the assets, the dependencies among them and the identification of safety and security threats associated to them (see threat catalogue in Table I), which may impact or compromise a given system property, denoted dimension. These dimensions are, according to MAGERIT [14]: Confidentiality (C), Integrity (I), Availability (A), Authenticity of service users (A S), Authenticity of data origin (A D), Accountability of service use (T S) and Accountability of data access (T D). Figure 4 depicts the adopted threat model. The identification and definition of the assets is an essential task to be performed in the risk assessment process. Assets are the resources included in the (sub)system or related to it that are necessary for the organisation (asset owner) to operate correctly and achieve the objectives proposed by its management. In this task, the asset classification and definitions provided by "Said to be material, physical goods, designed to directly or indirectly support the services provided and for the execution of computer applications". Communication networks [COM] "Means of transporting data from one place to another." the MAGERIT [14] methodology are used. More particularly, the five type of assets described in Table II are considered. It must be pointed out that these assets interplay jointly for achieving the goals of each use case. For instance, the risk analysis of services may depend on the analysis of other assets. Therefore, in this risk assessment process, the dependencies among assets in each use case are also examined. Following this asset categorization, on the one hand, Table III shows the identified assets in the gateway component. On the other hand, Table IV shows the identified assets in the end-device component.
B. Risk Analysis
After the identification of assets, the security threats associated to such elements (shown in Table I) are determined. The association of safety and security threats to the assets, as well as impacted dimension, is accomplished depending on the asset type, as specified by MAGERIT [14]. In the risk analysis stage, the potential impact in each of the affected dimension and the failure/attack likelihood is estimated. To this end, two complementary strategies are used for safety and security. For safety, a simplified Failure Mode Effects and Critically Analysis (FMECA) is developed with the focus on the identified assets and the potential causes obtained from the MAGERIT catalogue. A Failure Mode and Effects Analysis (FMEA) is a systematic procedure for the analysis of a system in order to identify the potential failure modes, their causes and effects on system performance. FMECA is an extension to the FMEA to include means of ranking the severity of the failures modes to allow prioritization of countermeasures. This is done by combining the estimation of the severity of failure effects with a ranking estimation of the probability of the failure cause and the ability to detect potential failures on time.
On the contrary, in security, the computation of a finegrained attack probability cannot be computed. Therefore, an attack likelihood estimation is done, in which an approximated attack potential is valued. To this end, several attack factors, such as required expertise, equipment, and window of exposure are considered.
1) Potential Impact: The assessment of the potential impact is performed considering all the dimensions which might affect or compromise an asset. This metric is evaluated from 0 to 10, an impact of 0 implying no impact at all, while 10 indicating catastrophic consequences. Figure 5 shows the potential impact of G.S.01 (5a), G.SW.03 (5b) and ED.COM.01 (5c) assets in all dimensions. The most impacted dimension is integrity, since an error or manipulation in the services, software and hardware can directly result in a critical system safety issue. Availability has, in general, a medium impact. While monitoring data and safety and security properties are crucial for safety function diagnostics, as the system is fail-safe, it could be moved to a safe state whenever this data is not available. Moreover, the authenticity of users and data origin is critical, any unauthorized access and use might seriously compromise the overall system safety. Finally, regarding, accountability, any undetected and unattended service use can cause major disruptions. Sometimes, it may also imply contract violations.
2) Likelihood: The assessment of likelihood is the estimation of the probability of failure or the effort required by an attacker to perform an attack. For this purpose, it is assumed that design and deployment best practices are applied, for example, disabling other communication protocols and closing unnecessary communication ports. For safety related failures, these values are obtained from a FMECA. The assumptions and the scenario described previously are also considered. Figure 6 shows the failure and attack likelihood of G.S.01 (6a), G.SW.03 (6b) and ED.COM.01 (6c) assets. As observed, unintentional failures and errors present, generally speaking, high likelihood, specially for E.2 (administrator errors), E.4 (configuration error), E.8 (malware diffusion) and E.24 (system failure due to exhaustion of resources). From the security point of view, the CAN bus (ED.COM.01) entails remarkable attack likelihood levels, notably for A.11 (unauthorised access), A.14 (eavesdropping) and A.25 (theft). Finally, the system is also highly susceptible to hardware and software failures (I.5).
C. Risk Evaluation
In this step, the resulting risk associated to each asset is calculated. For it, the potential impact and failure/attack likelihood evaluation results are used. Risk is evaluated as shown in Equation 1 and measured from 0 to 10. A risk map for each asset is then built, which indicates the risk level in each dimension for each threat. The computed risk maps for the previously presented assets are shown in Figure 7. As can be seen, the gateway assets G.S.01 (7a) and G.SW.03 (7b) present high levels or risks for errors and failures. The authenticity, traceability and integrity properties of the component might severely be compromised. [Re-]routing of messages (A.9) attacks also represent a danger for G.S.01.
On the contrary, ED.COM.01 (7c) presents medium risk levels which might jeopardize the integrity and authenticity dimensions. The security threat to be tackled is the unauthorised access (A.11) to the system. The CAN bus is also susceptible to administrator errors (E.2), configuration errors (E.4), [Re-]routing errors (E.9) and sequence errors (E.10).
VI. RISK TREATMENT
The goal of the risk treatment phase is to address the previously identified risks. Usually, safety and security measures are implemented to decrease them. Nevertheless, other strategies might be adopted if the cost of implementation is high, finding the appropriate balance. Therefore, through a risk versus cost analysis, a risk threshold is defined. All risks above such threshold shall be addressed and mitigated, while the risks below can be disregarded. At this stage, we consider three levels of risk: low (1 to 4), medium (5 to 7) and high (8 to 10). In this safety and security concept, failure detection or avoidance countermeasures, in addition to security measures are defined for all those threats entailing medium or high risk.
A. Functional Safety & Cyber-security Management
In order to avoid systematic faults during the different phases of the development process and to develop and maintain a secure product, the definition and enforcement of a Functional Safety and Cyber-security Management process is recommended. To this end, the safety and security methodology requirements from IEC 61508-1 (clause 6) [7] and IEC 62443-4-1 [10] should be considered.
B. Diagnostic Mechanisms
Runtime error detection is implemented through diagnostic mechanisms that achieve the required diagnostic coverage (DC) for each integrity level and architecture design. The particular measures could be selected from IEC 61508-2 and -3 Annex A according to the required diagnostic coverage [7]. At a high-level, the considered diagnostic mechanisms can be classified as follows: • Autonomous hardware diagnostics: the hardware platform includes autonomous diagnostic mechanisms. • Software-commanded diagnostics: the system includes hardware diagnostic components to be commanded by software including features for the diagnostic of independence violations. • Platform independent diagnostics: additional diagnostics for hardware components and software applications. • External diagnostics: system diagnostics external to the gateway, e.g., off-chip redundancy with majority voter or external watchdog for temporal and logical monitoring with independent clock source. • Independence violations detection: Measures for the detection of independence violations are implemented by hardware diagnostic mechanisms (e.g., MPU, watchdog) and by the online monitoring that supervises the correct temporal behaviour of each partition and handles the external watchdog in case of execution time exceeding events.
C. Independence of Execution
Independence of execution is a crucial property of mixedcriticality systems, and it shall be guaranteed both on the spatial and temporal domains. To this end, based on previous work done in EU projects of the mixed-criticality cluster such as MultiPARTES [15], PROXIMA [16], DREAMS [17] and SAFEPOWER [18], the following services and mechanisms should be provided by the hypervisor: resource management, time synchronization, inter-partition communications, fault management and logging and safe system start-up and shutdown.
D. Safe and Secure Update
The update shall be deployed following a predefined safe and secure procedure. In order to guarantee safety and security, an important aspect of this procedure is the verification and validation of the changed software.
E. Safe and Secure Configuration
System configuration shall be defined at design time by safety and security system architects and programmed using qualified tools. The UP2DATE architecture has the particularity that the configuration may need to be adapted with a software update. However, in all cases, this configuration shall be defined and validated at design phase, and it shall be protected against unintended runtime modifications out of the updating process.
F. Compatibility and Integration Check
Compatibility and integration check is a crucial technique for the verification of new updates. The overall goal of this check is to verify that all software components meet with the constraints defined in their safety and security properties, which include requirements for their integration with existing software modules and with the hardware platform and its configuration.
G. Safe and Secure Communications
Different security zones will be connected through conduits that provide the security functions that enable the secure communication. All zone boundaries are supervised and managed through firewalls, in which a security policy is enforced. In these security policies (in each firewall) all network traffic shall be denied by default, and legitimate and required communications allowed.
H. Online Monitoring
Online monitoring will verify, based on runtime information, that the system meets its specification (and more specifically, its safety and security properties) before, during and after an update and that it is therefore operating within safe bounds according to the constraints of each compliant item. In this way, it is possible to detect residual specification and implementation faults in software and system integration faults.
This monitoring has a direct impact on system safety and therefore, the system shall be capable of reacting within the Process Safety Time (PST), that is, before a hazardous event is caused. This is the reason why online monitoring runs on the gateway itself (to mitigate the communication overhead of sending the data to an external device).
I. Offline Monitoring
Offline monitoring is used for security fingerprinting, which is devoted to the detection of performance anomalies that could result for instance, from malicious code installation during an update. It should be confirmed that the software update does not contradict the system reliability and operability features. For this purpose, the system is monitored in two phases, following the approach presented by Cherkasova, Ludmila, et al. [19].
J. Access Control Scheme
The access control scheme manages which entity, such a person or machine, is allowed to communicate and access the resources included in the system. For this purpose, these entities shall firstly be authenticated. On the one hand, the server and the gateway will use a Public Key Infrastructure from which the required authentication certificates will be generated. On the other hand, a symmetric cryptography-based challenge-response authentication mechanism is used for the authentication between the server and the end-device. Thus, a multi-factor authentication is required to perform an update in the end-device, a valid certificate for the connection to the gateway and a master key.
Besides, all inbound and outbound communications will be regulated by an integrated firewall in the gateway. All allowed secure communications and ports will be included in a whitelist. By default, all other communications will be blocked.
K. Security Auditor
The security auditor is a SCAP-enabled agent integrated within the system that is used to identify software flaws, vulnerabilities, and security-related misconfigurations. The Security Content Automation Protocol (SCAP) [20] is a group of standards defined by the National Institute of Standards and Technology (NIST) that enable the automated vulnerability management, measurement, and policy compliance evaluations. The auditor scans and checks (periodically or upon request) the system for vulnerabilities and weaknesses according to the SCAP specifications.
VII. RELATED WORK
Kavallieratos, G, Sokratis K, and Vasileios, G [21] presented a comprehensive survey of safety and cyber-security co-engineering methods. In this work, 25 methods related to safety and security risk analysis methods were analyzed. Nevertheless, as stated by the authors, a generic application and domain independent methodology should be used, such as the one defined by the ISO 27005 standard [12]. In this sense, four different well-known detailed risk analysis methods were studied by Syalim, A, Yoshiaki H, and Kouichi S [22], which are: Mehari, Magerit, NIST800-30 and Microsoft's Security Management Guide. Currently, Microsoft uses STRIDE methodology for products threat modelling [23].
The STRIDE methodology is widely used for the assets and threats identification. An IEC 62443 compliant risk analysis was presented by M.Fockel et al. [24] for the development of industrial control systems. This methodology was also used by Zhendong Ma and Christoph Schmittner for the threat modelling of connected and intelligent vehicles [25] and by A. Vasenev et al. [3] for a automotive case considering specific OTA threats. Nevertheless, it has to be pointed out that the STRIDE methodology addresses the system elements (assets) and threats identification, it does not cover the impact and likelihood estimations, nor the risk computation.
In order to tackle this gap, J.P. Monteuuis et al. from PSA Group, Telecom ParisTech and CEA LIST, propose the SARA framework for threat modelling and risk assessment for driverless vehicles. Although also based on STRIDE, the authors extend such method to define systematic threat analysis and risk assessment process, in which the safety issues are also considered. For this purpose, the severity, attack likelihood and controllability parameters are evaluated. SARA is divided into four main phases: (1) Feature definition; (2) Threat specification; (3) Risk assessment; (4) Countermeasures.
As far as integrated safety and security risk assessments are concerned, ABB [26] proposes a safety and security addressing methodology for safety-critical systems. As stated, safety and security should jointly be managed. This approach was also supported by S. Plósz et al. [27]. For the combined assessment, a combined catalogue composed by a failures catalogue (based on FMEA) and an attacks catalogue (based on STRIDE) was created. This method enables efforts saving, raising issues which may not be identified instead and multidimensional decision making. Finally, an argumentation case for safe and secure automotive OTA updates was presented by T.Chowdhury et al. [28].
VIII. CONCLUSIONS
Software-intensive safety-critical systems are facing new needs. Similar to consumer products, OTA updates could provide higher flexibility and maintainability capabilities, including security weaknesses and bugs fixing. However, it presents several technical challenges, as well as safety and security risks. Although required for security, software modifications and upgrades on safety-critical systems are commonly not recommended. A safety re-certification may also involve high efforts and costs.
In this paper, a safety and security concept for software updates on mixed-criticality systems is presented. For this purpose, a safety and security risk assessment of a next generation automotive system is performed, composed by a gateway and an end-device. The safety and security analysis and measures defined in this concept will be further developed in the UP2DATE European project and validated at the automotive and railway case studies. | 6,168.4 | 2021-11-24T00:00:00.000 | [
"Computer Science"
] |
The new science of moral cognition : the state of the art
Título: La nueva ciencia de la cognición moral: estado de la cuestión. Resumen: La necesidad de realizar aproximaciones multidisciplinares al estudio de la naturaleza humana es ampliamente aceptada. Esta perspectiva se ha manifestado especialmente prolífica en el campo de la psicología moral. A pesar que el estudio de temas morales ha sido materia recurrente de las humanidades y de las ciencias sociales, solo la posterior integración de diferentes disciplinas científicas en la ciencia de la ―psicología moral‖ parece haber sido determinante para el desarrollo de este campo de estudio. Así, en los últimos diez años, diversos estudios procedentes de las ciencias cognitivas, la filosofía experimental, la primatología, la psicología clínica y del desarrollo, las ciencias económicas o la antropología han dado lugar a lo que parece ser una ―nueva era‖ en el estudio de la moralidad. En este artículo, revisamos los hallazgos más importantes que constituyen el ―estado del arte‖ de la psicología moral, con el objetivo de facilitar una mejor comprensión acerca del funcionamiento de la mente moral. Palabras clave: psicología moral; juicio moral; cognición social. Abstract: The need for multidisciplinary approaches to the scientific study of human nature is a widely supported academic claim. This assumption has proved to be especially successful in the field of moral psychology. Although studies of moral topics have been ubiquitous in both humanities and social sciences, it is not until the integration of different scientific disciplines in the convergent science of moral psychology that the study of morality seems to start its flourishing age. Thus, in the last ten years, a growing body of research from cognitive sciences, experimental philosophy, primatology, clinical and developmental psychology, economy and anthropology have made possible a ―new era‖ on the study of morality. In this paper, we review the most striking findings that constitute the ―state of the art‖ of moral psychology, with the aim to facilitate a better understanding of how the mind functions in the moral domain.
Introduction
Recent multidisciplinary approaches to the nature of morality have given rise to important findings, constituting what appears to be a -new era‖ in this topic.This was largely possible because a priori theoretical models of morality are now required to be complemented with experimental data.But, even before the current -boom‖ of moral research, there was an important tradition in moral psychology, with the paradox that it was not recognized as a research topic per se.In other words, during the last century, psychology has made remarkable progress in the study of morality through the study of topics such as empathy, aggression, fairness, norms and obedience without considering them aspects of an integrated moral field. 1 In this context, an important particularity of morality is that it has been traditionally studied as a part of developmental and educational psychology.Thus, developmentalists believed that children were active actors who constructed much of their morality by themselves.For Piaget (1932Piaget ( /1965) ) the constructive processes through which children develop respect for rules (their moral understanding) is explained through the progressive development of psychological mechanisms for information processing.The work of Piaget was developed (never better expressed) by Lawrence Kohlberg (1969), who claim that moral reasoning was developed through a progressive and fixed sequence of stages in which children improve their reasoning abilities.Consequently, this model explains children's ability to reason philosophically about moral (justice) problems.
Despite the fact that developmentalists' approaches made important contributions to the study of morality, such a rationalist view of our moral seems to undermine the role of emotional processes in the moral domain.Wilson (1975Wilson ( /2000) ) argued that biology plays a leading part in moral life by providing our species with brain structures that allow us to experience moral emotions in the presence of certain events.However, it was not until the shift of the -affective revolution‖-with its emphasis on the study of the automatic affective systems of the mind-and the rebirth of sociobiology as evolutionary psychology that the study of the psychological processes underlying our moral sense suggested whether an emotional explanation of morality was indeed possible. 2 Indeed, since the modern cognitive sciences, the idea that many of our social behaviors can be explained as the result of automatic processes has found several theoretical and empirical supports (Bargh, 1994).Thus, it is argued that automatic stimulus evaluation occurs at a very early stage in information processing, and that the process is fast, unintentional, efficient and occurring outside of awareness (Öhman, 1987).This claim has direct evolutionary connotations: automatic processes are phylogenetically older than controlled processes, which are slower, effortful and often conscious.
This perspective was reinforced by neuroimaging research and the results obtained from inter-species comparative studies.Thus, from the field of neuroscience, Damasio (1994) showed that patients who suffer lesions in specific brain regions display social deficits (in particular, in their capacity for social decision making).According to the field of anales de psicología, 2014, vol. 30, nº 3 (octubre) primatology, research by de Waal (1996) and collaborators has proved to be prolific, making it possible for Darwin's seminal theories about the -moral sense‖ to find important empirical support.
Current state of research on moral psychology
Over the last ten years, discoveries about intuitions, emotions and the particular ways in which automatic mechanisms interact with rational processes have led to what appears to be the beginning of a new era in the study of morality.Although there is a broad agreement that morality is an exclusively human phenomenon, the absence of a standard comprehension about the innateness of the moral sense is still an object of scientific debate.Therefore, this review is organized around a preliminary distinction between the study of morality at the level of capacity and the study of moral cognition at the level of content.
The study of morality at the level of capacity
Consequently, there are two different ways in which the innateness of morality can be accounted for.Firstly, there is the level of the cognitive and affective mechanisms that are involved in moral cognition (the capacity level).Secondly, there is a different level that refers to the psychological predispositions that bias the content of moral judgments and moral systems (the content level).
According to the first perspective, the fact that H. sapiens is the only living species that can be considered a moral being has been a central claim in biological approaches to morality.In the case of morality, it seems that our species has evolved some psychological mechanisms or -innate hard-ware‖ that is not fully present -that is, at least not to the same degree-in any other animal species.This prediction has found support in findings from inter-species comparative studies.Hence, modern sophisticated cognitive faculties appear to be structured on more basic mental capacities that are shared with other primate species.With regard to this issue, parsimony suggests that, if some psychological mechanisms involved in moral cognition are also present in our closest biological relatives, it is feasible that these mind traits evolved before the appearance of humans (Nadal et al., 2009).
Indeed, many non-human primates display human-like methods to deal with conflicts inherent to their social life.Specifically, behaviors such as reciprocity, reconciliation, consolation, conflict intervention or mediation are well documented in several comparative studies, to such an extent that they have been considered the -building blocks‖ of morality (Flack & de Waal, 2000).Each of these blocks appears to include different cognitive and affective mechanisms that seem to be correlated with the complexity of the behavior and, interestingly, the taxonomical place of the genre.For example, some non-human primates appear to be sensitive to effort (van Wolkenten, Brosnan, & de Waal, 2007) and capable of detecting and punishing cheaters, abilities that suggest the presence of retributive emotions toward inequity (Brosnan & de Waal, 2003).Likewise, behaviors such as reconciliation, consolation or conflict intervention are associated with an understanding of the distinction between self and other (de Waal, 2007), the ability to make some inferences from the physical world (Tomasello, Call, & Hare, 2003) and even a cognitive level of empathy, which implies an appraisal of the other´s contextual/emotional situation (Preston & de Waal, 2002).
However, as noted by Darwin, humans' and nonhumans' social behaviors differ substantially in their degree of complexity.For instance, it has been suggested that cognitive capacities, such as symbolic thought and the ability for abstraction, are fundamental in humans' moral cognition.According to Tse (2008), both the capacity to symbolize and the capacity to mentally construct categorical abstractions favored a new scenario in which any event (or individual) that is symbolized could be reconceived as a categorical instance (e.g., good or evil, right or wrong, acceptable or unacceptable).
In addition, neuroimaging results support this account.Moll and Schulkin (2009) found that ancient limbicneurohumoral systems of social attachment and aversionwhich are involved in non-human primate behaviors such as altruism or aggression-are tightly integrated with -newer‖ cortical mechanisms in the making of moral sentiments and values.This suggests that the motivational-emotional neural mechanisms that underlie prosocial behaviors in other species acquire a new dimension when they are integrated with brain regions associated with complex social knowledge, supporting the hypothesis that morality is not a unified neurological phenomenon (Parkinson et al., 2011).
Morality understood as a set of innately codetermined social concerns
The debate about the innateness of morality seems to become more controversial when it refers to the specificity of the biological influences in the content of morality.As Sripada (2008) points out, the discussion about -content na-tivism‖-which refers to the specific set of actions that moral norms prohibit, permit or require-does not need to be reduced to a contraposition between the human mind as a blank slate versus the mind as fully programmed by genes.Although empirical evidence supports that the -ingredients‖ that make moral life possible are indeed given by evolution, it has not yet delimited the precise extent to which biology can also constrain human's moral -products.‖In the present section, three approaches to the innateness of the content of morality are reviewed: (a) moral judgments understood as evaluations driven by innate principles; (b) moral judgments understood as automatic-affective evaluative processes; and (c) moral norms understood as psychologically constrained cultural constructions.
Moral judgment understood as an evaluation driven by innate principles
The first approach to the innateness of moral content argues that we are born with a moral faculty akin to the language faculty.Thus, it has been proposed that moral judgments are structured on a set of implicit principles that constitute the -Universal Moral Grammar‖ (Hauser, 2006), understood as an innate device of morality acquisition (Mikhail, 2007).In other words, the human mind is born equipped with a set of domain-specific rules, principles and concepts that can produce a wide range of mental representations.These implicit principles determine the deontological status of an infinite assortment of acts (and non-acts, see Mikhail, 2007).As a result, moral intuitions are structured on these psychological guidelines that constitute the moral faculty.
For instance, it is argued that, although there are domain-general mechanisms underlying the moral faculty, some cognitive mechanisms are moral-specific (Cushman, Young & Hauser, 2006).These authors believe that such mechanisms -translate‖ general principles into specific moral judgments, because each one of them is understood as -a single factor that, when varied in the context of a moral dilemma, consistently produces divergent moral judgments‖ (Cushman, Young & Hauser, 2006, p. 1082).
Therefore, they found support for the existence of three particular moral principles.Action principle causes that people judge harm caused by action as morally worse than harm caused by omission.Intention principle causes that people judge intended harm as morally worse than foreseen harm.Lastly, contact principle causes that people judge harm involving physical contact as morally worse than harm caused without contact.
Research conducted by Knobe (2010) is an interesting counterpoint to this perspective.This author has found evidence suggesting that the -moral status‖ of an action (that is, whether it is judged as morally right or wrong) influences the perception of the intentionality of the action judged.For instance, Knobe and his team found that the same action was judged as intentional or unintentional depending on the wrongness or rightness of the action, respectively.
Likewise, a growing body of studies from the field of neuroscience suggests that there might be some unconscious principles underlying moral judgments.Consider the following scenario: A runaway trolley is going to kill five people if it continues its present course.The only way to avoid this tragedy is to hit a switch that will change the trolley course, of which the major problem is that, in its new side track, it will run over-and of course, kill-one person instead of the initial five.Is it morally acceptable to hit the switch?(Greene Sommerville, Nystrom, Darley, & Cohen, 2001, p.2105) Diverse studies on this topic show a large inclination to immediately consider the affirmative response morally ac-ceptable (Greene et al., 2001;Greene, Nystrom, Engell, Darley, & Cohen, 2004).Interestingly, responses were quite different when participants were asked to evaluate a similar recreation of the trolley dilemma.In this second case (the -footbridge dilemma‖), all the variables were controlled to be identical than in the trolley dilemma.Thus, in this second version, the only modification was that, in order to stop the train and save five people, participants have to push a -big‖ person instead of performing the action of -hitting the switch.‖Despite the obvious similarities, results show that people respond in an opposite way: they tend to immediately consider as -not permissible‖ to push one man off in order to save five (Greene et al., 2001).
What makes it morally acceptable to sacrifice one life in order to save five in the first case but not in the second one?For Greene and collaborators (2001), the main distinction between the two situations is that the simple thought of pushing someone to certain death with one's hands in an -close-up and personal‖ manner is likely to be more emotionally salient than the -impersonal‖ thought of hitting a switch, even if both responses have similar consequences.It is noteworthy that, despite that the explanatory validity of this distinction has been seriously questioned (Kahane et al., 2011;McGuire, Langdon, Coltheart & Mackenzie, 2009), it appears that there is something about the actions in the footbridge and the switch dilemma that elicits different behaviors.
Moral judgments understood as an automatic-affective evaluative process
The possibility that the evaluation of both types of dilemmas engage dissociable processing systems has been proposed as an explanation for this phenomenon.Neuroimaging studies have reported activity in several brain regions during the evaluation of moral events (Moll & Schulkin, 2009), which shows that the process of moral judgment involves several brain areas working integratedly.Some of these areas are associated with emotional processes, and others areas are related to rational processing, a fact that has favored the discussion about the function of rational and emotional processes in moral judgments.
For example, Greene (2009) proposes a dual-process theory of moral judgment, according to which automatic emotional responses drive characteristically deontological judgments, and controlled cognitive processes drive utilitarian judgments.Thus, Greene claims that moral cognition functions like a picture camera: there is an -automatic‖ (emotions-intuitions) and a -manual‖ (conscious reasoning) mode.Depending on the situation being judged, one setting could be more efficient than the other.However, as a general rule, the automatic mode is more efficient in everyday situations to which we are to some extent habituated.Conversely, in novel situations that require of more flexible responses, the manual mode is more efficient.These differentiated processes can enter into conflict in the moral situaanales de psicología, 2014, vol.30, nº 3 (octubre) tions where a rational evaluation clearly favors the -right‖ response, but the implication of such a choice elicits a negative emotional reaction (Greene et al., 2004).Supporting this claim, a neuropsychological study by Koenigs et al. (2007) found that ventromedial prefrontal patients made about five times more utilitarian judgments than control subjects.
The dual conception of moral cognition is amply shared among moral psychologists.Moreover, a recent body of research favors the characterization of a typical moral judgment as an automatic process.For example, Jonathan Haidt (2001) found an important battery of evidence supporting his central claim that most moral judgments are caused by moral intuitions.
Based on this conception, Haidt (2001) proposes the Social Intuitionist Model of moral judgment (SIM), which, essentially, captures the interaction between moral intuitions, moral judgments and moral reasoning.Therefore, in daily life, affect-laden intuitions drive moral judgments, whereas moral reasoning-when it occurs-follows these intuitions in an ex-post facto manner.From this perspective, moral judgment is much like aesthetic judgment: in the presence of a moral event, we experience an instant feeling of approval or disapproval (Haidt, 2001).Thus, moral reasoning also plays an important -social‖ role in moral cognition, being very common in conversation and moral decisions (Haidt & Bjorklund, 2007).In particular, moral arguments should be understood as attempts to trigger the right intuitions in others.As a consequence, moral discussions are understood as processes in which two or more people are engaged in a battle to push the rival´s emotional buttons.
The characterization of moral judgment as a response resulting from intuitive-affective processes has found support in two central claims.Firstly, the fact that people often have the feeling that something is wrong but find it extremely difficult to find reasons that justify their evaluation.Thus, Haidt (2001) identified the cognitive phenomenon of -Moral dumbfounding,‖ which consists of the fact that, in the absence of a truly comprehension of a given moral judgment, people tend to search for plausible explanations about why anyone in a similar situation would have proceeded in the same way.Therefore, it can be said that in those situations, people intuitively -know‖ whether something is right or wrong, but faced with the lack of a logical understanding of the response, they tend to rationalize a justification for their initial intuition.In other words, the reason why we are often unconscious of the cognitive processes that influence moral judgments is because the -moral mind‖ acts more like a lawyer trying to build a case rather than a judge searching for the truth (Haidt, 2001): People have quick and automatic moral intuitions and, when called upon to justify these intuitions, they generate post-hoc justifications out of a priori moral theories.They do not realize that they are doing this.(…).Rather, people are searching for plausible theories about why they might have done what they did.Moral arguments are therefore like shadow-boxing matches: each contestant lands heavy blows to the opponent's shad-ow, then wonders why he doesn't fall down (Haidt, 2001, p. 12-13).
The second claim that supports the characterization of moral judgments as automatic-affective evaluative processes is the sensitivity of moral judgments to affective influences.For instance, there is evidence suggesting that disgust exerts a special influence on moral judgments (Eskine, Kacinik, & Prinz, 2011;Eskine, Kacinik, & Webster, 2012;Schnall, Haidt, Clore, & Jordan, 2008;Olivera La Rosa & Rosselló, 2012, 2013).Also, it seems that the reverse of this patter also mediates moral cognition.Ritter and Preston (2011) found that disgust towards rejected religious beliefs was eliminated when participants were allowed to wash their hands.Moreover, there is evidence that both the cognitive concept and the sensation of cleanliness can make moral judgments less severe (Schnall, Benton, & Harvey, 2008) and reduced the upsetting consequences of immoral behavior (Zhong & Liljenquist, 2006).
Moral norms understood as psychologically constrained cultural constructions
The affective-intuitive approach to morality is largely sustained by the claim that moral beliefs and motivations are ultimately derived from moral emotions.These emotions are understood as evaluations (good or bad) of persons or actions, with the particularity that the object evaluated can be the self or another.Thus, Haidt (2003) proposes that moral emotions can be divided into other-condemning emotions (like contempt, anger or guilt), self-condemning emotions (shame, embarrassment and guilt), other-praising emotions (gratitude, admiration and elevation) and self-praising emotions (pride and self-satisfaction).These emotions are typically triggered by the perception of a moral violation and normally motivate actions directed at the reestablishment of the -broken‖ moral value (Nichols, 2008).
A distinctive feature of moral emotions is that their subjective experience is especially sensitive to cultural factors and social dynamics.Thus, the fact that some moral emotions are associated with some social situations across different cultures suggests that there may be some psychological foundations underlying the development of moral systems.For instance, Haidt and Joseph (2004) argue that we are born with a -first moral draft‖ that is constituted of (at least) five sets of affect-laden intuitions, of which one is easily triggered by the perception of (at least) five sets of moral situations.In other words, the human mind has evolved these sorts of -social receptors‖ or -moral buds‖ (Haidt & Joseph, 2004, p. 57) that are sensitive to the recognition of social patterns (such as actions, relationships or intentions) and can -trans-late‖ the perception of these patterns into emotional states.Further, it is argued that evolutionary pressures structured the human mind to intuitively develop concerns about five moral foundations (Haidt & Joseph, 2004).Therefore, harm/care is associated with the emotion of compassion and concerns for other-suffering, including virtues such as caring and compassion.Fairness/reciprocity involves concerns about unfair treatment, inequity, and abstract notions of justice.Moral violations within this domain are associated with the emotion of anger.In-group/loyalty is associated with emotions of group pride and rage against traitors and concerns derived from group membership.Authority/respect involves concerns related to social order and obligations derived from hierarchical relationships, concerns that are mediated by the emotion of fear.Lately, purity/sanctity involves concerns about physical and spiritual contagion, including virtues of chastity, wholesomeness, sanctity, control of desires and is regulated by the emotion of disgust.
Thus, Haidt and Bjorklund (2007) argue that the process of moral development should be understood as an externalization process: our mind has evolved five moral foundations that function as -learning modules,‖ which, when working together with cultural elements, facilitated the emergence of moral knowledge.
Moreover, an important aspect of this theory is that each moral foundation is understood as largely independent from an evolutionary perspective.That is, each set of psychological mechanisms (moral emotions and intuitions) can be explained as shaped by different selective social pressures.This hypothesis is derived from the fact that four of them (all but Purity-sanctity) appear to be built on psychological mechanisms that are present in non-human primates (Haidt & Joseph, 2004).
These findings call attention to the significant influence of emotional processes in moral life.For instance, it has been proposed that the moral dimension of rules is psychologically grounded on moral emotions (Nichols, 2008).Like Greene (2009) and Haidt and Joseph (2004), the author believes that we have evolved an innate psychological predisposition to feel negative affective responses when in the presence of an action that involves another's suffering.According to his approach, this aversive mechanism constitutes the -emotional support‖ for the emergence and transmission of moral norms.In other words, for the -cultural fit-ness‖ of a moral norm, there must be some emotional congruence between the content of the norm and its implications.
Therefore, affective mechanisms appear to constitute an important factor mediating the moral/conventional distinction.Rozin, Markwith and Stoess (1997) proposed the concept of moralization to explain the phenomenon in which objects or activities that were originally neutral acquire a moral status.For example, they found that participants who reported avoiding meat for moral reasons found meat more disgusting and offered more reasons in support of their position.In the same line, Rozin and Singh (1999) found that participants' disgust measures were highly correlated with their (negative) moral judgments against smokers, suggesting that disgust toward smoking is correlated with strong beliefs that smoking is immoral.
Conclusion
Summarizing, the approaches reviewed above suggest that emotional processes play a motivational role at the normative level of morality.Such a claim implies that there are no rigid parameters constraining moral norms, only innate predispositions that can potentially shape the content of those norms.As Sripada (2007) points out, although there are -high-level themes" in the content of moral norms that are nearly ubiquitous among moral systems-such as harm, incest, helping, sharing, social justice, and group defense-, the specific rules that operate within each theme are culturally idiosyncratic and highly variable.
Therefore, the innateness of moral systems should be understood in terms of a set of social preparedness-like a -universal menu of moral categories‖ (Prinz, 2007, p. 381) -that constrains the construction and functioning of moral systems.In this context, the cuisine analogy created by Haidt and Bjorklund (2007) might be illustrative: although cuisines are unique cultural products, they are also built on an innate sensory system that includes five different taste receptors on the tongue.These biological structures constrain cuisines while at the same time allow them a wide range of creativity in the final products, also constraining our preferences.In short, it can be said that the human mind is endowed with -conceptual moral seeds‖ that are typically externalized through individual development if the right -weather‖ (the cultural inputs) does its part.
The present review has some limitations.Due to the broadness of the research's theme different approaches were not considered in the current discussion.For instance, morality has been a major theme in Western philosophy.Although the discussion of philosophical approaches to the moral domain certainly exceeds the scope of this review, it is important to mention that recent findings from neuroscientific and clinical studies have provided new insights into traditional philosophical debates.With regard to this issue, Damasio (2004) research strongly suggests that the human mind is essentially embodied (as Spinoza believed) which implies that body-states often precede higher-order mental processes and not the other way around (as Descartes claimed).
In addition, further studies on clinical populations that involve affective-related impairments and dysfunctions can provide key insights to the understanding of the influence of affective variables on the moral domain.In this line, further research is needed to address the specific role of emotional processes in moral judgments.Moreover, future studies should be designed to test whether the influence of incidental affects on moral judgments is indeed moral specific or it can be extended to other type of affective judgments (e.g., aesthetic judgments). | 6,006 | 2014-08-12T00:00:00.000 | [
"Philosophy",
"Psychology"
] |
Impeller: a path-based heterogeneous graph learning method for spatial transcriptomic data imputation
Abstract Motivation Recent advances in spatial transcriptomics allow spatially resolved gene expression measurements with cellular or even sub-cellular resolution, directly characterizing the complex spatiotemporal gene expression landscape and cell-to-cell interactions in their native microenvironments. Due to technology limitations, most spatial transcriptomic technologies still yield incomplete expression measurements with excessive missing values. Therefore, gene imputation is critical to filling in missing data, enhancing resolution, and improving overall interpretability. However, existing methods either require additional matched single-cell RNA-seq data, which is rarely available, or ignore spatial proximity or expression similarity information. Results To address these issues, we introduce Impeller, a path-based heterogeneous graph learning method for spatial transcriptomic data imputation. Impeller has two unique characteristics distinct from existing approaches. First, it builds a heterogeneous graph with two types of edges representing spatial proximity and expression similarity. Therefore, Impeller can simultaneously model smooth gene expression changes across spatial dimensions and capture similar gene expression signatures of faraway cells from the same type. Moreover, Impeller incorporates both short- and long-range cell-to-cell interactions (e.g. via paracrine and endocrine) by stacking multiple GNN layers. We use a learnable path operator in Impeller to avoid the over-smoothing issue of the traditional Laplacian matrices. Extensive experiments on diverse datasets from three popular platforms and two species demonstrate the superiority of Impeller over various state-of-the-art imputation methods. Availability and implementation The code and preprocessed data used in this study are available at https://github.com/aicb-ZhangLabs/Impeller and https://zenodo.org/records/11212604.
Introduction
The orchestration of cellular life hinges on the precise control of when and where genes are activated or silenced.Characterizing such spatiotemporal gene expression patterns is crucial for a better understanding of life, from development to disease to adaptation (Mantri et al. 2021).While singlecell RNA sequencing (scRNA-seq) is a revolutionary and widely available technology that enables simultaneous gene expression profiling over thousands of cells, it usually needs to dissociate cells from their native tissue and thus loses the spatial context (L€ ahnemann et al. 2020).Recent advances in spatial transcriptomics (Ståhl et al. 2016) allow spatially resolved gene expression measurements at a single-cell or even sub-cellular resolution, providing unprecedented opportunities to characterize the complex landscape of spatiotemporal gene expression and understand the intricate interplay between cells in their native microenvironments (Strell et al. 2019).However, due to technical and biological limitations, most spatial transcriptomic profiling technologies still yield incomplete datasets with excessive missing gene expression values, hindering our biological interpretation of such valuable datasets (Choe et al. 2023).Therefore, gene imputation is a critical task to enrich spatial transcriptomics by filling in missing data, enhancing resolution, and improving the overall quality and interpretability of the datasets.
Several methods have been successfully developed for gene imputation in spatial transcriptomics, which can be broadly summarized into two categories-reference-based and reference-free approaches.Since scRNA-seq data usually offer a deeper dive into transcriptome profiling, referencebased methods integrated spatial transcriptomic data with matched scRNA-seq data from the same sample for accurate imputation.While promising, these referenced-based methods usually suffer from two limitations.First, most studies do not always have matched scRNA-seq data, especially those using valuable and rare samples.Second, even with matched data, there can be significant gene expression distribution shifts due to sequencing protocol differences (e.g.single nuclei RNA-seq versus whole cell spatial transcriptomics) (Zeng et al. 2022).
Researchers also used reference-free methods for direct gene expression imputation.For instance, traditional gene imputation methods designed for scRNA-seq data, such as scVI (Lopez et al. 2018), ALRA (Linderman et al. 2018), Magic (van Dijk et al. 2018), and scGNN (Wang et al. 2021), have been adapted for spatial transcriptomic data imputation.While effectively capturing cell-type-specific gene expression signatures, these methods completely ignored the rich spatial information, resulting in suboptimal results.Later, scientists emphasized the importance of spatial context for cell-to-cell interaction (CCI) in modulating expression changes in response to external stimuli (Armingol et al. 2021).Therefore, Graph Neural Network (GNN) based methods have been developed to mimic CCIs for imputation tasks with improved performance.However, different types of CCI involve distinct cell signaling mechanisms with varying interaction ranges.Existing GNN-based methods used very shallow convolutional layers for computational convenience, successfully modeling short-range CCI (e.g. via autocrine and juxtacrine) but ignoring long-range interactions (e. g. via paracrine and endocrine).As a result, they cannot fully exploit the spatial information for gene expression imputation.
To address the abovementioned issues, we propose Impeller, a path-based heterogeneous graph learning method for accurate spatial transcriptomic data imputation.Impeller contains two unique components to exploit both transcriptomic and spatial information.First, it builds a heterogeneous graph with nodes representing cells and two types of edges describing expression similarity and spatial proximity.Therefore, the expression-based edges allow it to capture celltype-specific expression signatures of faraway cells from the same type, and the proximity-based edges incorporate CCI effects in the spatial context.Second, Impeller models longrange CCI by stacking multiple GNN layers and uses a learnable path operator instead of the traditional Laplacian matrices to avoid the over-smoothing problem.Extensive experiments on diverse datasets from three popular platforms and two species demonstrate the superiority of Impeller over various state-of-the-art imputation methods.
Our main contributions are summarized below: � We propose a graph neural network, Impeller, for reference-free spatial transcriptomic data imputation.Impeller incorporates cell-type-specific expression signatures and CCI via a heterogeneous graph with edges representing transcriptomic similarity and spatial proximity.� Impeller stacks multiple GNN layers to include both short-and long-range cell-to-cell interactions in the spatial context.Moreover, it uses a learnable path-based operator to avoid over-smoothing.� To the best of our knowledge, this is the first paper to combine cell-type-specific expression signatures with spatial short-and long-range CCI for gene expression imputation.� We extensively evaluate Impeller alongside state-of-theart competitive methods on datasets from three sequencing platforms and two species.The results demonstrate that Impeller outperforms all of the baselines.
Imputation methods ignoring spatial information
Earlier spatial transcriptomic data imputation methods adapted the computational strategies originally developed for scRNA-seq data, overlooking the spatial coordinate information of each spot.For instance, eKNN (expressionbased K nearest neighbor), and eSNN (expression-based Shared nearest neighbor) are methods implemented using the Seurat R-package that rely on gene expressions of nearest neighbors.MAGIC adopted data diffusion across similar cells to impute missing transcriptomic data.ALARA used low-rank approximation to distinguish genuine nonexpression from technical dropouts, thus preserving true gene absence in samples.scVI used a deep variational autoencoder for gene imputation by assuming the read counts per gene follow a zero-inflated negative binomial distribution.However, these methods completely ignored the rich spatial information, resulting in sub-optimal performance.
Imputation methods utilizing spatial information
Later on, several methods were developed to exploit the spatial coordinate information to improve imputation accuracy.Since scRNA-seq data are usually sequenced deeper to provide more accurate expression measurements, several methods incorporated additional scRNA-seq data during the imputation process.For instance, gimVI used a low-rank approximation and included scRNA reference (Lopez et al. 2019).Tangram mapped scRNA-seq data onto spatial transcriptomics data to facilitate imputation by fitting expression values on the shared genes (Biancalani et al. 2021).STLearn used gene expression data, spatial distance, and tissue morphology data for imputing absent gene reads (Pham et al. 2020).However, additional scRNA-seq data are not always available and there can be large gene expression distribution shifts between these datasets due to differences in sequencing protocols (e.g.single-cell versus single-nuclei), resulting in limited applications for reference-based methods.
On the other hand, several reference-free methods have been developed for more generalized settings.For example, the seKNN (spatial-expression-based K nearest neighbor) and seSNN (spatial-expression-based shared nearest neighbor) models (Satija et al. 2015, Butler et al. 2018, Stuart et al. 2019, Hao et al. 2021) incorporated cell-to-cell distance when defining the KNN for imputation tasks.Recently, STAGATE (Dong and Zhang 2022) is a graph attention auto-encoder framework that effectively imputes genes by integrating spatial data and cell type labels.Overall, these methods did not deeply integrate and exploit the full potential of combining expression and spatial data.
Problem definition
Here, we aim to impute the excessive missing gene expression values in spatial transcriptomics data without matched reference scRNA-seq data.Formally, given a sparse cell-by-gene count matrix X obs 2 R n × m which represents observations for n cells across m genes, and the spatial coordinates C 2 R n ×2 of these cells, our goal is to impute the gene expression matrix X 2R n × m .X obs is derived from the ground truth matrix X gt 2 R n × m , which contains the observed nonzero entries pre-masking.To simulate real-world data conditions, 10% of the nonzero entries in X gt are masked to form a test set and another 10% for validation, thus creating X obs .This matrix serves as the input for our imputation model.The major challenge is to generate X that is as close as possible to the ground truth gene expression X gt , using both the observed gene expressions in X obs and the spatial information in C.
Heterogeneous graph construction
As shown in Fig. 1, we build our Impeller model based on two widely accepted biological insights-(i) gene expression can be modulated by surrounding cells via CCI; (ii) faraway cells of the same cell type may share stable gene expression signatures.Therefore, Impeller first builds a heterogeneous graph G to fully exploit both spatial and cell-type information, with nodes and edges representing cells and their relationships.
Specifically, G contains two complementary graphs: a spatial graph (G s ) and a gene similarity graph (G g ).Edges in G s represent the cell's spatial proximity to model CCI, while edges in G g denote the cell's transcriptomic similarity to capture the cell-type-specific expression signatures.
Spatial graph construction
The spatial graph G s ðV s ; E s Þ is created based on the spatial distance between cells, with nodes V s representing the cells and edges in E s connecting nearby cells.Specifically, an edge e s;fijg in G s is established between v i ; v j 2 V s if and only if their Euclidean distance d i;j is less than a predefined threshold d thr , which can be represented as: where C i ¼ ½C i;0 ; C i;1 � and C j ¼ ½C j;0 ; C j;1 � are 2D spatial coordinates of cell i and j, respectively.
Gene similarity graph construction
Impeller also builds a gene expression similarity graph G g similar to that in scRNA-seq analysis.Specifically, we first extract the highly variable genes (default 3100).Then, for each target cell, we select its top K most similar cells.Mathematically, where X h i is the expression vector of highly variable genes in cell i, K g ðX h i Þ returns the top k g cells most similar to cell i (e.g. using the Euclidean distance as the similarity metric), and e g;fijg is the edge between cells i and j in G g .
GNN model on heterogeneous graph
With the heterogeneous graph built, Impeller uses a pathbased heterogeneous GNN to synthesize the impacts of spatial CCI (G s ) and cell-type-specific expression signatures (G g ) for the imputation task.We introduce the problem of traditional GNN, our learnable path operator, and the overall architecture of Impeller as follows.
Problem of traditional GNN
We aim to impute the missing gene expression values in spatial transcriptomics data by incorporating its physical and transcriptional neighbors via a heterogeneous graph.By treating expression profiles as initial cell embeddings (f ð0Þ ¼ X obs ), the lth (l 2 f1; 2; . . .; L − 1g) GNN layer follows a message passing form (Duan et al. 2022a ,b,c , Wang et al. 2019, 2022, Xu et al. 2020, 2021, Duan et al. 2023, 2024) to generate cell i's embedding in layer l as follows: ; e g;fij gÞÞ; (3) where is the embedding dimension at lth layer, and N s ðiÞ and N g ðiÞ are neighboring cell i in G s and G g .� denotes a differentiable, permutation invariant function, e.g.sum, mean, and γ Θ ; ϕ Θ , and ψ Θ denote differentiable functions such as MLPs.
After L layers, we obtain the imputed gene expressions, denoted as X ¼f ðLÞ 2 R n × m .In order to capture long-range CCI interaction, we have to include relatively far away cells by stacking multiple GNN layers via a larger L. Traditional Laplacian matrices-based GNN suffers from over-smoothing, resulting in deteriorated performance as L increases (Eliasof et al. 2022).Therefore, we introduce a learnable path operator to overcome this issue and better capture the long-range CCI.Impeller: a graph method for spatial transcriptomic data imputation (5) where P s and P g are sets of paths sampled from the G s and G g , each containing T s and T g paths.Each path P s 2 P s and P g 2 P g are separately convolved using op ðlÞ s or op ðlÞ g , and the results are averaged to acquire the node embeddings.
The overall architecture of impeller
After convolving both spatial and gene similarity paths, we concatenate their embeddings to form the overall node embeddings, as in where σð�Þ denotes the ReLU activation function, W ðlÞ 2 ðlÞ emb is the learnable weight matrix, d ðlÞ emb is the embedding dimension at lth layer, and ½�; �� denotes concatenation operation.Then, Impeller tries to minimize the Mean Squared Error (MSE) between X and X gt : where 1½�� is an indicator function that equals 1 if the condition inside brackets is met (X gt;ði;jÞ 6 ¼ 0), and 0 otherwise.The loss is computed only over nonzero entries of X gt .
k-hop complexity analysis
Traditional GNNs need to gather information from k-hop neighbor nodes after stacking of k layers.
emb , Impeller's number of parameters remains on par with traditional GNNs.
Data sources and preprocessing
In our study, we tested Impeller using diverse datasets from three popular sequencing platforms and two organisms.Specifically, we included 10X Visium datasets from the human dorsolateral prefrontal cortex (DLPFC), (Maynard et al. 2021), Steroseq datasets from mouse olfactory bulb (Chen et al. 2021), and Slide-seqV2 from mouse olfactory bulb (Stickels et al. 2021) in our analyses.Detailed attributes of these datasets are summarized in Table 1 (for filter details and visualizations, see the Supplementary Appendix).After standard pre-processing and normalization procedures, we downsampled the data according to scGNN, where 10% of nonzero entries in the dataset were used as a test set, and another 10% of nonzero entries were reserved for validation.For a fair comparison, we repeat ten times with different mask configurations.
Baseline methods for benchmarking
We conducted a comparative study utilizing 12 state-of-theart methods, including reference-free and reference-based methods that originally required additional scRNA-seq data.However, in our analysis, we did not use any additional scRNA-seq data for a fair comparison.
First, we included methods directly adapted from scRNAseq data imputation and completely ignored the rich spatial information, including a deep generative model scVI, a lowrank approximation model ALRA, nearest neighbors-based models eKNN and eSNN, a diffusion-based model MAGIC, and a GNN-based model scGNN.Furthermore, we used several imputation methods specifically designed for spatial transcriptomic data, such as seKNN (spatial-expressionbased K nearest neighbor), and seSNN (spatial-expressionbased shared nearest neighbor).gimVI and Tangram need additional scRNA-seq from matched samples, so we used a reference-free implementation available through their website for a fair comparison.Lastly, we included STAGATE a graph attention auto-encoder framework by amalgamating spatial data and gene expression profiles.We use default parameters in most baseline methods (for details, see the Supplementary Appendix).
Evaluation metrics
We first define a test mask M 2 R n × m where the entries to be imputed are marked as 1 and the others as 0. Then we extract the relevant entries from both the imputed matrix X and the ground truth matrix X gt to form two vectors: x (from X) and x gt (from X gt ), each of length N, where N is the total number of entries to be imputed.Following scGNN settings, we use L1 Distance, Cosine Similarity, and Root-Mean-Square Error (RMSE) to compare imputed gene expressions x with the ground truth x gt .Mathematically: ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi P
Improved imputation accuracy
We benchmarked our performance against 12 leading methods by assessing imputation accuracy across 14 datasets.These datasets span three prominent sequencing platforms (10x Visium, Stereoseq, and Slideseq) and two species (human and mouse).Table 2 summarizes the performance of Impellers and other baselines (for results of the other six samples of the DLPFC dataset, please see the Supplementary Appendix).For a fair comparison, we did not include any additional scRNA-seq data to facilitate the imputation task.Overall, Impeller consistently outperforms others in all datasets using L1 distance, Cosine Similarity, and RMSE, indicating the effectiveness and robustness of our strategy.
In addition, we found that most methods utilizing spatial information (w � group in Table 2) demonstrated higher imputation accuracy than those ignoring spatial information (wo � group in Table 2), validating the presence of rich information in the spatial context.Notably, Impeller surpasses even the best gene expression-only method, eKNN, with improvements of 11.32% on 10xVisium DLPFC, 31.09% on Stereoseq, and 6.01% on SlideseqV2 Mouse.Furthermore, compared to uniform averaging using KNN, GNN allows for more flexible neighbor information aggregation for better imputation accuracy, as reflected by the noticeably improved performance of Impeller and STAGATE.
In Table 3, GAT and GraphSAGE suffer from gradient vanishing/exploding issues as more layers are added to capture long-range CCI, resulting in quickly degraded performance.GCN works best initially, but its performance drops with more layers added.This could be because the number of neighbors grows fast as we increase the receptive field, leaving it difficult for the target cell to understand the influence of each neighbor.Furthermore, GraphTransformer starts with high errors at a receptive field of 2. It works best at a receptive field of 8, but the error goes up again at 32.This increase in error is similar to the problem of GCN, as all cells start to look too similar to make useful representations.On the other hand, Impeller effectively tackles these challenges by the path operator, as reflected by the consistently improved results until the receptive field of 32.As the receptive field continues to grow, Impeller's performance slightly declines, likely because distant information becomes less relevant for the target cell's gene imputation.An additional perturbation study, demonstrating the effectiveness of Impeller in capturing CCI, is shown in the Supplementary Appendix.
Advantage of heterogeneous graph
In our study, we explored the influence of graph modalities on imputation accuracy by assessing three key variants: var s , using solely the spatial graph; var g , utilizing only the gene similarity graph; var h , integrating both graphs.We then calculated the performance improvement from adding G g by comparing var h with var s , and the improvement from adding G s by comparing var h with var g .As shown in Fig. 2, the majority of the cases (22 out of 24) exhibit positive improvements.Specifically, in the DLPFC sample 151674, the inclusion of the gene similarity graph yields a 17.3% improvement, and the 13.6% enhancement is achieved by adding the spatial graph alone.Similarly, in sample 151508, the gene similarity graph and the spatial graph contribute to improvements of 3.6% and 9.9%, respectively.These results underscore the efficacy of our approach, particularly in scenarios where the complex interaction between spatial and gene expression data is pivotal for enhancing gene imputation accuracy.
Ablation study
We conducted an ablation study to evaluate the performance of four primary path operator variants of Impeller: op glo , where all Impeller layers and channels (each channel representing 1D of f ðlÞ ) share one path operator; op cha , where channels share an operator but layers have distinct ones; op lay , where all layers share one, but channels have individual operators; and op ind where every layer and channel possesses an independent path operator.As depicted in Fig. 3, both op glo and op cha performed poorly on the DLPFC dataset, indicating the importance of distinct operators for each channel.Notably, op lay and op ind showed comparable results, suggesting that layer-specific operators might be optional, depending on the specific application.Another ablation study regarding different path construction and graph construction methods is shown in the Supplementary Appendix.
Parameter analysis
To investigate the influence of Impeller's various hyperparameters, we conducted extensive experiments using the DLPFC dataset (Sample ID: 151507) and reported the mean and standard deviation of the imputation accuracy over ten repetitions.First, we studied the impact of q s and q g on the RMSE of a random walk on G s and G g following the Node2Vec mechanism.Higher values of q (i.e.q s and q g ) encourage the walk to sample more distant nodes, enhancing the exploration of the global graph structure, while lower values bias the walk toward neighboring nodes, facilitating local exploration.As shown in Fig. 4A, Impeller exhibits strong robustness with RMSE from 0.33 to 0.36 when q s and q g varied from 0.1 to 5.However, higher values of q s and q g tend to induce larger errors.For generality, we selected 1 as the default value for q s and q g .We investigated the impact of random walk length (k s and k g ) and layer number (L), shown in Fig. 4B.A path length of 2 with 10 layers results in maximum errors, reducing our model to a standard ten-layer GCN.This is because, at this path length, the model focuses on immediate neighbors, akin to how traditional GCNs operate.Such a setup, while deep, limits neighborhood exploration and increases over-smoothing risk.Conversely, a path length of 8 with 4 layers allows for capturing broader interactions (up to 28 hops), balancing extended reach and computational efficiency, thus avoiding oversmoothing and optimizing long-range CCI capture.
Neighbor visualization
To better understand the differences between traditional GNNs and our path-based GNN, Impeller, we turned to a visual example (sample 151507 from the DLPFC dataset).
Figure 1B shows how the typical GNN gathers information from far-away neighbors.not needed.On the other hand, Fig. 1C shows our Impeller model.Instead of stacking GNN layers, Impeller samples a direct path from the center node to the target node.While using this direct path method, Impeller offers better gene imputation performance by capturing the relevant long-range CCI.
Running time analysis
As shown in Table 4, we conducted a comparative model parameter and runtime analysis with popular graph-based models (GCN, GAT, GraphSAGE, and Transformer) on the DLPFC dataset.As discussed in Section 3.4.2,our model maintains a parameter count comparable to traditional GNNs, with the complexity per layer defined as Oðd emb Þ.Specifically, our model introduces only a 3.5% increase in parameters for GCN and a 2.7% increase for GAT.In contrast, it achieves a 48.0%reduction in parameters for GraphSAGE and a 74.1% reduction for GraphTransformer (Table 4).Despite its additional path sampling step, Impeller remarkably outperformed the others in training and inference efficiency.This can be partially credited to leveraging the DGL library's optimized implementation for path sampling (https://docs.dgl.ai/en/0.8.x/api/ python/dgl.sampling.html)and the inherently faster multiplication process used in path-based convolution compared to edge-wise information aggregation in traditional GNNs.In addition, Impeller showed the lowest RMSE, indicating superior prediction accuracy.Hence, Impeller offers a balanced blend of efficiency and precision for spatial transcriptomic data imputation, outperforming other graph-based models.
Conclusion
In this study, we introduced Impeller, a path-based heterogeneous graph learning approach tailored for spatial transcriptomic data imputation.By constructing a heterogeneous graph capturing both spatial proximity and gene expression similarity, Impeller offers a refined representation of cellular landscapes.Further, its integration of multiple GNN layers, coupled with a learnable path operator, ensures comprehensive modeling of both short and long-range cellular interactions while effectively averting over-smoothing issues.Benchmark tests across diverse datasets spanning various platforms and species underscore Impeller's superior performance compared to state-of-the-art imputation methods.This work not only establishes Impeller's prowess in spatial transcriptomic imputation but also underscores its potential to model both short-and long-range cell-cell interactions.Training and inference times with the fastest performance, as well as the best imputation performance (RMSE), are highlighted in bold.
Impeller: a graph method for spatial transcriptomic data imputation
Figure 1 .
Figure 1.The overview of Impeller.(A) Given the observed matrix X obs 2 R n × m of n cells and m genes, and the cells' spatial coordinates C 2 R n × 2 , we build the spatial graph G s and the gene similarity graph G g .The learned spatial and gene similarity path operators op s and op g are obtained through path s and path g , respectively.Convoluting cell features with path operators yields spatial/gene similarity embeddings, which are concatenated and fed into a multilayer perceptron for final gene imputation.(B) and (C) Comparison of neighbor aggregation methods in GNNs.(B) Traditional GNN stacks multiple layers to gather information from distant nodes.(C) The path-based GNN, Impeller samples a path to the target node.
s and T g appeared to have a minimal effect on results, due to the robustness of Impeller which resamples at each epoch during training.We chose 8 as the default number of random walks.Lastly, we evaluated how the embedding dimension d ðlÞ emb affects Impeller's performance.As shown in Fig. 4D, smaller d ðlÞ emb (such as 2, 4, 8) leads to limited expressive power and larger imputation errors.As d ðlÞ emb increases to 16, 32, 64, or 128, Impeller's expressive power improves and operations converge well in each run.Due to our early stopping criterion, we cease training if the validation RMSE does not improve for 50 consecutive epochs.When d ðlÞ emb was set to 256 or 512, it's hard for Impeller to converge quickly at these dimensions.To strike a balance between complexity and representational power, we opted for d ðlÞ emb of 64.
Figure 2 .
Figure 2. RMSE improvement by adding different graph modalities.
3.3.2 Learnable path operator
We first define path P s ¼ ðs 1 ; s 2 ; . . .; s ks Þ on G s of length k s and path P g ¼ ðg 1 ; g 2 ; . . .; g kg Þ on G g of length k g , where s i and g i are node (cell) indexes.Node embeddings at lth layer are denoted by f ðlÞ s i 2 R d ðlÞ emb and f ðlÞ g i 2 R d ðlÞ emb .Then, op Starting from each node, we generate multiple paths on G s and G g and aggregate results for a more expressive representation: ðlÞ g;i ½j�; f ðlÞ si ½j� and f ðlÞ gi ½j� represent the jth scalars of the d ðlÞ emb -dimensional vector op ðlÞ s;i ; op ðlÞ g;i ; f ðlÞ s i and f ðlÞ g i , respectively.
is the average node degree, the overall complexity becomes Oðn × d t × kÞ.In contrast, Impeller can directly access neighbors up to k-hop distance via a single layer by setting k s ¼ k g ¼ k.The computational complexity per layer for Impeller is Oðn × ðT s × k s þ T g × k g ÞÞ, with T s and T g representing the number of paths in G s and G g , k s and k g denoting path lengths.As a result, when T s < d t (a condition satisfied in our task), Impeller offers superior computational efficiency.
Given the complexity of each layer as Oðn × d t Þ, where n is the number of nodes and d t
Table 1 .
Summary of datasets.
Table 2 .
Gene imputation benchmark.aThebest results are bolded.Results marked "NA" for stLearn indicate unavailable HE stained images required by the method. a
Table 3 .
Performance of different receptive fields (RMSE).The best imputation performance is highlighted in bold.
Parameter analysis.(A) The mean RMSE w.r.t.different q s and q g for generating random walks in G s and G g .(B) The mean RMSE w.r.t.different path lengths k s and k g , and the number of Impeller layers L. (C) The L1 distance, cosine similarity, and RMSE w.r.t.different number of paths T s and T g .(D) The L1 distance, cosine similarity and RMSE w.r.t.different embedding dimensions d The center node (red sphere) stacks five GNN layers to gather information from distant nodes like the one shown in yellow.But this method sometimes pulls in extra information from different tissue layers that is ðlÞ emb .
Table 4 .
Running time summary of graph-based models. | 6,616.4 | 2024-05-28T00:00:00.000 | [
"Computer Science",
"Biology"
] |
The Characteristics Of Auditee and Audit Report Lag
The purposes of this study was to examine the effect of auditee characteristics as reflected by ROE, company size, DER and PAF’s (Public Accounting Firm) reputation on Audit Report Lag. Companies engaged in Real Estate and Property listed on the Indonesia Stock Exchange for the period 2011-2014 are used in this study. This sector was chosen because many investors are interested in investing in the real estate and property sector. The purposive sampling method was chosen as the basis for determining the sample so that a sample of 39 companies was obtained. The type of data in this study is quantitative data derived from secondary data in the form of financial reports and auditor reports which can be obtained from the page www.idx.co.id or the Indonesia Capital Market Directory (ICMD). SPSS 21 software was used to analyze research data. The results showed that ROE and company size had a significant negative effect on Audit Report Lag. DER has no effect on Audit Report Lag, while PAF's reputation has a positive effect on Audit Report Lag Keyword: ROE, company size, DER, PAF’s reputation, audit Report Lag
INTRODUCTION
Financial reports are prepared in order to provide information about the company's financial position, financial performance and cash flow to be used by most report users in making decisions. Users of financial reports can be investors and creditors. Investors base their investment decisions on the company's financial statements. Meanwhile, creditors view the company's solvency level as the company's ability to settle debt.
The financial statements as a source of information for users are expected to reflect the actual condition of the company. According to PSAK No. 1 of 2009, a quality financial report is a report that can meet qualitative requirements such as comprehensibility, relevance, reliability and comparability.
The rule Number X.K.2, Attachment to Decree of the Chairman of Bapepam and LK Number: KEP-346 / BL / 2011 dated July 5, 2011 which was updated as a circular of the Financial Services Authority (OJK) number 6 / SE OJK.04 / 2014 which regulates the Submission of Financial Statements Periodically Issuers or Public Companies, namely Annual Financial Statements and Semi-Annual Financial Statements. This OJK regulation requires public companies listed on the Indonesia Stock Exchange to submit an audited financial report along with an auditor's opinion report prepared by an independent auditor no later than the third month after the closing of the annual financial year. A financial statement that must be audited in advance is by an independent auditor. Delay in submitting financial reports can have a negative impact on market reactions. The slower the time for the submission of financial statements, it will cause market doubts to increase or decrease the level of relevance of financial reports (Lestari 2010).
The financial statements that are submitted on time can be seen from the date of closing of the financial statements until the submission of the audited financial statements provided by the independent auditor's report. This time period can be referred to as the Audit Report Lag. The longer the Audit Report Lag will negatively affect the market because it will also reflect the state of the financial statements. The longer the Audit Report Lag, the market will give a negative signal to investors and creditors. The maximum number of days allowed by the OJK is not more than 90 days, because if it exceeds that time the company will be subject to sanctions which will cause the company to suffer losses.
With the discovery of issuers who are not disciplined in submitting audited financial reports. There are at least 52 issuers that did not submit financial reports as of December 31, 2014. This is based on news reports quoted from http://www.neraca.co.id. PT. Bumi Resources Tbk is an example of an emitter engaged in mining that has not submitted its financial statements. One of the contributing factors is the issuer is still waiting for a third party debt confirmation answer.
There are several characteristics and factors that influence auditor lag. One of them is ROE (Return on Common Equity). ROE is an indicator of a company's ability to generate return on capital investment by shareholders. The higher ROE value is certainly one of the good news that companies can provide for investors and potential investors. This statement is in accordance with the results of an empirical study from Carslaw and Kaplan (1991) in Lestari (2010) which states that companies that experience losses will not immediately report audit results, while companies that are experiencing profits tend to report their financial statements as soon as possible (Pradana). 2014).
The second factor is company size. The bigger the company, the faster the company can submit financial reports. Large companies have good internal controls and tend to come under pressure from external parties to immediately submit audited reports (Ariyani and Budiartha 2014), and (Sa'adah 2013). The results of their study on the other hand are not in accordance with the results of research (Nugraha and Masodah 2012), and (Tiono and Jogi 2013) who concluded that company size has no effect on auditor lag.
The third factor is DER (Debt to Equity Ratio). Companies with a high Debt to Equity Ratio will require clearer disclosure, causing the time used to complete the audit to be longer or the Auditor lag will be longer. The same is the case with the research results (Putra and Putra 2016) which show that DER has a positive effect on auditor lag. Meanwhile, the research results (Destiana 2013) show that DER has no effect on the Audit Report Lag. This may be due to indications that the company can fulfill its obligations by way of debt restructuring The fourth factor is the PAF's reputation. Research by Parwati and Suhardjo (2009), and Iskandar and Trisnawati (2010) stated that the PAF's reputation as measured by PAF affiliated with the Big Four or PAF that is not affiliated with the Big Four has an effect on Audit Report Lag. This can be interpreted that the competence between the two types of PAF whether affiliated with the Big Four or not, is no different.
This study examines ROE, firm size, DER, and PAF's reputation on audit report lag because there are still inconsistencies in results from previous studies. Meanwhile, the samples chosen were companies engaged in the real estate and property sector which were listed on the IDX between 2011 and 2014. This sector was chosen to be studied because in previous studies many had examined the manufacturing sector. Besides that, there are also many investors who are interested in investing in the real estate and property sector which motivates to study this sector. This indicates that this sector is experiencing resilience in the face of a sluggish economy.
Based on the research background, the proposed research problem is to investigate whether ROE, DER, company size and PAF reputation affect Audit report Lag. The first part is background research, the second part presents a literature review and hypothesis development. The research methodology is presented in the third section and followed by the results in the fourth section. In the fifth section are Conclusions and suggestions.
THEORETICAL FRAMEWORK AND HYPOTHESIS
Financial reports are a useful tool for reducing information asymmetry and providing signals between management and company owners. Signaling Theory is developed in economics and financial management to show that there is a mechanism that insiders have superior information that is better and faster than outside investors (Akerlof 1970). This theory is the basis of this research to solve the problem of information asymmetry because the signal reflects the actions taken by management that management is obliged to convey information about the state of the company to the owner in addition to information which is an obligation that must be conveyed to outside investors, information that can be a signal of success or failures related to company operations.
Financial reports are a useful tool to reduce information asymmetry and provide information related to the financial position, performance, and changes in the financial position of a company that is useful for a large number of users in making economic decisions (IAI 2007). Users of financial reports include investors, employees, lenders, suppliers and creditors, customers, governments, and the public.
Based on the Financial Accounting Standards (IAI 2012, 6), financial reports have four qualitative characteristics, namely comprehensible, relevant, reliable, and comparable. In fulfilling the qualitative characteristics of financial statements, there are constraints, namely when it is full of relevant and reliable characteristics. These constraints include punctuality, a balance between costs and benefits, and a balance between qualitative characteristics. The timeliness of submitting financial reports to its users is one of the important things because if a company delays the submission of financial reports, the relevance of financial reports that must be fulfilled will decrease or even disappear so that it will greatly affect the decision making of users of financial statements.
The preparation of financial statements uses many assumptions, methods and bases that have been set as a standard by the accounting profession. The choice of assumptions, methods and bases is called accounting policies that provide flexibility for management to use one of the methods based on interest or purpose. Information in financial reports like this can be known if audited by a public accountant to determine the fairness level of the financial statements An independent auditor is a profession whose role is to perform externally on the side of the owner or shareholder (Fan and Wong 2005;Ausbaugh and Warfield 2003). Audit functions to reduce agency costs by providing assurance on the quality of accounting information, so as to increase the accuracy and efficiency of the contractual relationship between owners and management based on financial statements. An audit assignment is a systematic and competent activity to collect and evaluate evidence regarding economic activities and transactions which will then be adjusted according to predetermined conditions for publication to interested parties.
Several reasons for the need for audit services by independent auditors include (Halim 2008, 60): 1. Differences in interest: The difference in interests between company management that prepares financial statements and users of financial statements is a factor in the need for auditor services to improve information quality. 2. Consequences: independent auditors as third parties who assess the fairness of the financial statements. 3. Complexity: The development of the increasingly advanced business world makes financial reporting more complicated. Treatment of transactions often requires assistance from independent auditors. 4. Limited access: The complexity of developments in the business world will affect the process of preparing financial statements to become more complex. This condition causes the need for an independent auditor to improve the quality of information.
Financial services authority (OJK) circular number 6 / SE OJK.04 / 2014 which updates BAPEPAM Regulation No. X.K.2, Attachment to Decision of the Chairman of Bapepam Number: Kep / 346 / BL / 2011 which regulates the Submission of Periodic Financial Reports of Issuers or Public Companies. This OJK regulation requires public companies listed on the Indonesia Stock Exchange to submit an audited financial report along with an auditor's opinion report prepared by an independent auditor no later than the third month after the closing of the annual financial year so that the timeliness of submitting financial statements is very important. to be paid attention to in order to avoid any sanctions for delays in delivering information that will be claimed by users in making decisions. This delay in the submission of financial reports not only has an impact in the form of sanctions, but has the effect of reducing the qualitative characteristics of the financial statements, one of which is said to be relevant. The financial statements will be deemed to meet the relevant requirements if the financial statements are able to influence users in making decisions.
On the other hand, timeliness in the submission of financial statements will increase relevance. Meanwhile, the delay is called auditor lag, which is the time span between the balance sheet date and the date when the audit report is completed by an independent auditor (Tuanakotta 2011, 215). Whereas Knechel and Payne (2001) in Ahmad, et al (2005: 942) define Audit Report Lag or commonly known as audit delay is the time span between the closing date of the company's financial year to the date listed in the audit report. So that from the two definitions regarding Audit Report Lag above, it can be concluded that the definition of audit is the period of completion of the audit on financial statements carried out by an independent auditor starting from the closing date of the company's books, namely 31 December to the date stated in the audit report of the independent auditor. The Influence of ROE on Audit Report Lag.
ROE or Return on Common Equity is a ratio that measures the extent to which a business unit generates profits based on the book value of ordinary shareholders (Horne and Wachowicz 2013,183). Companies with high ROE levels will as soon as possible submit their financial reports to the public because this is good news which will provide a high assessment in the eyes of the interested parties. Therefore, based on the description above that: H1: ROE has a negative effect on the Audit Report.
Effect of Company Size on Audit Report Lag (Y)
Company size reflects the scale of the company which can be measured by the size of its assets and the number of employees. Companies that have large assets will reflect a large involvement of money as well. The larger the company size, the higher the level of complexity of the company. Based on the description above it can be stated that: H2: Company size has a positive effect on Audit Report Lag.
Effect of DER on Audit Report Lag (Y)
DER is a comparison between the funds provided by creditors and company owners. The greater the DER, the higher the risk is because most of the company's operations are financed by debt. Therefore, investors will tend to avoid risks related to high DER. This large risk will cause the company to become the center of attention so that this will prolong the Auditor Report Lag. Based on the description above it can be stated that: H3: DER has a positive effect on Audit Report Lag.
PAF's Reputation on Audit Report Lag (Y)
Companies that use PAF services that are affiliated with PAF the Big Four will be faster to publish their financial reports than companies that use PAF services that are not affiliated with PAF the Big Four. This is due to the reputation that PAF has which are affiliated with PAF the Big Four as well as a larger number of professional resources. Parwati and Suhardjo (2009), Lestari (2010) and Pradana (2014) conclude the same study results, so based on the description above it can be concluded that: H4: The PAF's reputation has a negative effect on the Audit Report Lag.
RESEARCH METHOD
This study uses a causality test which aims to predict the cause and effect relationship between the variables studied (Anwar 2011, 14). The causality relationship to be tested is the effect of ROE, company size, DER and PAF reputation on Audit Report Lag.
Population and Sample
The real estate and property sector listed on the IDX in 2011-2014 is the population used in the study. There are a total of 207 companies in the population while the sampling method is purposive sampling. A sampling method based on certain characteristics. The following table is the steps in determining based on this purposive sampling :
Total Samples 156
Total period of observation On the basis of the two criteria above, sampling can be carried out in order to obtain a sample of 39 companies from a total population of 207 companies.
Data collection
This research uses quantitative data with secondary data from audited financial reports and independent auditors' reports that have been published on the IDX or can be accessed through www.idx.co.id and through the Indonesia Capital Market Directory (ICMD).
Data Analysis
To analyze the research data, several analyzes were used. The first analysis is descriptive statistical analysis with the aim of knowing the data description of the independent variables, namely ROE, company size, DER and reputation of PAF and the dependent variable, namely Audit Report Lag. After descriptive statistical analysis, the results are shown in Table 2, then as the second step of analysis, which is one of the prerequisites to be tested with multiple linear regressions, is to pass the classical assumption test. This test consists of normality test, heteroscedasticity test and multicollinearity test. These four classical assumption tests have all passed. Then the third as the most recent data analysis is multiple linear regression analysis. This analysis aims to measure the degree of relationship and direction between each independent variable and the dependent variable (Ghozali 2011, 96). Software to analyze this data using SPSS Software (Statistical Package for Social Science) version 21 Table 2 Descriptive Statistics
Multiple Linear Regression Analysis
The model to be tested in this study with multiple regression analysis is as follows: Hypothesis testing in this study uses multiple linear regression analysis using SPSS software version 21. There are four hypotheses proposed in this study. Hypothesis test results are measured at the significant level at the p-value level <0.05 or significant at the 5% level. The results of the research model hypothesis test above are shown in Table 3 as follows : Table 3 The Result of Hypothesis Test So based on the results of the multiple regression analysis test, the following equation model can be produced : = 109,885 − 2,52 1 -1,950 2 -2,13 3 + 4,935 4
RESULTS AND DISCUSSION
The results of descriptive statistics in Table 2 showed that the average Audit Report Lag variable is 78.84 or 79 days. Audit Report Lag has a range between 30 days to 107 days. With an average of 79 days, it shows that on average each manufacturing company has complied with the maximum allowable day limit, which is 90 days.
Furthermore, the results of the second descriptive statistic, namely the size of the company in Table 2 shows an average of 15.05. The smallest company size is indicated by the number 10.99, while the largest company size is 17.45. The average firm size indicates that the company is large because it is close to its maximum limit.
The results of the third descriptive statistics for the independent variable ROE in Table 2 show the minimum range is 0.27 and the maximum range is 44.24. The mean for this variable was 11.62, while the standard deviation was 8.82.
The results of the fourth descriptive statistic for the independent variable DER in Table 2 show the minimum range is 0.08 and the maximum range is 1.91. The average for this variable is 0.73. This shows that 73% of the capital of manufacturing companies is financed by debt.
The results of the fifth descriptive statistics for the independent variable Reputation of PAF in Table 2 show that real estate and property companies that use the Big Four PAF or given code 1 have a proportion of 35 companies, while real estate and property companies that use non the Big Four PAF or given code 0 it has the proportion of 89 companies.
The results of this study based on Table 3 states that ROE has a significant negative effect on Audit Report Lag. It can be seen that the significance value is 0.022. This figure is below alpha (α) 0.05 or 0.022 <0.05 so that H0 is rejected and H1 is accepted. accordance with the signaling theory where ROE is one of the signals or directions given by the company. High ROE is good news for both companies and users of information, and vice versa. The good news will make it easier for auditors and encourage auditors to immediately complete the audit process or shorten the Audit Report Lag so that the audited financial reports can be published immediately. The results of the same study were also conducted by (Hariza et al. 2012). On the other hand, different study results were obtained by (Sutapa and Wirakusuma 2013) which stated that profitability in this case measured using ROE had no effect on Audit Report Lag. The regression test results based on Table 3 indicate that company size has a significant negative effect on Audit Report Lag. It can be seen that company size has a significance value of 0.030 which is below alpha (α) 0.05 (<0.05) which means that it has a significant effect. Meanwhile, based on the value of the regression coefficient, company size has a beta value (β) of -1,950 to the Audit Report Lag. This means that if there is a one-unit increase in the company size variable and other variables are considered constant, the Audit Report Lag variable will decrease by 1.950. This means that the larger the company size, the shorter or shorter the Audit Report Lag. Thus, Hypothesis 2 (H2) is accepted and H0 is rejected. The results of the study are in accordance with signaling theory which states that company size as measured by total assets is a guideline for investors to determine the long or short duration of Audit Report Lag. Research results are in accordance with the results (Sa'adah 2013;) The regression test results in Table 3 show that DER has no significant effect on Audit Report Lag. DER, which indicates how much capital the real estate and property company is financed by debt. This indicates that the high or low DER is not a factor considered by the auditor when completing his audit assignment. In the sense that auditors do not see DER as a factor that accelerates or slows down the audit process because high leverage is a normal or normal operational financing in most companies. The results of the study are not in accordance with the study (Putra and Putra 2016; Sutapa and Wirakusuma 2013); The regression test results in Table 3 prove that PAF's reputation has a significant positive effect on Audit Report Lag. These results imply that the higher the PAF's reputation, the more Audit Report Lag will increase, in the sense that the more the number of PAFs that are affiliated with The Big Four PAF the longer the audit completion process will also lead to the length of reporting or preparation of the auditor's report. Or in other words the Audit Report Lag is getting longer too. This is because the number of PAFs that are not affiliated with Big Four PAF's is more dominant, causing an effect on the lengthening of the Audit Report Lag. This is indicated by the large number of PAF that are not affiliated with The Big Four PAF's in this study is greater, namely 89 or 71.77% so this does not show a major influence on Audit Report Lag.
CONCLUSIONS
ROE has a negative effect on the Audit Report Lag. The higher the ROE, the more it is good news for both the company and information users, and vice versa. The good news will make it easier for auditors and encourage auditors to immediately complete the audit process or shorten the Audit Report Lag.
Company size has a negative effect on Audit Report Lag. The increasing the size of the company, the shorter the Audit Report Lag and vice versa, the smaller the company size, the longer the Audit Report Lag.
DER has no effect on Audit Report Lag. This result implies that the high or low DER does not determine the length or shortness of the Audit Report Lag.
The PAF's reputation has a positive effect on the Audit Report Lag. This is because the number of PAFs that are not affiliated with Big 4 PAF is more dominant, thus causing an effect on the length of the Audit Report Lag.
The limitation of this study is that the sample of this study consists of only 39 companies. In addition, the model in this study is very weak because the resulting coefficient of determination is only 0.13 or 13%, meaning that the ability of the independent variables to explain the variation in the dependent variable is very limited. This means that there are other factors that are not tested against the Audit Report Lag.
Suggestions from this study are that future studies are suggested to test a sample of companies engaged in other sectors. In addition, it is necessary to consider using additional other independent variables that are expected to affect the Audit Report Lag such as variables of company age, industry classification, and others. Investors who will invest in Real Estate and Property companies are expected to pay more attention to ROE, company size and PAF's reputation. | 5,773.6 | 2021-03-28T00:00:00.000 | [
"Business",
"Economics"
] |
Large-magnitude (VEI ≥ 7) ‘wet’ explosive silicic eruption preserved a Lower Miocene habitat at the Ipolytarnóc Fossil Site, North Hungary
During Earth’s history, geosphere-biosphere interactions were often determined by momentary, catastrophic changes such as large explosive volcanic eruptions. The Miocene ignimbrite flare-up in the Pannonian Basin, which is located along a complex convergent plate boundary between Europe and Africa, provides a superb example of this interaction. In North Hungary, the famous Ipolytarnóc Fossil Site, often referred to as “ancient Pompeii”, records a snapshot of rich Early Miocene life buried under thick ignimbrite cover. Here, we use a multi-technique approach to constrain the successive phases of a catastrophic silicic eruption (VEI ≥ 7) dated at 17.2 Ma. An event-scale reconstruction shows that the initial PDC phase was phreatomagmatic, affecting ≥ 1500 km2 and causing the destruction of an interfingering terrestrial–intertidal environment at Ipolytarnóc. This was followed by pumice fall, and finally the emplacement of up to 40 m-thick ignimbrite that completely buried the site. However, unlike the seemingly similar AD 79 Vesuvius eruption that buried Pompeii by hot pyroclastic density currents, the presence of fallen but uncharred tree trunks, branches, and intact leaves in the basal pyroclastic deposits at Ipolytarnóc as well as rock paleomagnetic properties indicate a low-temperature pyroclastic event, that superbly preserved the coastal habitat, including unique fossil tracks.
Obtained BSE images were used for vesicularity analyses applying the nested image technique following Klug and Cashman (1994) and Shea et al. (2010). The BSE images were processed with the FIJI-ImageJ (Schneider et al. 2012) open-source image analyses software to create binary images using the built-in auto thresholding function; when it was necessary the automatic results have been manually refined.
The 2D (two dimensional) area fraction of the glass have been measured (Suplement 1, Table II). Klug and Cashman (1994) suggested that the 2D area fraction of the vesicles equals the volume fraction in three dimension and yields clast vesicularity in case of random vesicle orientation. The vesicularity index which represents the mean value of the measured vesicularity and the vesicularity range which represents the total spread of the measured values were calculated following Houghton and Wilson (1989).
Petrography and glass chemistry
Unit A shows two subfacies. Unit A_1 subfacies is a pale greyish yellow fine-grained tuff. The tuff is matrix supported with 5% crystals and rounded white micropumice clasts. The crystals are quartz, feldspar, and dark mica (Supplement 1, Fig. 1A). Unit A_2 subfacies is a whitish grey, layered, coarse-grained tuff (Supplement 1, Fig. 1B). The matrix supported tuff contains rounded, white pumice clasts and quartz, feldspar, and dark mica crystals (Supplement 1, Fig. 2A). Unit B is a dark brown fine-grained tuff containing mm sized accretionary lapilli concentrating at the base of the unit (Supplement 1, Fig. 1C). The accretionary lapilli have a well-defined core and rim (Supplement 1, Fig. 2B). This unit has diffused transition and flame structure at the base. Unit C consists of whitish grey pumiceous lapillistone (Supplement 1, Fig.1 D, E). Quartz, feldspar, and dark mica are present as phenocrysts. The pumices are angular and oriented. Unit D is a gray lapilli tuff with high number of phytogenic clasts (Supplement 1, Fig. 1F). Quartz, feldspar, and dark mica were observed as loose crystals in the matrix (Supplement 1, Fig. 2D).
Supplement 1, Table I contains the glass chemistry results measured with the EDX detector of the AMRAY electron microscope. The measured glass composition was used only for relative comparison of Unit A and Unit C. The SiO 2 /Al 2 O 3 ratio of Unit A is slightly higher compared to Unit C, but this difference is negligible. SiO 2 /Al 2 O 3 vs Na 2 O/K 2 O ratios of the glass indicate homogenous major element melt geochemistry for these units.
Vesicularity
BSE image analysis was effective in characterizing Unit A and Unit C. Samples from these units contained appropriate pumice clasts for vesicularity analyses. The vesicles of the pumices from these samples were studied to understand the main conduit processes as degassing and fragmentation during the eruption.
Most of the studied clasts of Unit A and Unit C are highly and moderately vesicular (Supplement 1, Table II and Supplement 1, Fig. 4B) according to the Houghton and Wilson (1989) classification. The Unit A sample also contains poorly vesicular clasts population. The larger vesicularity range can indicate heterogenous and mature, partly degassed conduit at the time of fragmentation (e.g. Cashman 2004). However, the size dependent vesicularity analyses of Unit A (Supplement 1 Fig. 4A) indicates logarithmic correlation between clast size and vesicularity, in other words the poorly vesicular clasts are only represented by small sized platy and flaky ash while the larger pumice clasts (> 500 µm) are highly-moderately vesicular similar to the pumices of Unit C. Based on Walker (1980) and Houghton and Wilson (1989), the vesicularity of the clasts increases as the size of the clasts converges to the diameter of the vesicles. Therefore, the broad range of vesicularity in Unit A, especially the poor vesicularity, is only apparent, and the poorly vesicular clasts are interpreted as testifying the strongly fragmented material of moderately/highly vesicular magma. This also suggests that the pre-fragmentation vesicularity of Unit A and Unit C magma was similar, indicating comparable decompression history for both units, but with a more effective fragmentation in the case of Unit A. We propose that similarly to the Askja 1875 (Carey 2009) or Grímsvötn 2011 (Liu et al. 2015) eruptions, in the case of Unit A the already vesiculated expanding magma fragmented more efficiently forced by the explosive magma-water interaction.
Phreatomagmatic fragmentation occurs due to magma and water interaction in the conduit. The involvement of water during the fragmentation produces fine-grained deposit in contrast to the magmatic volatile-driven, dry fragmentation (e.g., Wolhetz 1986, Austin-Erickson et al. 2008, Németh & Kósik 2020. The lower vesicularity index and higher vesicularity range in Unit A tuff indicates magma-water interaction during the early stages of the Ipolytarnóc eruption. The involvement of water is also supported by the high amount of fine ash in Unit A and abundant presence of accretionary lapilli in Unit B (see Supplement 1 Fig. 2B), which is probably a co-PDC plume product deposited on the top of Unit A PDC (Pyroclastic Density Current) deposit (Schumacher & Schminke, 1995). The relative abundance of highly vesicular clasts in Unit A suggests late-stage, explosive magma-water interaction of the already degassed expanding magma which was near to or probably just above its fragmentation threshold. It shall be noticed that in contrast to Unit A and B, Unit C vesicularity distribution and two-dimensional vesicle textures (Supplement 1 Fig 3. A-E) indicates dry fragmentation and falls into the range measured for large Plinian eruptions (Cashman 2004). As field observations suggest, Unit C deposited directly on the top of Unit B with sharp boundary, without any signs of intereruptive erosion indicating lack of longer quiescence (Supplement 1 Fig. 2 of main text). Thus, during the Eger-Ipolytarnóc eruption the initial phreatomagmatic phase (Unit A, B) has been followed by a dry magmatic phase represented by Unit C fallout deposit. The transition between these phases was sharp. The sharp transition between wet, phreatomagmatic, and dry magmatic fragmentation mode can be interpreted as a result of a) the depletion of available water supply (e.g., caldera lake) or b) vent position shifting similar to the eruptions of Askja in 1875 or Taupo in 232 (Carey et al. 2009. | 1,730.4 | 2022-06-13T00:00:00.000 | [
"Geology",
"Environmental Science"
] |
Low-Frequency-Noise Attenuation through Extended-Neck Double-Degree-of-Freedom Helmholtz Resonators
: The use of acoustic liners, based on double-degree-of-freedom Helmholtz resonators, for low-frequency-noise attenuation is limited by the volume of individual resonating cavities. This study investigates the effect of the septum neck length on the acoustic performance of double-degree-of-freedom resonators, both experimentally and numerically, for varying cavity volume ratios. The underlying sound attenuation mechanism is studied by analysing the acoustic pressure fields within the resonator cavities. An increase in the septum neck is shown to lower the frequencies affected by the resonator. In addition, it deteriorates and significantly improves the sound attenuation performance at the primary and secondary peak transmission-loss frequencies
Introduction
Aero-engines have gone through considerable development over the past decades to significantly reduce jet engine noise through the use of high bypass ratios [1].However, an increase in the bypass ratio of the engines has led to a decrease in the blade passing frequency (BPF) which necessitates the need for aero-engine acoustic liners to attenuate lower frequencies and target the BPF.Low-frequency-sound attenuation remains a challenge for the scientific community due to its inherited physical characteristics, as low-frequency content can travel farther from its source with less attenuation compared to high-frequency content.Helmholtz resonator sound attenuation depends on various geometrical parameters and is governed by Equation (1), where the speed of sound in air is given by c, S neck is the neck opening area, V cavity is the volume of the resonating cavity, and l neck is the length of the neck.The end-correction factor δ neck compensates for any discontinuities inside the resonator that result in the formation of higher-order modes [2,3].Equation (1) suggests an increase in the resonating cavity volume or decrease in the neck-opening area, to target low-frequency sound.However, both these characteristics would be undesirable for an acoustic liner due to size, weight, and manufacturing constraints.The effect of the changing neck geometry and location has been studied extensively in the past [4][5][6][7][8][9][10][11][12][13][14][15][16][17][18]; however, the influence of extending the neck into the Helmholtz resonator cavity has not been investigated in detail.Past research has shown neck extensions leading to lower-frequency sound attenuation [19][20][21][22][23], due to the introduction of multiple resonance frequencies, when the neck extensions become significantly large [24][25][26][27].Following the recent surge in studies such as Simon et al. [28] and Jones et al. [29] investigating the use of extended-neck resonators for low-frequencysound absorption, this paper presents the underlying mechanism of sound attenuation from double-degree-of-freedom (2-DoF) resonators with varying neck extensions.
Experimental and Numerical Methodology
The experiments were performed at the University of Bristol Grazing Flow Impedance Tube Facility (UoB-GFIT) which is illustrated in Figure 1a.The impedance tube's internal duct has a square cross-section of 50.4 mm × 50.4 mm and measures 3000 mm in length.The test section, 762 mm in length, is placed at a distance of 23 tube hydraulic diameters from the acoustic source, in order to ensure plane wave propagation.The facility is intended to allow experiments both with and without airflow in the duct.However, experiments in this study have been conducted without airflow.In the presence of airflow, a 3000 mm long diffusing section can be added to the facility to minimize acoustic reflections into the test section by promoting a gradual reduction in air velocity along the length of the section.The impedance tube acoustic source consisted of two BMS 4592ND compression drivers, capable of generating sound pressure levels of up to 130 dB in the test section.A Tektronix AFG3011C arbitrary waveform generator (Beaverton, OR, USA) was utilised to produce a white-noise excitation with an amplitude of 10 V pp .G.R.A.S. 40PL free-field array microphones were used to obtain the noise data in the tube.A National Instruments PXIe-1082 data acquisition system (Austin, TX, USA) with a PXIe-4499 sound and vibration module was used to acquire data.The data acquisition code enabled the utilisation of Matlab R2016a to interface between the data acquisition device and the signal generator.
The experiments were performed at a sampling rate of 2 15 Hz for 16 s in order to satisfy the Nyquist criterion.The two upstream (G1-G2) and downstream microphones (G19-G20) (see Figure 1a) were used to collect data for transmission loss and transmission coefficient eduction following the ASTM E2611-19 standard [30].The spacing between the microphones was determined by the frequency range of interest.The distance between the upstream microphones (G1 and G2) and downstream microphones (G19 and G20) was selected following the ASTM E2611-19 standard [30] as, 0.01c where s is the spacing, f upper and f lower are the upper and lower frequency limits, respectively.The distance between the upstream microphones (G1 and G2) and downstream reference microphones (G19 and G20) was set at 40 mm, which ensured sound waves were captured accurately between 85 Hz to 3400 Hz.The effect of test samples on the acoustic field propagation in the tube was captured using an array of 16 G.R.A.S. microphones, G3 to G18.The spacing between these microphones was set at 25 mm, which enabled capturing frequencies between 137.5 Hz and 5488 Hz.The Helmholtz resonator test samples used in this study, shown in Figure 1, were manufactured using stereolithography (SLA), in an Elegoo Mars 2 Pro Mono LCD Resin 3D printer.The 3D-printed samples were flush-mounted to one of the side walls of the impedance tube, with Cavity 1's neck receiving the incoming acoustic field.The resonator was manufactured as a single chamber with a rectangular cross-section of 22 mm × 40 mm and a depth of 46.8 mm.In addition, the sidewalls of the resonators consisted of indents to enable the insertion of a septum.Moreover, a key-lock relation was designed into the resonator covers to ensure sealing.
Three different types of resonator septum configurations were tested in this study, namely, a baseline two-degree-of-freedom configuration (2-DoF) shown in Figure 1b, a septum neck extension towards both Cavity 1 and Cavity 2 (Case A), shown in Figure 1c, and a septum neck extension towards Cavity 2 only (Case B), shown in Figure 1d.The 2-DoF resonators consisted of two cavities with volumes V 1 and V 2 .The volumes could be adjusted by changing the location of the septum.The ratio between V 1 and the entire chamber volume (V 1 + V 2 ) is defined by the parameter "m" and is called the volume ratio.
For example, a volume ratio of m = 0.3 suggests that the volume V 1 for Cavity 1 is 30 percent of the entire chamber volume.The experiments in this study were conducted for a range of volume ratios 0.3 < m < 0.7.In addition, two different septum neck extension configurations for both one-sided and double-sided extensions were studied in detail.The extension of the septum neck t 1 or t 2 was represented as a ratio with the total chamber length (L).Therefore, a septum neck extension of t 1 = 0.1 would mean that the one-sided neck extension towards Cavity 2 was 10 percent of the total chamber length (L).
Case -Baseline
Case The acoustic signals for Cavity 1 and Cavity 2 were obtained using a pair of Knowles (Santa Clara, CA, USA) Omni-Directional (FG) 2.56 DIA electret condenser microphones (FG-23329-P07) flush-mounted to the internal walls of both cavities for all resonator volume ratio and neck extension configurations.Microphones M1 and M2, as shown in Figure 1, were utilised to capture Cavity 1 and Cavity 2 signals, respectively.Prior to the experiments, the microphones were magnitude-and phase-calibrated using a GRAS 40PL free-field microphone as the reference pressure transducer, with a known sensitivity value.The calibration procedure followed that outlined by Ali [31].For a frequency range of 100 Hz to 3000 Hz, the calibration results indicated a phase shift of less than 7 • , with a fairly consistent amplitude sensitivity, which is not shown here for brevity.
The finite element analysis conducted in this study employed COMSOL Multiphysics 5.5 TM [32], a widely used commercial software package.The primary focus of these simulations was to gather data within the acoustic domain and calculate transmission coefficients and losses for different test sample configurations.The overarching goal was to closely emulate the outcomes of experimental tests, thereby ensuring the dependability of the simulation results for a subsequent acoustic field analysis of both the impedance tube and resonators.In order to enhance the precision of representing the physical phenomena, the acoustic domain was partitioned into two discrete regions.The region encompassing the resonator setups was modelled using a thermoviscous physics approach, while the sections before and after the resonators were described using a simplified pressure acoustics model.These two regions were linked through a multiphysics interface.It is important to note that the presence of boundary layers, especially in confined areas such as the resonator necks, can lead to substantial thermal and viscous losses.Therefore, to address these effects, the simulation incorporated the thermoviscous interface, utilizing COMSOL's thermoviscous acoustic module to solve linearized Navier-Stokes equations and comprehensively address continuity, momentum, and energy equations.
The computational domain of the impedance tube was discretized through the utilization of a freely structured triangular mesh.For regions employing the thermoviscous acoustics model, a boundary layer mesh configuration was employed.To facilitate the analysis, an acoustic source was introduced at one end of the impedance tube domain, employing a port boundary condition to establish an incident wave at the upstream boundary.Conversely, another port boundary condition was employed at the opposite end of the impedance tube domain, serving as an acoustic termination to enforce a nonreflecting condition for the waveguide.The mesh resolution was determined by setting a maximum element size equal to one-sixth of the wavelength corresponding to the highest frequency of interest (3000 Hz).Additionally, a minimum element size was established at one-tenth of the wavelength associated with this highest frequency.The mesh refinement strategy incorporated a maximum element growth rate of 1.2, with a curvature factor of 0.3.Furthermore, precise resolution was achieved in the narrow regions of interest by adjusting the number of layers to a value of 3. The computational analysis involved the determination of the acoustic transmission loss within the simulation domain, computed as follows, where In the above equations, c 0 denotes the speed of sound (343 m/s), ρ signifies the air density (1.173 kg/m 3 ), p represents the estimated pressure parameter, p o,s corresponds to the inlet pressure set at 1 pascal, and W in and W out denote the computed total acoustic power values at the inlet and outlet regions, respectively, calculated over the respective port areas S in and S out .
In the context of transient simulations, a model for transient pressure acoustics was employed for the impedance tube model, with the exception of the region containing the Helmholtz resonator, where a transient thermoviscous model was utilized.The Transient Pressure Acoustics Model node incorporates equations tailored for the simulation of predominantly time-dependent (transient) acoustics.It deals with the scalar wave equation, where p t represents the complete acoustic pressure, ρ corresponds to the fluid density, c denotes the speed of sound, q d is attributed to the dipole domain source, and Q m signifies the monopole domain source.This formulation of the wave equation allows for the possibility that the speed of sound and density can be spatially dependent.However, it was assumed that these properties changed relatively slowly over time, particularly when compared to the variations in the acoustic signal.
The mesh parameters, including element order, size, and type remained consistent with those employed in steady-state simulations.To simulate an incident pressure plane wave, a background pressure field node with an initial pressure amplitude of 1 pascal, denoted as p in , was introduced.To prevent acoustic reflections and ensure anechoic termination at both ends of the impedance tube, perfectly matched layers (PMLs) were applied.Two distinct time scales were utilized in the transient simulations: one corresponding to the frequency of the incoming pressure wave and another governing the time step used by the numerical solver.Time integration was achieved by employing the generalised alpha method [33], with a time step size set to T/60, where T represented the time period of the acoustic wave.The simulations were conducted at resonant frequencies tailored to various resonator configurations, each running for a duration of 30T to achieve convergence.In order to obtain pressure data within the resonator cavities, 50 domain probe points were placed strategically along the resonator's length.These probe points were evenly spaced at 1 mm intervals, with the initial point situated 10 mm below the resonator's opening.The schematics for both the steady and transient simulation are shown in Figure 2a with a detailed view of the mesh in the narrow regions, i.e., the neck of the resonator sample shown in Figure 2b.The transient simulations were utilised to extract the time sensitivity of the acoustic pressure and velocity within the impedance tube duct and the resonator test samples.The extracted information could be presented in the form of contour pressure and velocity maps which aid in better visualising the change in acoustic field within the whole system.
Results and Discussion
A comparison of the transmission coefficient and transmission loss induced by the 2-DoF and 2-DoF extended-neck configurations is presented in Figure 1e and 1f, respectively, for a fixed volume ratio m = 0.5.The results illustrate the primary ( f 1 ) and secondary ( f 2 ) peak transmission-loss frequencies for the 2-DoF resonator configurations, which are consistent with findings by Xu et al. [34].In addition, it can be seen that for a fixed volume ratio (m), the peak transmission-loss frequencies of a Helmholtz resonator could be altered significantly by extending the septum neck length, resulting in the resonator targeting lower frequencies.The direction of the septum neck extension, i.e., towards Cavity 2 (t 1 ) or both cavities (t 2 ), also reduced the peak transmission-loss frequencies; however, this effect was not significant.
The effect of neck extensions on both peak transmission-loss frequencies is better visualised in Figure 3a,b, which presents the two frequencies for an extended-neck resonator configuration normalised with the peak transmission-loss frequencies of the baseline 2-DoF configuration ( f / f 2−DoF ).The results clearly indicate a substantial decrease in f 1 as the neck extension is increased from t 1 , t 2 = 0.1 to t 1 , t 2 = 0.3, for both Case A and Case B. However, as the volume ratio increases, the peak transmission-loss frequency approaches that of a baseline 2-DoF resonator, a trend consistent with the findings of Gautam et al. [35] for a standard 2-DoF resonator.The opposite can be seen for f 2 , in Figure 3b, where the peak transmission-loss frequency decreases further with an increasing volume ratio and increasing neck extension.It can also be seen that the direction of the neck extension does not have a considerable effect on the peak transmission loss frequencies.The observations are similar for Cases A and B. In order to assess the effect of neck extensions on the range of frequencies affected, a nondimensional bandwidth coefficient was defined as follows, where ∆ f n,Case is the bandwidth of frequencies obtained at a transmission coefficient of 0.75, n is the order of the peak transmission-loss frequencies and "Case" refers to Case A or Case B. The same value for the baseline 2-DoF resonator is given by ∆ f n,Baseline , as shown in Figure 1e.The bandwidth coefficient for the different resonator configurations, at f 1 and f 2 , is presented in Figure 3c and 3d, respectively.Similarly, the magnitude of sound attenuated by different configurations is presented by defining another nondimensional parameter, the normalised transmission loss (TL), which is defined as, The normalised TL for the different resonator configurations, at f 1 and f 2 , is presented in Figure 3e and 3f, respectively.The bandwidth coefficient and normalised TL results at f 1 and f 2 illustrate a similar behaviour for all resonator configurations.There is a significant reduction in both the magnitude of sound attenuated and frequency bandwidth affected, at f 1 , as the neck extension is increased.However, as the volume ratio is increased, both parameters approach that of a baseline 2-DoF resonator.The opposite is evident at f 2 , where both parameters significantly increase with the increasing neck extension and volume ratio.This shows the significant effect of changing the septum neck length on improving the resonators' sound absorption capacity at f 2 .The finite element results matched the experimental findings, but small differences could be due to difficulties in accurately replicating the experimental boundary conditions.The resonators were assumed to have a hard wall in the finite element analysis simulations, which may not be the case for the test samples, which were 3D-printed.In addition, it was also evident that the direction of the septum neck extension had no significant effect on the performance of the resonator.Therefore, all further analysis is focused on the "Case A" neck extension direction configuration.The trends in Figure 3 and the inverse relation of the peak transmission-loss frequencies with the volume and neck length (Equation ( 1)) illustrate that variations in the volume ratio and the length of the septum neck have an influence on the acoustic environment within the resonator cavities, which leads to the different noise attenuation characteristics.In order to characterize this influence on the acoustic pressure field inside these cavities, transient finite element simulations were conducted using commercially available software COMSOL Multiphysics 5.5 TM .These simulations were performed at both frequencies f 1 and f 2 .Figure 4 presents the internal cavity acoustic pressure field, at f 1 , for the t 2 = 0.3 configuration, in comparison with a standard 2-DoF resonator, at different volume ratios.Figure 5 presents similar results but at f 2 .The acoustic field data were obtained via domain probe points placed along the centreline of the resonator, shown by the dotted red lines in Figures 4 and 5.The pressure field results at f 1 illustrate both cavities being excited in phase, with Cavity 2 being the dominant cavity (higher pressure magnitude), regardless of the volume ratio and neck extension.In addition, an increase in the volume ratio shifts the trend towards an equalised pressure magnitude between the two cavities.Moreover, increasing the septum neck length increases the pressure magnitude within Cavity 2 and reduces the magnitude in Cavity 1.The pressure field results at f 2 , shown in Figure 5, illustrate an interesting behaviour, with both cavities being excited out of phase and the pressure magnitude being concentrated in Cavity 1 at lower volume ratios, shifting to Cavity 2 at higher volume ratios.In addition, the septum neck extension leads to an increase in pressure magnitude in Cavity 1, regardless of the volume ratio.Recall that an opposite behaviour was observed at f 1 where Cavity 1's pressure decreased with the increasing septum neck extension.The incoming sound field reaching a resonator neck is efficiently scattered and absorbed at the resonance frequency of the Helmholtz resonator.This is enabled by a very low impedance around the resonator neck at its resonance frequency, which makes the neck act as a pressure release surface [36].Since Cavity 1 directly interacts with the incoming acoustic field, it acts as the pressure release surface, and a reduction in acoustic pressure within Cavity 1 would lead to a loss in the acoustic performance of the resonator at f 1 , with increasing neck extension.Conversely, an increase in Cavity 1's pressure at f 2 would improve the resonator performance, which is evident in Figure 3.The finite element analysis results for the acoustic pressure field within the resonator cavity illustrated interesting underlying mechanisms which may be related to the sound attenuation behaviours observed; however, these needed to be validated with experimental results.Recall that every resonator configuration was instrumented with two microphones to measure the acoustic pressure field in both Cavity 1 (microphone M1) and Cavity 2 (microphone M2), as shown in Figure 1. Figure 6 presents the sound pressure level (SPL) inside each cavity, at f 1 and f 2 , for volume ratios of m = 0.3, 0.5, 0.7 plotted against the septum neck extension (t 2 ).The experimental results are consistent with the findings of the finite element transient simulations.The SPL within Cavity 1, at the primary peak transmission-loss frequency f 1 , can be seen to decrease with an increasing septum neck extension as well as an increasing volume ratio.The opposite is true for the SPL observed within Cavity 2, which increases with the increasing volume ratio and neck extension length.The results for the secondary peak transmission-loss frequency f 2 are also consistent with the transient simulations, with Cavity 1 having a concentration of acoustic pressure at lower volume ratios and Cavity 2 having a higher concentration at larger volume ratios.In addition, SPL within Cavity 1 increases significantly with an increase in septum neck extension, whereas the opposite is true for SPL within Cavity 2 until m = 0.5.At m = 0.7 and t 2 = 0.3, both Cavity 1 and Cavity 2 can be seen to have a significantly higher concentration of acoustic pressure compared to all other volume ratio and neck extension cases.The relationship in phase difference (ϕ) between the two cavities within the 2-DoF resonator setups can also offer valuable insights into how an extended septum neck and changes in volume ratio might impact the resonator's ability to attenuate sound.Crossspectral calculations between the data acquired from microphones M1 and M2 were used to obtain the relative phase between the two cavities.The relative phase data for the three resonator septum neck extension configurations at volume ratios of m = 0.3, 0.5, and 0.7 are presented in Figure 6.The first ( f 1 ) and second ( f 2 ) peak transmission-loss frequencies for each septum neck extension configuration are marked by a coloured triangle and a coloured circle, respectively.The results indicate that increasing the septum neck extension reduces the bandwidth of frequencies within which the two cavities are in phase.At m = 0.3, shown in Figure 6g, the standard 2-DoF resonator cavities are in phase up to around 1000 Hz, whereas for the t 2 = 0.3 configuration, the two cavities are in phase up to 500 Hz.As the volume ratio increases, this bandwidth becomes even larger.At m = 0.7 (Figure 6i), the standard 2-DoF resonator cavities are in phase up to around 1600 Hz compared to the t 2 = 0.3 configuration, where the cavities are in phase up to 900 Hz.The loss in resonator performance at f 1 and the improvement at f 2 , seen in Figure 3, may be attributed to this relative phase behaviour.Recall that at f 1 , both cavities resonate in phase whereas at f 2 they resonate out of phase.Therefore, an increase in the septum neck extension promoting an out-of-phase behaviour may be the reason for the improved resonator performance at f 2 .
The relative phase of microphones M1 and M2 and the microphone directly opposite to the resonator neck opening (G8), flush-mounted to the side wall of the test section, may also shed some light on the effect of the different resonator configurations on the sound field in the test section.The relative phase of microphones M1 and G8, for volume ratios m = 0.3, 0.5, and 0.7, is presented in Figure 7a, 7b, and 7c, respectively.The same data but for the relative phase between microphones M2 and G8 are presented in Figure 7d-f.In addition, the areas of interest around the peak transmission-loss frequencies are presented within shaded regions in the figure.The results for the relative phase between Cavity 1 (microphone M1) and the impedance tube duct (microphone G8) show the cavity's acoustic field and impedance tube's acoustic field being out of phase near the peak transmission-loss frequencies.This would lead to destructive interference and attenuate the sound field.The bandwidth of the out-of-phase frequencies, around f 2 , increases as the septum neck is increased, which may lead to the improvement in the resonator performance at f 2 , as seen in Figure 3d,f.The opposite is observed for the relative phase between Cavity 2 (microphone M2) and the impedance tube (microphone G8).An increase in the septum neck extension leads to a reduction in the bandwidth of frequencies at which the cavity and the duct acoustic field are out of phase.This reduction in the frequency bandwidth may aid in the improvement in the resonator performance at f 2 , leading to a destructive interference between Cavity 1 and Cavity 2.
Conclusions
The effect of the septum neck length on the sound attenuation performance of a 2-DoF Helmholtz resonator with different internal volume ratios was studied experimentally and numerically.Experiments were performed in a grazing flow impedance tube with 3D-printed test samples.The results showed that increasing the septum neck length of a 2-DoF resonator significantly affected both the primary ( f 1 ) and secondary ( f 2 ) peak transmission-loss frequencies.Both f 1 and f 2 were reduced significantly with the increasing septum neck length at lower volume ratios.However, an increase in the volume ratio of the resonator increased f 1 but decreased f 2 even further.The induced transmission loss and the targeted bandwidth of frequencies at f 1 reduced with the increasing septum neck length but increased with the increasing volume ratio, albeit remaining below the baseline 2-DoF resonator values.However, at f 2 , both the induced transmission loss and the targeted range of frequencies significantly increased with increasing neck length.In addition, the induced transmission loss and the targeted range of frequencies increased further at higher volume ratios.Finite element analysis results were utilised to assist the understanding of the underlying sound attenuation mechanism by investigating the acoustic field inside the resonator cavities.The acoustic pressure results for f 1 showed that Cavity 2 acted as the dominant resonant cavity, and the acoustic pressure magnitude increased as the septum neck extension length increased.This led to a decrease in Cavity 1's acoustic pressure, which subsequently resulted in a deteriorated sound attenuation performance at f 1 .The results at f 2 showed that Cavity 1 acted as the dominant cavity at lower volume ratios, and as the volume ratio increased, Cavity 2 became dominant with an increased pressure magnitude.Moreover, the Cavity 1 acoustic pressure magnitude increased with the increasing septum neck extension, which may have led to the improved resonator performance observed at f 2 .The SPL data collected from experiments using flush-mounted microphones placed inside the resonators matched the findings from the finite element analysis.This alignment reinforced the observed acoustic characteristics within the resonator cavities.Furthermore, the relative phase measurements between the resonator cavities and the impedance tube duct demonstrated a more pronounced out-of-phase behaviour as the septum neck extension increased.This outcome further confirmed the observed trend in sound performance.The results of this study suggest that 2-DoF resonators can be adapted to achieve a specific low-frequency-noise attenuation behaviour, by altering septum neck length, for application in modern high-bypass-ratio engines without altering the dimensions of the engine nacelle.Moreover, observations from this study can be used to design acoustic liners with improve low-frequency-noise attenuation characteristics.
Figure 1 .
Figure 1.Schematics of the following components are depicted: (a) a grazing flow impedance tube, (b) a baseline resonator with two degrees of freedom (2 DoF), (c) a 2-DoF resonator with a two-sided septum neck extension (referred to as Case A), (d) a 2-DoF resonator with a one-sided septum neck extension (referred to as Case B), (e) comparative analyses encompassing experimental and finite element analysis results for the transmission coefficient arising from 2-DoF resonators and 2-DoF resonators with extended septum neck configurations, and (f) comparative analyses incorporating experimental data and finite element analysis results for the transmission loss from 2-DoF resonators and 2-DoF resonators with extended septum neck configurations.The septum neck extension t 1 = t 2 = 0.3 and m = 0.5.
PFigure 2 .
Figure 2. The numerical setup used in this study: (a) schematics of the steady and transient simulation; (b) detailed image of the mesh in the neck of a resonator sample.
Figure 3 .
Figure 3. Experimental and finite element analysis results for (a) peak transmission-loss frequencies comparison with changing volume ratio (m) at f 1 , (b) peak transmission-loss frequencies comparison with changing volume ratio (m) at f 2 , (c) change in bandwidth coefficient with changing volume ratio at f 1 , (d) change in bandwidth coefficient with changing volume ratio at f 2 , (e) normalised transmission loss comparison with changing volume ratio (m) at f 1 , (f) normalised transmission loss comparison with changing volume ratio (m) at f 2 .
Figure 6 .
Figure 6.SPL of acoustic signal for Cavity 1 (C1) and Cavity 2 (C2) captured using microphones M1 and M2, for different septum neck extension configurations of t 2 = 0, 0.1 and 0.3: (a) volume ratio m = 0.3 at f 1 ; (b) volume ratio m = 0.5 at f 1 ; (c) volume ratio m = 0.7 at f 1 ; (d) volume ratio m = 0.3 at f 2 ; (e) volume ratio m = 0.5 at f 2 ; (f) volume ratio m = 0.7 at f 2 .Phase of the pressure signals from microphone M2 relative to M1 for different septum neck extension configurations of t 2 = 0, 0.1, and 0.3: (g) volume ratio m = 0.3; (h) volume ratio m = 0.5; (i) volume ratio m = 0.7.The first peak transmission-loss frequency, f 1 , is illustrated by a coloured triangle and the second peak transmission-loss frequency, f 2 , is illustrated by a coloured circle.
Figure 7 .
Figure 7. Phase of the pressure signals from microphones M1 and M2 relative to G8 for different septum neck extension configurations of t 2 = 0, 0.1, and 0.3: (a) phase of M1 relative to G8 for a volume ratio m = 0.3; (b) phase of M1 relative to G8 for a volume ratio m = 0.5; (c) phase of M1 relative to G8 for a volume ratio m = 0.7; (d) phase of M2 relative to G8 for a volume ratio m = 0.3; (e) phase of M2 relative to G8 for a volume ratio m = 0.5; (f) phase of M2 relative to G8 for a volume ratio m = 0.7. | 6,972 | 2023-12-03T00:00:00.000 | [
"Physics",
"Engineering"
] |
Uncomputably complex renormalisation group flows
Renormalisation group methods are among the most important techniques for analysing the physics of many-body systems: by iterating a renormalisation group map, which coarse-grains the description of a system and generates a flow in the parameter space, physical properties of interest can be extracted. However, recent work has shown that important physical features, such as the spectral gap and phase diagram, may be impossible to determine, even in principle. Following these insights, we construct a rigorous renormalisation group map for the original undecidable many-body system that appeared in the literature, which reveals a renormalisation group flow so complex that it cannot be predicted. We prove that each step of this map is computable, and that it converges to the correct fixed points, yet the resulting flow is uncomputable. This extreme form of unpredictability for renormalisation group flows had not been shown before and goes beyond the chaotic behaviour seen previously.
Understanding collective properties and phases of many-body systems from an underlying model of the interactions between their constituent parts remains one of the major research areas in physics, from high-energy physics to condensed matter. Many powerful techniques have been developed to tackle this problem. One of the most farreaching was the development by Wilson 1,2 of renormalisation group (RG) techniques, building on early work by others 3,4 . At a conceptual level, an RG analysis involves constructing an RG map that takes as input a description of the many-body system (e.g., a Hamiltonian, or an action, or a partition function, etc.), and outputs a description of a new many-body system (a new Hamiltonian, or action, or partition function, etc.), that can be understood as a "coarse-grained" version of the original system, in such a way that physical properties of interest are preserved but irrelevant details are discarded.
For example, the RG map may "integrate out" the microscopic details of the interactions between the constituent particles described by the full Hamiltonian of the system. This procedure generates a coarse-grained Hamiltonian that still retains the same physics at larger length scales 5 . By repeatedly applying the RG map, the original Hamiltonian is transformed into successively simpler Hamiltonians, where the physics may be far easier to extract. The RG map therefore produces a dynamic map on Hamiltonians, and consecutive applications of this map generate a "flow" in the space of Hamiltonians. Often, the form of the Hamiltonian is preserved, and the RG flow can be characterised by the trajectory of the parameters describing the Hamiltonian.
The development of RG methods has not only allowed sophisticated theoretical and numerical analysis of a broad range of manybody systems. It also explained phenomena such as universality, whereby many physical systems, apparently very different, exhibit the same macroscopic behaviour, even at a quantitative level. This is explained by the fact that these systems "flow" to the same fixed point under the RG dynamics.
For many condensed matter systems-even complex strongly interacting ones-the RG dynamics are relatively simple, exhibiting a finite number of fixed points to which the RG flow converges. Hamiltonians that converge to the same fixed point correspond to the same phase, so that the basins of attraction of the fixed points map out the phase diagram of the system. However, more complicated RG trajectories are also possible, including chaotic RG flows with highly complex structure [6][7][8][9][10] . Nonetheless, as with chaotic dynamics more generally, the structure and attractors of such chaotic RG flows can still be analysed, even if specific trajectories of the dynamics may be highly sensitive to the precise starting point. This structure elucidates much of the physics of the system [11][12][13] . RG techniques have become one of the most important technique in modern physics for understanding the properties of complex manybody systems.
On the other hand, recent work has shown that determining the macroscopic properties of many-body systems, even given a complete underlying microscopic description, can be even more intractable than previously anticipated. In fact, refs. 14-16 showed that this goal is unobtainable in general: they engineered a quantum many-body Hamiltonian whose spectral gap, phase diagrams and any macroscopic property characterising a phase are uncomputable. These results imply that any RG technique which we may apply to this specific system in order to characterise the spectrum and other properties is bound to fail: there can be no RG scheme-or even more broadly, no algorithm-that can answer the spectral gap problem. Yet, it is unclear how such a negative result will emerge. In principle, this obstacle may be because there does not exist an RG map which can compute a coarse-grained version of an intractable Hamiltonian, or which cannot retain its macroscopic properties at every iteration, or again whose fixed points are not well-defined (or do not exist to begin with).
Results
We denote a 2D L × L lattice as Λ(L), and the minimum eigenvalue of a Hamiltonian H (the ground-state energy) as λ 0 (H). After some RG procedure, we denote the renormalised Hamiltonian R(H), and after k-iterations of the RG procedure R (k) (H). We also denote BðHÞ to be the set of bounded operators acting on Hilbert space H.
The family of Hamiltonians we will consider is that from ref. 14, which are a set of translationally invariant, nearest neighbour, 2D spinlattice models with open boundary conditions defined on Λ(L). The Hamiltonians are parametrised by single parameter φ, and hence the set can be written as fHðφÞg φ2Q . Each lattice site is associated with a spin system with local Hilbert space of dimension d, C d . The property of interest is the spectral gap, which is defined as the energy gap between the first excited state energy and the ground-state energy. Importantly, it is shown that as the lattice size goes to infinity, any Hamiltonian in this family must either have a spectral gap >1/2 or be gapless. However, determining which case occurs is undecidable.
Our main result is an explicit construction of a renormalisation group mapping for this Hamiltonian with the following features: Theorem 1 (Uncomputability of RG Flows (informal)) We construct an RG map for the Hamiltonian of Cubitt, Pérez-García and Wolf 14 which has the following properties: 1. The RG map is computable at each renormalisation step. 2. The RG map preserves whether the Hamiltonian is gapped or gapless, as well as other properties associated with the phase of the Hamiltonian. 3. The Hamiltonian is guaranteed to converge to one of two fixed points under the RG flow: one gapped, with low-energy properties similar to those of an Ising model with field; the other gapless, with low-energy properties similar to the critical XY-model. 4. The behaviour of the Hamiltonian under the RG mapping, and which fixed point it converges to, are uncomputable.
The undecidability of the fixed point follows implicitly from the undecidability of the spectral gap 14,15 , since the fixed point depends on the gappedness of the unrenormalised Hamiltonian. Theorem 1 demonstrates that the renormalisation process fails, but not because it is impossible to construct a well-defined RG mapping: the actual reason is that the trajectory of the Hamiltonian under repeated applications of the RG mapping is itself uncomputable. Consequently, determining the fixed point that the trajectory eventually converges to is itself undecidable. This is despite each individual step of the RG process being computable.
We note a subtlety in the statement of Theorem 1. It is important that we are able to explicitly construct the RG scheme, rather than just prove the existence of such an RG scheme. If only existence were proven, it would leave open the possibility that finding the RG scheme is itself an uncomputable task, thus meaning it cannot actually be determined.
The Cubitt, Pérez-García and Wolf Hamiltonian
Before outlining our RG construction, we review some of the important features of the Hamiltonian from refs. 14, 15 used to prove the undecidability of the spectral gap. The Hamiltonian can be written as:
History states
To understand the structure of the H u (φ) ground state, we must first review how computation can be encoded in Hamiltonians and their ground states using history states. A quantum Turing Machine (QTM) is a model of quantum computation based on classical Turing Machines (TMs). Much like a classical Turing Machine, a QTM consists of a tape split up into cells, such that the cell is either empty or contains a symbol from an allowed set. The QTM also has a control head which moves along the tape. The head updates the tape at each time step depending on its internal state and the symbol currently written on the tape. The significant difference with respect to a classical TM is that the head and tape of a QTM can be in a superposition of states. The updates to the QTM and tape configuration are then described by a transition unitary, U, such that the overall state of the QTM updates as |ψ ! U|ψ at each time step. Given a particular QTM, using a construction of Gottesman and Irani 17 , it is possible to encode the evolution of the QTM in the ground state of a specially constructed 1D nearest neighbour, translationally invariant Hamiltonian. In particular, the ground state is known as a history state and it encodes T steps of the QTM computation. Here T is a predefined and fixed function of the Hamiltonian's chain length determined by the particular QTM-to-Hamiltonian mapping. If the state of the QTM and its tape at time t is |ψ t , then the history state is For the QTM-to-Hamiltonian mapping we are interested in, T is an increasing function of the history state length, T = T(L) = Ω(2 L ). Thus, longer-length history states encode more computational time steps.
The ground state of H u (φ)
The local Hilbert space which H u (φ) acts on can further be decomposed into a "classical" and "quantum" part: H u = ðH c Þ ΛðLÞ ðH q Þ ΛðLÞ . In particular, H u (φ) can be thought of as acting classically on states in H c . Furthermore, H u (φ) has the useful property that all its eigenstates are product states across these two parts of the Hilbert space. In particular, the ground state can be written as |Ti c |ψ 0 q where |Ti c 2 ðH c Þ ΛðLÞ and |ψ q 2 ðH q Þ ΛðLÞ . H u (φ) is designed so that |Ti c is the ground state of a classical Hamiltonian based on so-called Robinson tiles. That is, the local basis states in this part of the Hilbert space correspond to particular types of square tiles with markings on them, and the Hamiltonian enforces certain configurations of these tiles to be energetically penalised. Thus |Ti c corresponds to a non-penalised pattern of Robinson tiles. This pattern has a self-similar structure of nested squares of increasing size, with side length 4 n + 1,n 2 N 18 (see Fig. 1 for a diagram). |ψ 0 q is coupled to |Ti c such that 1D history states (of the type described in Eq. (2)) appear along the top edge of every square in the pattern. Thus for every n 2 N, 1D history states of length 4 n + 1, n 2 N, appear periodically across the lattice. Everywhere else in the lattice is in a trivial "filler" state which has zero energy.
The history states are designed to encode a QTM M which takes input φ 2 Q in binary (where φ is the input parameter to the Hamiltonian), and then either halts or does not halt within the allotted time. By introducing an additional local penalty term to the Hamiltonian, the individual history states encoding a halting computation receive an energy penalty, and so the ground state of the whole lattice in the halting case picks up a positive energy contribution scaling as Ω(L 2 ). Conversely, in the non-halting case, the ground state of H u has energy going as −Ω(L). The overall ground state of the overall Hamiltonian H(φ) is then either the zero energy ground state of H trivial in the halting case, or the ground state of H u (φ) in the non-halting case. In the halting case the ground-state energy scales as λ 0 (H u (φ)) = Ω(L 2 ), hence H u (φ) has a higher ground-state energy than H trivial and so the zero energy ground state of H trivial is the overall ground state. Otherwise, λ 0 (H u (φ)) = −Ω(L) and we see that the overall ground state is that of H u (φ). In the halting case, refs. 14, 15 show that H(φ) is gapped, and in the non-halting case H(φ) is gapless.
The key point for our purposes is that the overall behaviour of H(φ) is determined by the ground-state energy of H u (φ). Since establishing whether a given universal Turing Machine halts is an undecidable problem 19 , determining which ground state occurs, and thus whether the Hamiltonian is gapped or gapless, is undecidable.
The block-spin renormalisation group (BRG)
Our RG map is based on a blocking technique widely used in the literature to study spin systems, often called the Block Spin Renormalisation Group (BRG) [20][21][22][23] . Note that this is also sometimes called the "quantum renormalisation group", but we will not use this name to avoid potential confusion. Modifications and variations of this RG scheme have also been extensively studied 24,25 .
The BRG is among the simplest RG schemes. The procedure works by grouping nearby spins together in a block, and then determining the associated energy levels and eigenstates of this block by diagonalisation. Having done this, high-energy (or otherwise unwanted) states are removed, resulting in a new Hamiltonian.
As an explicit example, we repeat the review of the RG process in ref. 21 for the 1D isotropic XY-model defined below as: We first group terms into blocks of 2: now contains all terms acting within the two site blocks. Diagonalising h i gives 4 states with energies fE ð1Þ 0 , E ð1Þ 1 , E ð1Þ 2 , E ð1Þ 3 g in ascending order. We truncate the states associated with the two higher energies, and keep the lowest two which we label as |0i ð1Þ ,|1i ð1Þ with energies E ð1Þ 0 , E ð1Þ 1 , respectively. We now replace this operator with a new operator, acting on a single block-spin site with the form The between-block interaction now needs to be determined: to replicate this, we use X = ξ (1) X (1) , where ξ (1) can be determined by looking at the matrix elements under the new renormalised block basis, i.e., 0 h | ð1Þ X |1i ð1Þ = ξ ð1Þ 0 h | ð1Þ X ð1Þ |1i ð1Þ . The two new two-local terms acting on the block spins are then: where J (1) = ξ (1)2 J. By introducing an extra term depending on the identity, we find a renormalised Hamiltonian: where C ð1Þ = ðE ð1Þ 0 + E ð1Þ 1 Þ=2. After n iterations of the RG mapping, we have a Hamiltonian where the constants are defined by the same procedure: J (n) = ξ (n)2 J (n−1) , B ðnÞ = B ðnÀ1Þ + ðE ðnÞ 0 À E ðnÞ 1 Þ=2, C ðnÞ = C ðnÀ1Þ + ðE ðnÞ 0 + E ðnÞ 1 Þ=2.
Our RG scheme
We now want to construct an RG scheme for H(φ) which preserves the relevant physical properties. Most notably, whether the Hamiltonian is gapped or gapless. We will show that in order to preserve the lowenergy properties of this Hamiltonian, we can reduce the analysis to finding RG schemes for each of the Hamiltonians in Eq. (1). Welldefined RG schemes exist for H d , H trivial , H guard which preserve their gaps and ground-state energies, hence the remaining task is finding an RG scheme for H u (φ). In particular, we develop an RG scheme which allowing us to break the problem of finding an overall RG scheme into finding one for each individual Hamiltonian.
To retain the properties of the overall Hamiltonian, the RG scheme must maintain the ground-state energy density of H u (φ) in both the halting and non-halting cases. We will do this by: (a) preserving the overall self-similar structure of the Robinson tiling and thus the pattern of the history states appearing in the ground state, (b) ensuring that the energy contribution of each individual history states is preserved. Since the history states give the only non-trivial energy contribution to the ground-state energy of H u (φ), then this is sufficient for our purposes.
The RG scheme for H u (φ)
In order to develop an RG scheme for H u (φ), we remark that its eigenstates are product states across ðH c Þ ΛðLÞ ðH q Þ ΛðLÞ . This allows us to split our RG scheme up further into one part that renormalises the classical space ðH c Þ ΛðLÞ and another for the quantum space ðH q Þ ΛðLÞ (a rigorous justification of this is given in Section E of the Supplementary Information). As with the BRG, both schemes consist of a blocking and truncation procedure. We give a flow diagram of the proof in Fig. 2.
The blocking procedure. The RG scheme proceeds by splitting the lattice into disjoint 2 × 2 square blocks. The basis states of the individual lattice sites within a 2 × 2 block are then combined into a single site on a new lattice, such that if the initial local Hilbert space dimension was d, then the new lattice sites have local Hilbert space dimension d 4 . Having obtained a new reduced lattice of size L/2 × L/2, we now wish to reduce the size of the local basis to only include basis states which contribute to low-energy states.
Truncating the classical space. Since the basis states in the classical part of the Hilbert space are represented by Robinson tiles, the new renormalised-basis states correspond to all possible combinations of these tiles on a 2 × 2 block: we call these "supertiles". However, a subset of these bigger tiles can be shown to either have high energy with respect to the previous Hamiltonian or will be removed at later stages of the RG process. Thus removing these supertiles will only remove local basis states which do not contribute to the low-energy states of the renormalised Hamiltonian. It turns out that each new state in the renormalised basis can be identified in a one-to-one manner with a state in the unrenormalised basis, such that the Hamiltonian is of the same form. Thus, the ground state of the new renormalised Hamiltonian on the classical part of the Hilbert space will not only be selfsimilar but will generate the same Robinson pattern as the unrenormalised ground state. A detailed analysis of the RG scheme for this part of the Hamiltonian is given in Section C of the Supplementary Information. Truncating the quantum space. Finally, we consider the effect of the blocking procedure on the history state, combining pairs of cells on the Turing Machine tape. After k-iterations of the blocking, a single new basis state will contain 2 k Turing Machine tape cells on a single lattice site, where at each iteration, it is possible to further remove some of the states which we know must have high energy. For example, there exist sets of states which are a priori known to be energetically penalised, e.g., states corresponding to Turing Machine configurations with two heads next to each other. Such states are known to not contribute to the ground state, thus they can be removed from the local Hilbert space in the truncation procedure. We are also able to discard some states which are guaranteed to evolve into one of these disallowed states. Furthermore, after iterating the RG procedure multiple times, there will be entire history states localised to a single renormalised-basis state. We can then integrate these out. More details are given in Section D of the Supplementary Information.
Energy contribution of the integrated out states
We have glossed over some details in the previous section. In particular, what happens to the energy contributions from the history states which are integrated out?
When the RG mapping has been applied k times, such that 2 k ≥ 4 n + 1, then we know that a full history state which would appear in the ground state of H(φ) is now formed from a superposition of basis states on a single site. By diagonalising the on-site Hamiltonian, we see that it now forms the lowest energy state of the local Hilbert space. As discussed earlier, in the halting case, this history state will pick up some positive energy, which is known explicitly as per ref. 26, and in the non-halting case it has exactly zero energy. In order to preserve the ground-state energy of the overall Hamiltonian, when integrating out the local basis states, we take the energy contribution of the history state and add it to a local projector term. This has the effect of introducing a local energy shift which preserves the overall energy. This is equivalent to introducing the term C ðnÞ 1 i in the BRG procedure as per Eq. (9). See Supplementary Information E.1 and E.2 for more details.
This introduces a 1-local term in H u (φ) which has the form τ 2 ðkÞ1 i acting on each lattice site i where, if the encoded TM is non-halting on input φ, then τ 2 (k) = − 2 −k ∀ k. If the TM halts on input φ then: Here, k h (φ) is defined as the following: let L h 2 f4 n + 1g n2N be the smallest-length history state for which the TM M halts when running on input φ, then k h is the smallest integer satisfying 2 k h ðφÞ >L h ðφÞ. The behaviour of τ 2 (k) is fully discussed in Supplementary Information E.7.
We see that after k-iterations, the Hamiltonian R (k) (H u (φ)) has ground-state energy which scales as A crucial feature is that every step of the RG process is explicitly computable: it is simply a case of blocking together four sites, determining the renormalised-basis states, and removing subsets of local basis states which do not contribute to the low-energy subspace. Even determining whether a given history state contains a halting computation or not can be done by examining the legitimate evolution encoded within the history state, finding whether that halts or not, and then integrating out its energy contribution appropriately. The time taken is a function of the number of local basis states on each site which is upper bounded by O(d 4k ). Thus each step of the RG procedure is computable, as claimed in point 1 of Theorem 1.
The RG trajectory
As per Eq. (11), we see that the Hamiltonian has a coefficient which is exactly −2 k in the case the encoded TM does not halt. However, in the halting case, τ 2 (k) begins to change behaviour as soon as the number of spins that have been blocked together is larger than the length of the history state needed to encode a halting computation. Thus, the k for which τ 2 (k) changes behaviour depends on the length when the Turing Machine first halts, and hence on the time step at which the Turing Machine halts. However, as we pointed out before, this quantity is undecidable in general, and thus determining whether τ 2 (k) eventually becomes positive is itself uncomputable.
Furthermore, there are two fixed points associated with the RG flow. One occurs for τ(k) = −2 k which corresponds to a gapless Hamiltonian, and the other for τðkÞ ! À2 k + Ωð4 kÀk h ðφÞ Þ, which corresponds to a gapped Hamiltonian. Since distinguishing between these two cases is undecidable, our argument immediately yields that: Corollary 2 Determining whether the Hamiltonian flows to the gapped or gapless fixed point under this RG scheme is undecidable.
Indeed, the Hamiltonian from ref. 14 has two very different fixed points: one which at low energies roughly corresponds to a 2D Ising model and another which corresponds to a critical, gapless XY-model (further discussion in Section G of the Supplementary Information).
Thus, we have constructed an RG scheme for the Hamiltonian H(φ) which is computable at every step, but the overall trajectory and end-point is uncomputable. Fig. 3 | Chaotic vs Uncomputable RG flow behaviour. In both diagrams, k represents the number of RG iterations and η represents some parameter characterising the Hamiltonian; the blue and red dots are fixed points corresponding to different phases. We see that in the chaotic case (a), the Hamiltonians diverge exponentially in k, according to some Lyapunov exponent. In the undecidable case (b), the Hamiltonians remain arbitrarily close for some uncomputably large number of iterations, whereupon they suddenly diverge to different fixed points.
Discussion
In this work, we have shown that a qualitatively new type of RG flow occurs in many-body Hamiltonians with undecidable spectral gap. Specifically, we give an explicit construction and analysis a block-spin RG procedure for the Hamiltonian of ref. 14 which we are able to study analytically and prove that it has the following features: (i) the RG map is computable at each renormalisation step; (ii) the RG map preserves whether the Hamiltonian is gapped or gapless; (iii) the Hamiltonian is guaranteed to converge to one of two fixed points under the RG flow; (iv) the behaviour of the Hamiltonian under the RG mapping, the trajectory of the RG flow and which fixed point it converges to are all uncomputable.
We show that under this RG construction, the Hamiltonian flows toward one of two RG fixed points: either a gapped Ising-like Hamiltonian or a gapless critical XY-like Hamiltonian. Furthermore, the parameters characterising the Hamiltonian have a trajectory depending on the halting time of the Turing Machine encoded within the Hamiltonian. Since the Halting Problem is undecidable and the halting time uncomputable, the trajectory of the Hamiltonian under the RG flow-and therefore which fixed point it ultimately converges to-are uncomputable, even if the parameters of the initial Hamiltonian are known exactly. This is a qualitatively new and more extreme form of unpredictability that goes beyond even chaotic RG flows which have been previously studied. The unpredictability of chaotic systems arises from the fact that even a tiny difference in the initial system parameters-which in practice may not known exactly-can eventually lead to exponentially diverging trajectories (see Fig. 3). However, the more precisely the initial parameters are known, the longer it is possible to accurately predict the trajectory of a chaotic process, and if the system parameters were known exactly, then in principle it becomes possible to determine the long-time behaviour of the RG flow. The RG flow behaviour exhibited in this work is more intractable still. Even if we know the exact initial values of all system parameters, its RG trajectory and the fixed point it ultimately ends up at is provably impossible to predict. Moreover, no matter how close two sets of initial parameters are, it is impossible to predict how long their trajectories will remain close together before abruptly diverging to different fixed points that correspond to separate phases (see Fig. 3). Thus, the structure of the RG flowe.g., the basins of attraction of the fixed points-is so complex that it cannot be computed or approximated, even in principle. We note that a similar form of unpredictability has previously been seen in classical single-particle dynamics, in seminal work by , while our result shows for the first time that this extreme form of unpredictability can occur in RG flows of many-body systems.
Despite the somewhat artificial Hamiltonian considered here, we expect the behaviour of the RG scheme here to be generic, in the following sense. For any well-defined, computable RG scheme for Hamiltonians with undecidable macroscopic properties, we expect that at least one coefficient of a relevant operator should have an uncomputable trajectory. The reasoning is straightforward: the welldefinedness and computability of the RG flow implies that, at each step of the RG process, we would be able to find each parameter characterising the Hamiltonian after each iteration. However, when the macroscopic properties of the Hamiltonian are undecidable, we expect determining which fixed point it flows towards to be an undecidable problem. For there to be no contradiction between these two statements, the parameters of the Hamiltonian must flow in an uncomputable manner (otherwise, the entire flow is computable and we reach a contradiction). As such, the uncomputable behaviour observed in the RG scheme here must occur for any RG scheme one can construct for Hamiltonians whose macroscopic properties are uncomputable from its microscopic description (note that ref. 16 has shown that such Hamiltonians can constitute a non-zero-measure subset of a phase diagram, so do not require arbitrarily precisely tuned parameters).
Often RG flows are characterised by a set of continuous differential equations. By the nature of having a discretised lattice and a real space RG procedure, it is not natural to consider continuous variation of the parameters in terms of differential equations 30 . Rather, the RG relations in this setting are expressed in terms of finite difference equations, e.g., for a Hamiltonian characterised by a set of parameters fα i g i , such that after the k th RG iteration the coefficients are denoted fα i ðkÞg i , then: In the case of the uncomputable RG flows exhibited here, f i ðk, fα j g j Þ will be some function whose behaviour is uncomputable as we iterate k and the coefficients fα j g j . In the case of τ 2 (k) for the blockspin RG scheme, we have constructed in this work, f depends on whether a given TM halts after a time depending on k. For RG flows characterised by continuous differential equations, we expect there should exist RG schemes with uncomputable behaviour that satisfy analogous differential equations: ∂α i =∂k = f i ðk, fα j ðk À 1Þg j Þ, where f is again an uncomputable function. In the continuous case, one would expect similar behaviour to that observed here: a particular parameter travels along a well-defined trajectory, but at some uncomputable point abruptly changes its behaviour and diverges from its previous trajectory.
Naturally, there are limitations on the generality of the conclusions that can be drawn from this work in the sense that the Hamiltonian discussed in this work is highly artificial and the RG scheme reflects this. Indeed, this Hamiltonian has an enormous local Hilbert space dimension and its matrix elements are highly artificially tuned. Both of these factors are unlikely to be present in naturally occurring Hamiltonians. A step towards overcoming this limitation was taken in ref. 16, where it was shown that Hamiltonians with uncomputable properties can occupy a non-zero-measure set of the phase diagram, thus do not depend on arbitrarily precise parameter tuning. As the Hamiltonians in that work are a development of the Cubitt-Pérez-García-Wolf Hamiltonian we have studied here, we expect our results will can readily be extended to this case (and indeed to the Hamiltonian in ref. 31 which also displays undecidable properties). However, the Hamiltonians remain highly artificial. Thus an obvious route for further work is to look for more natural Hamiltonians displaying undecidable behaviour and consider RG schemes to renormalise them.
Furthermore, although the RG scheme is essentially a simple BRG scheme, the details of our construction and analysis rely on knowledge of the structure of the ground states. Due to the behaviour of this undecidable model, any BRG scheme will have to exhibit similar behaviour to the one we have analysed rigorously here. But it would be of interest to find a simpler RG scheme for this Hamiltonian (or other Hamiltonians with undecidable properties) which is able to truncate the local Hilbert space to a greater degree, without using explicit a priori knowledge of the ground state, whose behaviour can still be analysed rigorously.
It is also worth noting that the Hamiltonian and RG schemes constructed here could also be used to prove rigorous results for chaotic (but still computable) RG flows. Indeed, if we modify the Hamiltonian H(φ) so that instead of running a universal Turing Machine on input φ, it carries out a computation of a (classical) chaotic process (e.g., repeated application of the logistical map), then two inputs which are initially very close may diverge to completely different outputs after some time. By penalising this output qubit appropriately, the Hamiltonian will still flow to either the gapped or gapless fixed point depending on the outcome of the chaotic process under our RG map, but the RG flow will exhibit chaotic rather than uncomputable dynamics.
Data availability
Data sharing not applicable to this article as no datasets were generated or analysed during this study.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/ licenses/by/4.0/. | 7,896 | 2021-02-09T00:00:00.000 | [
"Physics"
] |
Efficient parallel and out of core algorithms for constructing large bi-directed de Bruijn graphs
Background Assembling genomic sequences from a set of overlapping reads is one of the most fundamental problems in computational biology. Algorithms addressing the assembly problem fall into two broad categories - based on the data structures which they employ. The first class uses an overlap/string graph and the second type uses a de Bruijn graph. However with the recent advances in short read sequencing technology, de Bruijn graph based algorithms seem to play a vital role in practice. Efficient algorithms for building these massive de Bruijn graphs are very essential in large sequencing projects based on short reads. In an earlier work, an O(n/p) time parallel algorithm has been given for this problem. Here n is the size of the input and p is the number of processors. This algorithm enumerates all possible bi-directed edges which can overlap with a node and ends up generating Θ(nΣ) messages (Σ being the size of the alphabet). Results In this paper we present a Θ(n/p) time parallel algorithm with a communication complexity that is equal to that of parallel sorting and is not sensitive to Σ. The generality of our algorithm makes it very easy to extend it even to the out-of-core model and in this case it has an optimal I/O complexity of Θ(nlog(n/B)Blog(M/B)) (M being the main memory size and B being the size of the disk block). We demonstrate the scalability of our parallel algorithm on a SGI/Altix computer. A comparison of our algorithm with the previous approaches reveals that our algorithm is faster - both asymptotically and practically. We demonstrate the scalability of our sequential out-of-core algorithm by comparing it with the algorithm used by VELVET to build the bi-directed de Bruijn graph. Our experiments reveal that our algorithm can build the graph with a constant amount of memory, which clearly outperforms VELVET. We also provide efficient algorithms for the bi-directed chain compaction problem. Conclusions The bi-directed de Bruijn graph is a fundamental data structure for any sequence assembly program based on Eulerian approach. Our algorithms for constructing Bi-directed de Bruijn graphs are efficient in parallel and out of core settings. These algorithms can be used in building large scale bi-directed de Bruijn graphs. Furthermore, our algorithms do not employ any all-to-all communications in a parallel setting and perform better than the prior algorithms. Finally our out-of-core algorithm is extremely memory efficient and can replace the existing graph construction algorithm in VELVET.
I. INTRODUCTION
The genomic sequence of an organism is a string from the alphabet Σ = {A, T, G, C}.This string is also referred as the Deoxyribonucleic acid (DNA) sequence.DNA sequences exist as complementary pairs (A − T , G − C) due to the double strandedness of the underlying DNA structure.Several characteristics of an organism are encoded in its DNA sequence, thereby reducing the biological analysis of the organism to the analysis of its DNA sequence.Identifying the unknown DNA sequence of an organism is known as de novo sequencing and is of fundamental biological importance.On the other hand the existing sequencing technology is not mature enough to identify/read the entire sequence of the genomeespecially for complex organisms like the mammals.However small fragments of the genome can be read with acceptable accuracy.The shotgun method employed in many sequencing projects breaks the genome randomly at several places and generates several small fragments (reads) of the genome.The problem of reassembling all the fragmented reads into a small sequence close to the original sequence is known as the Sequence Assembly (SA) problem.
Although the SA problem seems similar to the Shortest Common Super string (SCS) problem, there are in fact some fundamental differences.Firstly, the genome sequence might contain several repeating regions.However, in any optimal solution to the SCS problem we will not be able to find repeating regions -because we want to minimize the length of the solution string.In addition to the repeats, there are other issues such as errors in reads and double strandedness of the reads which make the reduction to SCS problem very complex.
The literature on algorithms to address the SA problem can be classified into two broad categories.The first class of algorithms model a read as a vertex in a directed graph -known as the overlap graph [2].The second class of algorithms model every substring of length k (i.e., a kmer) in a read as a vertex in a (subgraph of) a de Bruijn graph [3].
In an overlap graph, for every pair of overlapping reads, directed edges are introduced consistent with the orientation of the overlap.Since the transitive edges in the overlap graph are redundant for the assembly process they are removed and the resultant graph is called the string graph [2].The edges of the string graph are classified into optional, required and exact.The SA problem is reduced to the identification of a shortest walk in the string graph which includes all the required and exact constraints on the edges.Identifying such a walk -minimum S-walk -on the string graph is known to be NP-hard [4].
When a de Bruijn graph is employed, we model every substring of length k (i.e., a k-mer) in a read as a vertex [3].A directed edge is introduced between two k-mers if there exists some read in which these two k-mers overlap by exactly k − 1 symbols.Thus every read in the input is mapped to some path in the de Bruijn graph.The SA problem is reduced to a Chinese Postman Problem (CPP) on the de Bruijn graph, subject to the constraint that the resultant CPP tour include all the paths corresponding to the reads.This problem is also known to be NP-hard.Thus solving the SA problem exactly on both these graph models is intractable.
Overlap graph based algorithms were found to perform better (see [5] [6] [7] [8]) with Sanger based read methods.Sanger methods produce reads typically around 1000 base pairs long.However these can produce significant read errors.To overcome the issues with Sanger reads new read technologies such as the pyrosequencing (454sequencing) have emerged.These read technologies can produce reliable and accurate genome fragments which are very short (up to 100 base-pairs long).On the other hand short read technologies can increase the number of reads in the SA problem by a large magnitude.Overlap based graph algorithms do not scale well in practice since they represent every read as a vertex.De Bruijn graph based algorithms seem to handle short reads very efficiently (see [9]) in practice compared to the overlap graph based algorithms.However the existing sequential algorithms [9] to construct these graphs are sub-optimal and require significant amounts of memory.This limits the applicability of these methods to large scale SA problems.In this paper we address this issue and present algorithms to construct large de Bruijn graphs very efficiently.Our algorithm is optimal in the sequential, parallel and out-of-core models.A recent work by Jackson and Aluru [1] yielded parallel algorithms to build these de Bruijn graphs efficiently.They present a parallel algorithm that runs in O(n/p) time using p processors (assuming that n is a constantdegree ploynomial in p).The message complexity of their algorithm is Θ(nΣ).By message complexity we mean the total number of messages (i.e., k-mers) communicated by all the processors in the entire algorithm.One of the major contributions of our work is to show that we can accomplish this in Θ(n/p) time with a message complexity of Θ(n).An experimental comparison of these two algorithms on an SGI Altix machine shows that our algorithm is considerably faster.In addition, our algorithm works optimally in an out-of-core setting.In particular, our algorithm requires only Θ( n log(n/B) B log(M/B) ) I/O operations.
The organization of the paper is as follows.In Section II we introduce some preliminaries and define a bidirected de Bruijn graph formally.Section III discusses our main algorithm in a sequential setting.Section V and Section VI show how our main idea can easily be extended to parallel and out-of-core models optimally.In Section V-A we provide some remarks on the parallel algorithm of Jackson and Aluru [1].Section VII gives algorithms to perform the simplification operation on the bi-directed de Bruijn graph.Section VIII discusses how our simplified bi-directed de Bruijn graph algorithm can replace the graph construction algorithm in a popular sequence assembly program VELVET [9].Finally we present experimental results in Section IX.
II. PRELIMINARIES
Let s ∈ Σ n be a string of length n.Any substring s j (i.e., s[j]s The set of all k−mers of a given string s is called the k−spectrum of s and is denoted by S(s, k).Given a k−mer s j , sj denotes the reverse complement of s j (e.g., if s j = AAGT A then sj = T ACT T ).Let ≤ be the partial ordering among the strings of equal length, then s i ≤ s j indicates that the string s i is lexicographically smaller than s j .Given any k−mer s i , let ŝi be the lexicographically smaller string between s i and si .We call ŝi the canonical k−mer of s i .In other words, if s i ≤ si then ŝi = s i otherwise ŝi = si .A k−molecule of a given k−mer s i is a tuple ( ŝi , si ) consisting of the canonical k−mer of s i and the reverse complement of the canonical k−mer.In the rest of this paper we use the terms positive strand and canonical k−mer interchangeably.Likewise the noncanonical k−mer is referred to as the negative strand.
A bi-directed graph is a generalized version of a standard directed graph.In a directed graph every edge has only one arrow head (-⊲ or ⊳-).On the other hand in a bi-directed graph every edge has two arrow heads attached to it (⊳-⊲, ⊳-⊳,⊲-⊳ or ⊲-⊲).Let V be the set of vertices and ] refer to the orientations of the arrow heads on the vertices v i and v j , respectively.A walk W (v i , v j ) between two nodes v i , v j ∈ V of a bi-directed graph G(V, E) is a sequence v i , e i1 , v i1 , e i2 , v i2 , . . ., v im , e im+1 , v j , such that for every intermediate vertex v il , 1 ≤ l ≤ m the orientation of the arrow head on the incoming edge adjacent on v il is opposite to the orientation of the arrow head on the out going edge.To make this clearer, let e il , v il , e il+1 be a sub-sequence in the walk then for the walk to be valid it should be the case that o 2 = o ′ 1 .Figure 1(a) illustrates an example of a bi-directed graph.Figure 1(b) shows a simple bi-directed walk between the nodes A and E. Bi-directed walk between two nodes may not be simple.Figure 1(c) shows a bi-directed walk between A and E which is not simple -because B repeats twice.
A de Bruijn graph D k (s) of order k on a given string s is defined as follows.The vertex set V of D k (s) is defined as the k−spectrum of s (i.e.V = S(s, k)).We use the notation suf (v i , l) (pre(v i , l), respectively) to denote the suffix (prefix, respectively) of length l in the string v i .Let the symbol • denote the concatenation operation between two strings.The set of directed edges E of D k (s) is defined as follows: We can also define de Bruijn graphs for sets of strings as follows.If S = {s 1 , s 2 . . .s n } is any set of strings, a de Bruijn graph B k (S) of order k on S has To model the double strandedness of the DNA molecules we should also consider the reverse complements ( S = { s1 , s2 . . .sn }) while we build the de Bruijn graph.
To address this a bi-directed de Bruijn graph BD k (S ∪ S) has been suggested in [4].The set of vertices V of BD k (S ∪ S) consists of all possible k−molecules from S ∪ S. The set of bi-directed edges for BD k (S ∪ S) is defined as follows.Let x, y be two k−mers which are next to each other in some input string z ∈ S ∪ S. Then an edge is introduced between the k−molecules v i and v j corresponding to x and y, respectively.Please note that two consecutive k−mers in some input string always overlap by k − 1 symbols.The converse need not be true.The orientations of the arrow heads on the edges are chosen as follows.If both x and y are the positive strands in v i and v j , respectively, then an edge (v i , v j , ⊲, ⊲) is introduced.If x is the positive strand in v i and y is the negative strand in v j an edge (v i , v j , ⊲, ⊳) is introduced.Finally, if x is the negative strand in v i and y is the positive strand in v j an edge (v i , v j , ⊳, ⊲) is introduced.
Figure 2 illustrates a simple example of the bi-directed de Bruijn graph of order k = 3 from a set of reads AT GG, CCAT, GGAC, GT T C, T GGA and T GGT observed from a DNA sequence AT GGACCAT and its reverse complement AT GGT CCAT .Consider two 3−molecules v 1 = (GGA, T CC) and v 2 = (GAC, GT C).Because the positive strand x = GGA in v 1 overlaps the positive strand y = GAC in v 2 by string GA, an edge (v 1 , v 2 , ⊲, ⊲) is introduced.Note that the negative strand GT C in v 2 also overlaps the negative strand T CC in v 2 by string T C, so the two overlapping strings GA and T C are drawn above the edge (v 1 , v 2 , ⊲, ⊲) in Figure 2. A bi-directed walk on the example bi-directed de Bruijn graph as illustrated by the dash line is corresponding to the original DNA sequence with the first letter omitted T GGACCAT .We would like to remark that the parameter k is always chosen to be odd to ensure that the forward and reverse complements of a k-mer are not the same.
Bi-directed de Bruijn graph example III.OUR ALGORITHM TO CONSTRUCT BI-DIRECTED DE BRUIJN GRAPHS In this section we describe our algorithm BiConstruct to construct a bi-directed de Bruijn graph on a given set of reads.The following are the main steps in our algorithm to build the bi-directed de Bruijn graph.Let R f = {r 1 , r 2 . . .r n }, r i ∈ Σ r be the input set of reads.Rf = { r1 , r2 . . .rn } is a set of reverse complements.Let R * = R f ∪ Rf and R k+1 = ∪ r∈R * S(r, k + 1).R k+1 is the set of all (k + 1)-mers from the input reads and their reverse complements.( vi Reduce multiplicity: Sort all the bidirected edges in [STEP-1], using radix sort.Since the parameter k is always odd this guarantees that a pair of canonical k-mers have exactly one orientation.Remove the duplicates and record the multiplicities of each canonical edge.Gather all the unique canonical edges into an edge list E.
IV. ANALYSIS OF THE ALGORITHM BiConstruct
Theorem 1: Algorithm BiConstruct builds a bidirected de Bruijn graph of the order k in Θ(n) time.
Here n is number of characters/symbols in the input.
Proof: Without loss of generality assume that all the reads are of the same size r.Let N be the number of reads in the input.This generates a total of (r − k)N (k + 1)-mers in [STEP-1].The radix sort needs to be applied at most 2k log(|Σ|) passes, resulting in 2k log(|Σ|)(r − k)N operations.Since n = N r is the total number of characters/symbols in the input, the radix sort takes Θ(kn log(|Σ|)) operations assuming that in each pass of sorting only a constant number of symbols are used.If k log(|Σ|) = O(log N ), the sorting takes only O(n) time.In practice since N is very large in relation to k and |Σ|, the above condition readily holds.Since the time for this step dominates that of all the other steps, the runtime of the algorithm BiConstruct is Θ(n).
V. PARALLEL ALGORITHM FOR BUILDING
BI-DIRECTED DE BRUIJN GRAPH In this section we illustrate a parallel implementation of our algorithm.Let p be the number of processors available.We first distribute N/p reads for each processor.All the processors can execute [STEP-1] in parallel.In [STEP-2] we need to perform parallel sorting on the list E. Parallel radix/bucket sort -which does not use any all-to-all communications-can be employed to accomplish this.For example, the integer sorting algorithm of Kruskal, Rudolph and Snir takes O n p log n log(n/p) time.This will be O(n/p) if n is a constant degree polynomial in p.In other words, for coarse-grain parallelism the run time is asymptotically optimal.In practice coarse-grain parallelism is what we have.Here n = N (r − k + 1).We call this algorithm Par-BiConstruct.
Theorem 2: Algorithm Par-BiConstruct builds a bidirected de Bruijn graph in time O(n/p).The message complexity is O(n).
A. Some remarks on Jackson and Aluru's algorithm
The algorithm of Jackson and Aluru [1] first identifies the vertices of the bi-directed graph -which they call representative nodes.Then for every representative node |Σ| many-to-many messages were generated.These messages correspond to potential bi-directed edges which can be adjacent on that representative node.A bi-directed edge is successfully created if both the representatives of the generated message exist in some processor, otherwise the edge is dropped.This results in generating a total of Θ(n|Σ|) many-to-many messages.The authors in the same paper demonstrate that communicating many-tomany messages is a major bottleneck and does not scale well.On the other had we remark that the algorithm BiConstruct does not involve any many-to-many communications and does not have any scaling bottlenecks.
On the other hand the algorithm presented in their paper [1] can potentially generate spurious bi-directed edges.According to the definition [4] of the bi-directed de Bruijn graph in the context of SA problem, a bi-directed edge between two k-mers/vertices exists iff there exists some read in which these two kmers are adjacent.We illustrate this by a simple example.Consider a read r i = AAT GCAT C. If we wish to build a bi-directed graph of order 3, then {AAT, AT G, T GC, GCA, CAT, AT C} form a subset of the vertices of the bi-directed graph.In this example we see that k-mers AAT and AT C overlap by exactly 2 symbols.However there cannot be any bi-directed edge between them according to the definition -because they are not adjacent.On the other hand the algorithm presented in [1] generates the following edges with respect to k-mer AAT : {(AAT, AT A), (AAT, AT G), (AAT, AT T ), (AAT, AT C)}.The edges (AAT, AT A) and (AAT, AT C) are purged since the k-mers AT A and AT C are missing.However bi-directed edges with corresponding orientations are established between AT G and AT C. Unfortunately (AAT, AT C) is a spurious edge and can potentially generate wrong Fig. 3. Problems with pointer jumping on bi-directed chains assembly solutions.In contrast to their algorithm [1] our algorithm does not use all-to-all communicationsalthough we use point-point communications.
VI. OUT OF CORE ALGORITHMS FOR BUILDING BI-DIRECTED DE BRUIJN GRAPHS
Theorem 3: There exists an out-of core algorithm to construct a bi-directed de Bruijn graph using an optimal number of I/O's.
Proof: Sketch: Replace the radix sorting with an external R−way merge which takes only Θ( n log(n/B) B log(M/B) ).Where M is the main memory size, n is the sum of the lengths of all reads, and B is the block size of the disk.
VII. SIMPLIFIED BI-DIRECTED DE BRUIJN GRAPH
The bi-directed de Bruijn graph constructed in the previous section may contain several linear chains.These chains have to be compacted to save space as well as time.The graph that results after this compaction step is refered to as the simplified bi-directed graph.A linear chain of bi-directed edges between nodes u and v can be compacted only if we can find a valid bi-directed walk connecting u and v.All the k-mers/vertices in a compactable chain can be merged into a single node, and relabelled with the corresponding forward and reverse complementary strings.In Figure 4 we can see that the nodes X 1 and X 3 can be connected with a valid bidirected walk and hence these nodes are merged into a single node.In practice the compaction of chains seems to play a very crucial role.It has been reported that merging the linear chains can reduce the number of nodes in the graph by up-to 30% [9].
Although bi-directed chain compaction problem seems like a list ranking problem there are some fundamental differences.Firstly, a bi-directed edge can be traversed in both the directions.As a result, applying pointer jumping directly on a bi-directed graph can lead to cycles and cannot compact the bi-directed chains correctly.Figure 3 illustrates the first phase of pointer jumping.As we The same is true for the compaction of the bi-directed walk between X 1 and X 3 .The redundant edges after compaction are marked in red.Since bi-directed chain compaction has a lot of practical importance efficient and correct algorithms are essential.We now provide algorithms for the bi-directed chain compaction problem.Our key idea here is to transform a bi-directed graph into a directed graph and then apply list ranking.Given a list of candidate canonical bi-directed edges, we apply a ListRankingTransform (see Figure 5) which introduces two new nodes v + , v − for every node v in the original graph.Directed edges corresponding to the orientation are introduced.See Figure 5.
Lemma 1: Let BG(V, E) be a bi-directed graph; let BG t (V t , E t ) be the directed graph after applying Lis-tRankingTransform.Two nodes u, v ∈ V are connected by a bi-directed path iff u Proof: We first prove the forward direction by induction on the number of nodes in the bi-directed graph.Consider the basis of induction when |V | = 2, let v 0 , v 1 ∈ V .Clearly we are only interested when v 0 and v 1 are connected by a bi-directed edge.By the definition of ListRankingTransform the Lemma in this case is trivially true.Now consider a bi-directed graph with |V | = n + 1 nodes, if the path between v i , i < n and v j , j < n does not involve node v n the lemma still holds by induction on the sub bidirected graph BG(V − {v n }, E).Now assume that v i . . .v p , v n , v q . . .v j is the bi-directed path between v i and v j involving the node v n ; see Figure 6(a).Also Figure 6(a) shows how the transformed directed graph look like; observe the colors of bi-directed edges and corresponding directed edges.By induction hypothesis on the sub bi-directed paths v i . . .v p , v n and v n , v q . . .v j we have the following.v + i is connected to v + n or v − n by some directed path P 1 (See Figure 6(b); v + n is connected to v + j or v − j by some directed path P 2 .We examine three possible cases depending on how the directed edge from P 1 and P 2 is incident on v + n .In CASE-1 we have both P 1 and P 2 pointing into node v + n .This implies that the orientation of the bi-directed edges in the original graph is according to Figure 6(b).In this case we cannot have a bi-directed walk involving the node v n , which contradicts our original assumption.Similarly CASE-2(Figure 6(c)) would also lead to a similar contradiction.Only CASE-3 would let node v n involve in a bi-directed walk.In this case v + i will be connected to either v + j or v − j by concatenation of the paths P 1 , P 2 .We can make a similar argument to prove the reverse direction.
A. Algorithm for bi-directed chain compaction
We first identify a set of candidate bi-directed edges which can potentially form a chain.One possible criterion will be to include all the edges which are adjacent on bi-directed nodes with exactly one in and out degree.Each candidate bi-directed edge is transformed using ListRankingTranform and list ranking is applied on resultant set.As a consequence of the symmetry in ListRankingTransform we would see both forward and reverse complements of the compacted chains in the output.We can further canonicalize each chain and remove the duplicates by sorting.This results in unique bi-directed chains from the candidate bi-directed edges.Finally we report only the chains which are resultant of compaction of at least three bi-directed nodes.This removes all the inconsistent edges (see Figure 4) from further consideration.As a consequence of Lemma 1 all the bi-directed chains are correctly compacted.
B. Analysis of bi-directed compaction on parallel and out-of-core models
Let E l be the list of candidate edges for compaction.To do compaction in parallel, we can use a Segmented Parallel Prefix on p processors to accomplish this in time O(|2E l |/p + log(p)).On the other hand list ranking can also be done out-of-core as follows.Without loss of generality we can treat the input for the list ranking problem as a set S of ordered tuples of the form (x, y).Given S we create a copy and call it S ′ .We now perform an external sort of S, S ′ with respect to y (i.e., using the y value of tuple (x, y) as the key) and x respectively.The two sorted lists are scanned linearly to identify tuples (x, y) ∈ S, (x ′ , y ′ ) ∈ S ′ such that y = x ′ .These two
For each canonical bi-directed edge ( vi , vj , o 1 , o 2 ) ∈ E, collect the canonical k-mers vi , vj into list V. Sort the list V and remove duplicates, such that V contains only the unique canonical k-mers.• [STEP-4] Adjacency list representation: The list E is the collection of all the edges in the bi-directed graph and the list V is the collection of all the nodes in the bi-directed graph.It is now easy to use E and generate the adjacency lists representation for the bi-directed graph.This may require one extra radix sorting step.
• [STEP-3] Collect bi-directed vertices: can see, the green arcs indicate valid pointer jumps from the starting nodes.However since the orientation of the node Y 4 is reverse relative to the direction of pointer jumping a cycle results.In contrast, a valid bidirected chain compaction would merge all the nodes between Y 1 and Y 5 since there is a valid bi-directed walk between Y 1 and Y 5 .On the other hand, bi-directed chain compaction may result in inconsistent bi-directed edges and these edges should be recognised and removed.Consider a bi-directed chain in Figure4; this chain contains two possible bi-directed walks -Y 1 to Y 4 and X 1 to X 3 .The walk from Y 1 to Y 4 (Y 4 to Y 1 ) spells out a label AT AGGT (ACCT AT ) after compaction.Once we perform this compaction the edge between Y 4 and Z 1 in the original graph is no longer valid -because the k-mer on Z 1 cannot overlap with the label ACCT AT . | 6,490.4 | 2010-03-09T00:00:00.000 | [
"Computer Science"
] |
SURFACE CONSOLIDATION OF WALL PAINTINGS USING LIME NANO-SUSPENSIONS
Within the field of the conservation of historical and cultural monuments, lime nanosuspensions are still a relatively new and unexplored material. This study examines their effect on the consolidation of architectural surfaces and, consequently, on wall paintings. Previous experiments showed that considerably deteriorated materials may not be adequately strengthened using only lime nano-suspensions. Therefore, the effects of their admixtures and gradual applications with silicic acid esters were examined. For verification, a simulation of a deteriorated lime-based paint layer was created on panels of plaster. The results of the consolidation were subsequently studied using objective (peeling test, water absorption capacity test, measuring colour changes using a mobile spectrophotometer) and subjective methods (comparison of visual changes to a set standard and by testing cohesion using a cotton swab). The microstructure of a consolidated paint layer was studied with a scanning electron microscopy. Tests proved that with either individual lime-alcoholic suspensions or with successive applications and mixtures of silicic acid esters it is feasible to achieve good consolidation results, whilst the alkoxysilane content of the agent indisputably increases the consolidating effect of these materials.
Introduction
In recent years, the consolidation of paint layers of wall paintings or architectural surfaces is, as in other areas of the care of monuments, associated with the notion of material compatibility.On the basis of this notion, materials using the same type of binder initially used in the original artefact are being increasingly applied in the consolidation and conservation of historical and culturally significant buildings and artefacts.A result of the above-mentioned efforts is that a group of consolidating agents was developed at the beginning of the new millennium -so-called "lime nano-suspensions" [1].As they are materials whose only consolidation agent is the calcium hydroxide Ca(OH) 2 , they are designated for strengthening other porous calcareous materials such as lime plasters and their surface layers (washes, paintings).As opposed to a traditional lime-based consolidant, which is a saturated solution of the Ca(OH) 2 in water (limewater), these consolidants have a higher concentration of the active component and, moreover, the consolidant is dispersed in an alcoholic media, which, in many cases, as in the case of risks connected with moisture, can be more advantageous (for instance, in case of activation of water soluble salts).The aim of this study was to investigate the effects and risks of strengthening lime based paintings and surface layers of historical plasters using this type of consolidant as well as its modifications.
In spite of the fact that, as opposed to limewater, the concentration of the active matter in lime-alcoholic suspensions is higher, in the cases where the strength-ened substrate had disintegrated excessively, the consolidating effect was still inadequate.Therefore, the tests included modifications of these suspensions with silica based consolidants, where the resulting content of the solid component is approx.10 times higher.On the basis of previous experiments described in literary sources [2], the decision was made to test mixtures and combinations of silicic acid esters, which, in the conservation field, are the most commonly used consolidants for porous inorganic materials, including plaster, renderings and their surfaces.
Basic characteristics of lime alcoholic suspensions
The term lime-alcoholic nano-suspensions (or nanosols) is used for suspensions of calcium hydroxide in aliphatic alcohols with particles of calcium hydroxide, which size is about 50 to 300 nm.Theoretically, they are not nanomaterials in the true sense as their size would have to be 100 nm at the most [3].The individual suspensions, available commercially or as an experimental developmental material, differ from each other in their particle morphology, their concentration or type of alcohol.The most common dispersing agents are ethanol, 1-propanol and 2-propanol.
The suspensions are produced in various concentrations, commercially available in concentrations from 5 to 50 g of Ca(OH) 2 to 1 l alcohol.The viscosity and "whiteness" of the suspensions differ partly according to their concentration.Suspensions with higher concentrations have slightly higher viscosity and they are more opaque, "whiter".So far, two manufacturers offer products commercially -products from the German manufacturer IBZ Salzchemie, marketed under the CaLoSil® brand [4], and those from the Italian manufacturer CSGI [5], marketed under the Nanorestore Plus® brand (formerly Nanorestore®).Other similar materials are still under development or in their experimental phase.Compared to other dispersions (e.g., polymer dispersions) the stability of lime-alcoholic nanosuspensions is lower (only ca. 3 months), probably due to the small size of the particles which tend to agglomerate much faster.One manufacturer guarantees stability of its product in an unopened packaging for 12 months [6].
According to the needs, the suspension can be diluted by adding organic solvents.Usually, the same alcohol in which the Ca(OH) 2 particles have been dispersed.However, other options for diluting have been published.One of these can be to dilute with a mixture of ethanol and acetone to a ratio of 40 : 60, which, according to the authors of this article, should ensure better distribution of the consolidant in the material to be strengthened [7].
After application of the suspension to the material to be strengthened, the dispersion medium (alcohol) evaporates completely.This can result in the re-migration of Ca(OH) 2 particles toward the surface, which itself can cause white haze on the material being strengthened.Restricting this process can be carried out in several ways, by subsequent wetting with organic solvents or water (something dealt elsewhere within this study), by the aforementioned dilution of the suspension with a mixture of ethanol and acetone or by applying solutions of cellulose derivatives in a low concentration to the surface of the treated object after the consolidation [8].Another reason for the creation of the white haze can be an accumulation of the consolidant on the surface during its application.According to in situ tests, it seems that a limiting factor could be not only the pore size of the strengthened material (including thin surface layers), but possibly also a thin layer of deposits or secondary interventions adhering to the surface.
Experiment description
The main aim of the experiment was to study the effect of individual lime-alcoholic suspensions, which were examined at the Faculty of Restoration, University of Pardubice within the framework of two international research projects, STONECORE [9] and NANOFORART [10].The experiment was partly carried out during the course of the second of the abovementioned projects in whose framework some of its results were presented.
Another aim, deemed necessary during the course of the experiment, was the testing of mixtures and combinations of lime-alcoholic suspensions with silicic acid esters.This requirement arose primarily as a result of a restoration work on the wall paintings in the dome of St. Isidor Chapel in Křenov near Moravská Třebová, where the question of consolidating the severely deteriorated lime-based paint layers was an issue.Organic based consolidants as synthetic or natural polymers were resolutely rejected here due to the persistent high relative humidity in the building and possible risks related to the changed behaviour of paint layer treated by these consolidants (e.g., dilatation, water vapour permeability, biodegradation).Based on previous experiences [11], the necessary level of consolidation was impossible to achieve using (ideally compatible) lime-alcoholic suspensions.
In order to carry out comparison tests of the consolidants, five lime-alcoholic suspensions tested within the framework of the NANOFORART project were chosen (two of which are commercially available today under the brand names Nanorestore Plus® Ethanol 10 (CSGI ) and Nanorestore Plus® Propanol 10 (CSGI ).The product CaLoSil® E25 (IBZ Salzchemie), tested within the STONECORE project, was selected for comparison.
For the planned creation of more effective consolidants, silicic acid esters (oligomeric alkoxysilanes) in different forms were included in the experiment.Unmodified alkoxysilane (KSE 100 and KSE 300 ), its elasticized modification (KSE 300E) and a product modified with special primers for improving adhesion to the materials bound with calcium carbonate (KSE 300HV ) were used.Products from the manufacturer Remmers were chosen precisely because they offer agents in all our required categories whose examination was planned [12].During the experiment, not only the application of the individual consolidants (above all lime-nanosuspensions) was carried out, but also the application of their mixtures and subsequent application to test panels with the focus on the effect of the combined application of both abovementioned groups of materials was tested.
Characteristics of the test substrates
On the basis of previous experience, it was decided that testing the effects of all selected consolidants and their combinations or mixtures would be carried out on plaster panels with a simulation of a deteriorated paint layer.Due to the heavily damaged paint layers and plaster of the ceiling paintings in St. Isidor Chapel in Křenov, where the resulting technology was to be eventually applied, it was decided to simulate not only the completely powdered paint layer, but also the weakened plaster layering.On that basis, two plaster panels were prepared with a model simulation of the plaster and paint layer deterioration.Panels were prepared ca.not focus on the interactions between the consolidant and the plaster.
Two layers of a lime plaster with a low binder content were applied to the test panels.Whilst the first layer of the rough plaster (arricico) was scraped when still moist and not hardened, the finer plaster layer (intonaco) was applied and compacted with a trowel and subsequently levelled with a wooden float covered in felt.Two layers of paint without a binder were brushed onto the intonaco layer.In order to simulate deteriorated paintings using fresco [13] or lime secco [14] techniques as closely as possible, a limestone powder (calcium carbonate) was added to the pigment.This commonly functions as a binder, filler or pigment in the paint layer.For the composition of the individual layers in parts by volume see Table 1.
Characteristics of applied methods
During the experiment, the main intention was to study the resulting consolidation rate and apparent as well as possible hidden negative impacts on the treated material.For evaluation of the resulting strengthening effect on paint layers, there are not many objective methods.Probably, the peeling test, where an adhesive tape is used to determine the strengthening gain, could be considered as the most widespread method.The principle of the method is based on determining the mass of the peeled tape with defined dimensions before and after the consolidation.Even though this method has several more or less objective varieties, in our experiment, it does not provide correctly interpretable results for evaluating the strengthening effect on the used simulation of the damaged paint layer.With an increase of paint layer cohesion due to the consolidation, the amount of the material peeled off can also increase.This can lead to an incorrect evaluation of the results.In order to eliminate the inaccuracies of this test, it was decided to carry out the test in two different forms, with different adhesion power of the tape used.
For the same reason, another objective method in the form of a sclerometric hardness test was intended.However, as the painted layer of a wall painting, especially unconsolidated, is an extremely sensitive material, the most delicate method of this type of testing was required.The so-called Wolff-Wilborn test, which uses the varying hardness of graphite pencils to evaluate the hardness of materials, seemed to meet this requirement.During a previous test [15], it was evi-dent that the evaluation of the test was inconclusive and thus highly subjective.Therefore, the decision was made to eliminate this method from the following experiment.
On the basis of previous experience in the conservation-restoration field, it was evident that even subjective methods could provide credible and helpful results.Subjective evaluation methods (for example visual observations), as a helpful tool for assessment in restoration and conservation field, are mentioned in many publications [16,17].In addition, restorers are used to working with and subsequently evaluate these kinds of methods.A swab test of the paint layer with the help of a cotton wool tampon followed by a visual evaluation of the amount of material removed was the method chosen to complement objective methods of evaluating the strengthening rate.Visual changes (hazing or darkening) were also subjectively evaluated.However, visual changes were also evaluated objectively with the help of a Konica Minolta CM-2600d mobile spectrophotometer, which enables the recording of a colour space using the CIELAB [18] system.
The structure of samples taken from the surface were studied using Scanning Electron Microscopy (SEM ) to evaluate less evident, direct and indirect effects of consolidation and the micromorphology of the substrate as well as the newly added binder.This was carried out using a Mira 3 LMU (Tescan) electron microscope with EDS analysis Quantax 2000 (Bruker).The measurement was undertaken on fragments of simulated damaged paint layers in a high-vacuum mode using Secondary Electron (SE) and Back-Scattered Electron Detectors (BSE).Samples were sputtered with gold.The aim of this microscopic study was to ascertain the distribution of the consolidant within the pore system of the strengthened substrate, the microstructure of the resulting binder and the eventual changes of the pore system after the consolidation.
In order to evaluate long-term influences affecting the tested substrate, it is usually important whether or not the surface closes up or whether border lines form between the strengthened and un-strengthened zones.In this case, there are not many possibilities for evaluating the consolidation of paint layers.Probably the most common one is evaluating changes of water absorption.A method was sought, which would be adequately sensitive, would provide the relevant results and, at the same time, be available and easily
applicable (especially due to its intended use in situ).
A traditional evaluation of water absorption using the Karsten tube method in the case of delicate paint layers was inapplicable.An alternative could have been the Mirowski Pipe or its modified or automated form [19].In the end, the decision was made to use the Contact Sponge Method [20], which is extremely simple, undemanding and could be used for measuring directly on the artefacts on which the lime nanosuspensions and their mixtures with alkoxysilanes had been applied.This method was lightly-modified for the purposes of this experiment.The original sponge was replaced with a micro-porous polyvinyl alcohol sponge with the dimensions 30 × 33 × 74 mm (corresponding to the original dimensions 174 × 33 × 74 mm).Measuring was carried out by applying the sponge to the surface of the panel through a Japanese tissue.The panel was placed in a horizontal position, so there was probably an influence of the gravity force as well.The sponge was applied in 5 cycles each lasting 60 s -always with 35 s intervals.At the beginning of the experiment, the sponge was weighed in order to study the absorption rate over time.
A comparatively important feature of lime nanosuspensions to measure is their stability, which, as it was already mentioned, in comparison with other commonly used consolidants is markedly lower.Stability was evaluated on the basis of the sedimentation rate of individual studied suspensions.The stability of mixtures with alkoxysilanes was evaluated on the basis of the mixture gelation rate.
System for evaluating results
One of the tasks of the experiment was to compile a logical evaluation system, which would clarify the measured data and allow a more clear interpretation.In order to describe and enter the changes, which take place after the application of consolidants into one logical system, an attempt was made to evaluate these changes on a numbered scale from 0 to 3. "0" marks the best result, meaning that there was a good consolidation or there were no significant visual changes after the application of the consolidant.Conversely, "3" marks the worst result.Therefore, the higher the number on the scale the worse the consolidation and the greater the unwanted effects on the consolidated area.
However, as the measurements and comparisons of methods showed, not all tests could be transcribed into values on a simple scale.Therefore, some tests, such as those dealing with the stability, water absorption and microstructure studies using Scanning Electron Microscopy, were evaluated individually.For the above described evaluation scale, see Table 2.
In conclusion, alongside the evaluation of the individual tests, an overall mark has been calculated for each test field.This is an average mark given for visual impact and effectiveness of the consolidation.
Stability test of selected consolidants
All tested suspensions and some of their mixtures with alkoxysilanes were initially tested for stability.The suspensions and mixtures were placed into test tubes and photo-documentation was carried out at regular intervals.
The individual suspensions displayed vast differences in the stability.Due to the difficulties in accumulating all the tested consolidants, tests were carried out, on average, within 1 month of their production.Full stability (no sediment appearing at the bottom of the test tube) of the suspension ranged from between several days to ca. 8 weeks from the start of the experiment in the case of samples displaying the best results.This means that the full stability in the case of the most successful suspensions is reached in ca. 3 months.However, it is important to note that after a certain period, the sediment can be dispersed into the suspension again, as one manufacturer notes, after shaking the closed bottle or by an ultrasonic treatment [21].The question remains as to whether the original properties of the suspension remain intact -above all the penetration ability.
Regarding stability of the mixtures of lime-alcoholic suspensions with alkoxysilanes, an acceptable level of stability was recorded in the case of mixtures with alkoxysilanes modified for a use on carbonate materials.In one case, this was determined to be 8 days.Nevertheless, for application purposes, stability is necessary only over a period of several hours for a safe application after mixing.For an example of results of the stability test, see Figure 1.
Tests carried out on test panels
Two 100 × 50 cm plaster testing panels were divided into a grid of 9 × 9 cm sections.The consolidants were applied on the panels in a vertical position with a fine sprayer until the substrate was saturatedwhen the suspension remained on the surface longer than several seconds.This process was derived from previous experiments.During the application, the values of relative humidity varied between 35-40 %.Immediately after the application, the panels were placed under an airtight foil maintained with relative humidity of around 55-65 %.The air temperature was recorded in the range of 20-22 °C.The role of the relative humidity was already tested in the previous experiment [22] and it appeared to be significant.It showed that in the relative humidity around 30 %, a stronger white haze is formed than in the relative humidity around 60 %.The tests were carried out on both lime suspensions diluted with various alcohols to a concentration of 5 and 10 g of Ca(OH) 2 per 1 litre of agent and subsequent applications of silicic acid esters (alkoxysilanes) and lime suspensions as well as on their mixtures.Figure 2 shows one of the panels after testing.
Whilst the subjective evaluation tests (white haze, darkening, swab test) were realized after two weeks, part of the objective tests was realized after two months (spectrophotometer measuring, peeling tests I and II) and the other part after more than one year (water absorption test).
For the purpose of the evaluation, result sheets were created, which summarize the tests carried out on each test section.Each square is described on the sheet in detail, with the type of the consolidant and its concentration, followed by a detailed description concerning application -date, number of cycles, the amount of the consolidant used (volume) and details about eventual subsequent wetting.Last but not least, marks (scores) were given according to the results of individual tests.Six marks for each individual test were averaged out into a one overall mark, which serves as a mutual comparison of the test sections.Each sheet contains photo-documentation of each field after application of the consolidant, marking off of the test sections (peeling tests, swab test, water absorption test), an image taken using a USB microscope and a photograph documenting both peeling and swab tests.Figure 3 shows one of these sheets.
Study and evaluation of microstructure
Studies of the microstructure of consolidated samples were carried out using Scanning Electron Microscopy as mentioned above.Representatives of the individual consolidating systems were chosen for the studies -the application of lime suspensions in several cycles, gradual application of alkoxysilane and lime suspension based agents and a mixture of both abovementioned consolidants.One test section was also included where alkoxysilane, its mixture with lime suspension and a pure lime suspension were subsequently applied.For the purpose of the comparison, a sample was also taken from a test section consolidated with a twocycle application of alkoxysilane consolidants.The images of the microstructure (Figure 4) show that each of the aforementioned groups displays a different binder microstructure.Generally, it can be said that the binder accumulates on the walls of the pores and in the contact zones between the grains of the substrates thus forming new connections or bridges between these grains.Contrary to gels forming during the hardening of alkoxysilane consolidants, the binder structure formed through the combinations or mixtures (or both) is more porous or contains a greater number of shrinkage micro-cracks.In all three casesm the binding of the gel to the grains of the substrate is good and new binder does not show the tendency to detach from these grains.In the case of samples consolidated purely with lime suspensions, it was not possible to differentiate conclusively the new binder from the substrate itself.This is due firstly to the composition of the new binder, which is chemically identical to the particles present in the substrate (the paint layer contains CaCO 3 as a filler), and also due to the small amount of the consolidant applied and probably also due to its microstructure which is highly porous.Generally, it could be said that the porosity of the substrate simulating a deteriorated paint layer is in no way affected by the consolidation.Therefore, it can be supposed that not even those physical properties related to the porosity, such as vapour permeability or water absorption capacity (WAC), would change significantly as a result of the consolidation.This was confirmed by the values of the water absorption coefficient measured.The measurement of the WAC was performed after more than one year after the consolidation, so the effect of the temporary hydrophobicity of alkoxysilanes can be considered as negligible.
Discussing the results
Within the framework of the experiment, the individual suspensions were compared and various types of alcohol were tested for diluting them.On one hand, as already mentioned above, the full stability of the most stable suspensions is only approx.3 months from its production.On the other hand, the suspension can be re-dispersed after shaking or an ultrasonic treatment, giving still acceptable consolidation results for a certain period.A fundamental finding was discovered during the study of the effects of the denatured [23] and 99.8 % ethanol [24].Denatured ethanol, which should contain more water, exhibited very good ef- fects during stability tests of the mixtures and during consolidation tests on the test panels seemed to be at least adequate as a substitute to purer 99.8 % ethanol.Thus the original supposition that denaturing reagents, above all those featuring a higher content of water, can lower the stability of a suspension was disproven.This finding is especially important from an economic point of view as the price of the 99.8 % ethanol is 10 times higher than of the denatured ethanol.
A further phenomenon, which was investigated during the tests, was the restriction of the white haze forming during or after the application.Apart from the influence of the concentration of the consolidant, something that is dealt with later, there are several basic rules, which it is sensible to adhere to when applying a suspension.The first is the application method.A sprayer with a gas cartridge producing a fine aerosol was found to be the ideal application method for the test panels.This not only ensures a uniform application, but also restricts the amount of fluid applied within a certain period of time.The application is thus easier to regulate than, for example, using a mechanical sprayer.However, in the case of an application onto badly deteriorated materials in situ, the use of a syringe was found to be more suitable as the application could be focused more directly on uneven absorbent substrates.
Another factor capable of influencing the forming of the white haze is the absorbability of the material being strengthened.Where the absorption rate of the surface is low, there is a greater risk of the white haze forming.Conversely, there where the absorption rate is higher, the material being strengthened is capable of absorbing more of the consolidant.This is of course related to the rate of the deterioration of the material -the greater the deterioration, the more intensive the consolidation must be.This, in turn, affects another factor, which is the amount of suspension applied in one cycle.Usually, the ideal method proved to be the application of the material up to the initial saturation of the substrate.If this rule is not observed, the formation of the white haze can be expected with a greater probability.
The final factor is the influence of humidity.It was found that both the relative humidity and subsequent wetting with water restrict the formation of the white haze.Regarding the relative humidity, values of around 60 % are satisfactory.Conversely, a low relative humidity values of around 30 % increase the risk of the white haze formation.Subsequent wetting with spraying of water significantly reduces the formation of the white haze in general.An interesting fact is that even if white haze has formed, it is possible to reduce it, not however eliminate it, by wetting.This finding could be beneficial in cases where the first traces of the white haze appear.Subsequent wetting by spraying could, even after few hours, entirely eliminate these traces.As the experiment showed, wetting after each application cycle of the suspension has a negative effect on the resulting strengthening of the substrate.Therefore, wetting as little as possible is clearly appropriate, e.g., once every 2-3 cycles, according naturally, to the characteristics of the strengthened material.
Suitable concentration of the suspension and the number of application cycles are of crucial importance.According to the tests undertaken on the panels as well as on genuine historical monuments, the assumption is that in consolidation paint layers 3-6 cycles of a concentration of 5-10 g of Ca(OH) 2 per 1 litre of solvent are suitable.The concentration and number of cycles naturally depend on the type of the material being consolidated.However, experience has shown that more than 6 cycles of a concentration 5 g of Ca(OH) 2 per 1 litre of solvent greatly increase the risk of the white haze formation.
no. /
Surface Consolidation of Wall Paintings Using Lime Nano-Suspensions When comparing individual suspensions with each other, no crucial differences favouring one or the other were noted.For example, pure lime suspensions were used during the consolidation of the medieval wall paintings in St. Vitus Church in the submerged village of Zahrádka near Ledeč nad Sázavou.Here, 4 cycles of a 5 g/l concentration were applied to the paint layer.
The application of pure lime suspensions on extremely deteriorated materials was found to be unsuitable due to their relatively low consolidation effect and the risk of the white haze forming as a result of the higher number of application cycles.Therefore, tests were carried out with their mixtures with silicic acid esters and with separate subsequent applications of both consolidants mentioned.Regarding the separate subsequent applications, the test panels displayed a relatively satisfying increase in the paint layer cohesion, but also visual changes in the form of darkening and the white haze creation.Better results as regarding visual impacts and the consolidation were achieved with mixtures.The best mixtures were found to be those in the volume proportions of 1 : 1 : 1 (alkoxysilane KSE 100 to alkoxysilane KSE 300 or 300HV to lime suspension 10 g/l).Even mixtures in volume proportions of 1 : 1 : 2, achieved positively evaluated results.Faring only slightly worse were the 0 : 1 : 1 mixtures.When comparing the influence of the unmodified alkoxysilanes (KSE 300 ) and the specially modified product for improved adhesion to carbonate materials (KSE 300HV ), the unmodified product (KSE 300 ) performed better.
Two cycles of application of the abovementioned mixtures were not recommended within the framework of the experiment for use on the specific substrate of the test panels.However, they were subsequently used on the severely deteriorated paintings on the ceiling of the chapel of St. Isidor in Křenov.In this case, a combination of consolidandts was selected, in which an alkoxysilane agent in a "concentration" of 100 g/l (KSE 100 ), a mixture of lime suspension concentrated to 10 g/l (diluted ZFB 703i) and alkoxysilane guaranteeing the exclusion of gel at a rate of 300 g/l (KSE 300HV ), and finally, ca.6 cycles of lime suspension concentrated to 5 g/l (diluted CaLoSil E25 ) were successively applied.In the areas where even this combination could not achieve adequate strengthening, the mixture previously described was applied again.The consolidation effect was evaluated positively with subjective methods as regards both the effect of the consolidation and its visual aspect, where no changes were registered.
On the basis of studying the microstructure, as described above, it seems that mixtures of both of these inorganic consolidants could be the appropriate solution to consolidating extremely deteriorated substrates where the use of alkoxysilanes alone may not always be successful.In this regard, their use in the structural strengthening of lime plasters would be interesting.
Conclusion
Overall, it could be said that lime nano-suspensions are not only appropriate to use for lime based paintings and washes on historical plasters with a light to medium deterioration but, thanks to their material compatibility with the original, they are highly suitable regarding a long-term stability.However, based on the experience with other materials and technologies in the past, it is necessary to note that some unpredictable negative effects of the consolidation can appear after several years, or even decades.Nevertheless, thanks to the high compatibility of this material with the substrate to be strengthened, the risk itself can be considered to be relatively low.This study also gives very detailed instructions for the application of lime nano-suspensions, which enables a good consolidation effect without visual changes to the original material (creation of white haze).
The use of lime nano-suspensions alone for heavily deteriorated materials could be, in some cases, unsatisfactory.It was found that it is possible to achieve the consolidation of such materials by successive applications and mixtures of nano-lime suspensions with alkoxysilanes.From a theoretical point of view, regarding compatibility, the exclusive use of lime consolidants on paint layers is, of course, more suitable.Another reason for precautions when using alkoxysilanes is that after the consolidation, the paint layer could become harder, less elastic and more fragile thus increasing the risk of being damaged in the future.Therefore, the author of this study is of the opinion that the application of alkoxysilanes or their mixtures with lime suspensions is suitable only in those cases where strengthening is not effective with lime suspensions alone.In cases where more effective consolidation is necessary, it is suitable to apply those consolidants mentioned only in as small an amount as possible and to finalize the consolidation with the help of lime suspensions.
Figure 2 .
Figure 2. Condition after application of all consolidants on one of the test panels.
Figure 3 .
Figure 3. Test sheet for one of the test sections (IID7).It describes a successive application of a 1 : 1 : 2 mixture and lime-alcoholic suspension in a concentration of 5 g/l.
Figure 4 .
Figure 4. SEM/BEI (scanning electron microscope -backscattered electron image).On the left -microstructure of the new binder in the substrate pores as a result of gradual application of lime nano-suspension and alkoxysilane and, on the right, the microstructure of the new binder formed from the lime nano-suspension and alkoxysilane mixture.
Table 1 .
Composition of individual plaster panel layers (parts by volume).
2 month before the application of the consolidants started.Nevertheless, a test of maturity of the substrate (complete conversion of Ca(OH) 2 to CaCO 3 ) wasn't realized, because the simulation of the paint layer contained no binder and the experiment did
Table 2 .
Table showing methods of assessing the consolidation and related unwanted effects, defined after the application of consolidating agents on the basis of threshold values | 7,446.6 | 2017-05-02T00:00:00.000 | [
"Materials Science",
"Art"
] |
Hasilpedia: Transforming knowledge management at Inland Revenue Board of Malaysia
This paper provides a working example of how technology plays an important role in knowledge management for the Malaysia’s federal tax collection agency, Inland Revenue Board of Malaysia (IRBM). The IRBM had successfully gone through a five year organizational transformation process that had resulted in significant performance improvements duly recognized by the Malaysian government. Led by its visionary Chief Executive Officer (CEO), various initiatives had been implemented, including those which placed technology as a key driver in its operations. The focus of this paper is on the organization’s ‘knowledge base’ system, or the ‘k-base’. A computerized database for internal use, the k-base was developed in-house and currently managed by IRBM’s Information Technology Department. Originally created to support information sharing among the organization’s auditors, the k-base today features a myriad of information and is accessible by all employees. This paper will trace the journey of the k-base from its original version to being IRBM’s prized possession today as well as the organization’s plans for its future.
Introduction
Knowledge management in the organizational context refers to an approach in actively leveraging upon individual knowledge and expertise towards creating value for the organization (Scarborough, 2003). While much has been written on knowledge management (KM) technologies in organizations, there seems to be limited discussion on the topic when it comes to governmental bodies. For governmental bodies in developing countries like Malaysia, such studies are even more scant. In addressing the gap, this paper attempts to provide a working example of technology and KM in the Malaysian government. It describes how technology plays an important role in knowledge management for the Malaysia's federal tax collection agency, Inland Revenue Board of Malaysia (IRBM), or 'Lembaga Hasil Dalam Negeri Malaysia' as it is referred to in the national language. The term 'Hasil' in the organization's name can be translated to mean revenue, or income, which aptly refers to the IRBM's main role as the country's tax collection agent. Hence, the research question is "how can technology be utilized by a governmental agency such as the IRBM for knowledge management?" The objectives of this paper are:
1.
To determine the role of technology for KM in the context of IRBM. 2.
To identify the impacts of technology on three specific KM activities, namely knowledge acquisition, knowledge sharing, and knowledge application among IRBM employees.
The IRBM makes an appealing case study as it is currently regarded as an exemplary governmental body which had gone through a successful five year organizational transformation process that made them less bureaucratic, and more efficient and corporate-like. Their success story is well known among those in the government. Much of it is made public via the mass media, public talks, and management seminars, whereby the IRBM's Chief Executive Officer (CEO), Tan Sri Dr. Shukor Mahfar, is often invited to share management strategies and practices for other organizations to learn from their achievements.
At the heart of IRBM's transformation lie the CEO himself and his passion for continuous learning and improvement, and the inculcation of an innovative and knowledge-oriented culture among Hasilians. Hasilian is a term used to refer to the organization's workforce, and in many internal communication activities, the organization itself is termed 'the kingdom of Hasil'. Under the visionary leadership of the CEO, the top management team crafted and implemented various initiatives, some of which placed technology as a key driver in its improved operations. Technology now plays a much more significant role in its operations compared to the years prior to the start of the transformation, which was when Tan Sri Dr. Shukor Mahfar took office in 2011. This paper will trace the journey of the k-base from its original version to being IRBM's prized possession today as well as the organization's plans for its future. However, to establish the research background for the case, the following section presents a discussion on the study of KM in government institutions, with a focus on the role of technology.
Knowledge management technology in Malaysian government institutions
KM literature has established that organizations require comprehensive KM through the usage of information and communication technology (Kammani & Date, 2009). In today's organizations, advanced technology has made it possible for employee knowledge to be stored and managed using various technologies such as online databases, groupware, data warehouses, and information processing software (Kamhawi, 2010). However, KM researchers have long cautioned that the mere activity of storing data or information should not be equated with activities involved in extracting, transferring and creating knowledge for value creation in the organization. The creation of value from KM activities may require different technological infrastructure in different types of organizations as organizations handle knowledge in different forms (Whitehill, 1997). One category of difference is whether the organizations belong to the public or private sector.
A review of literature has shown that much of KM research has covered various KM technologies in private sector organizations, but organizations in the public sector, i.e. government agencies seem to have not been accorded similar attention. In today's digital economy, the emergence and adoption of KM technology is especially important to the government institutions of developing countries such as Malaysia in order for the countries to keep up with the more developed economies (Junoh, Osman, & Halim, 2014) However, literature on intra-organizational usage of technology for KM in Malaysian government institutions has not been found so far. This paper aims to address the research gap on this issue. It provides a working example on the usage of technology in KM for a Malaysian government agency. It describes how technology plays an important role in KM for the Malaysia's federal tax collection agency, the Inland Revenue Board of Malaysia (IRBM).
Organizational background
The IRBM is a revenue collecting agency under the Ministry of Finance, Malaysia. IRBM was established based on the Inland Revenue Board of Malaysia Act 1995 to give it more autonomy in financial and personnel management as well as to improve the quality and effectiveness of tax administration in Malaysia. It was formerly known as the Department of Inland Revenue Malaysia, before becoming IRBM on 1 March 1996. The agency is responsible for the overall administration of direct taxes under Income Tax Act 1967, Petroleum (Income Tax) Act 1967, Real Property Gains Tax Act 1976, Promotion of Investments Act 1986, Stamp Act 1949, and Labuan Business Activity Tax Act 1990(IRBM, 2016. It currently has 12 state offices and 36 branches in various locations all over the country. The vision of IRBM is to be a leading tax administrator that contributes to nation building. Its mission is to provide excellent tax services by improving voluntary compliance, implementing an integrated and transparent taxation system, increasing operational effectiveness through innovative processes and information technology, and by enhancing a competent workforce. Its quality policy is "with a foundation based on integrity, we are committed to provide the best service to the customers" (IRBM, 2016). To act as agent of the government and to provide services in administering, assessing, collecting and enforcing payment of income tax, petroleum income tax, real property gains tax, estate duty, stamp duties and other taxes agreed upon between the government and the Board. b) To advise the government on matters related to taxation and cooperates with the Malaysian ministries and statutory bodies on such matters.
c)
To participate in meetings, discussion and agreements pertaining to domestic and international taxation. d) To become a collection agent for and on behalf of any statutory body to recover loans payable to it under the written law in Malaysia. e) To diligently carry out other functions given to IRBM under any other written law in Malaysia.
Its organizational structure is shown in Fig. 1.
Transformation of IRBM
Tax revenue collection is the main component of the IRBM's key performance indicators.
In 2011 and 2012, IRBM had recorded a significant uptrend in tax revenue collection. Revenue collection for 2011 was posted at RM109.67 billion, which surpassed the target of RM91 billion set by the Malaysian Ministry of Finance, with an increase of 26.7% compared to collections in 2010. This phenomenon was noted by the Malaysian Government as comparisons were made to its performance in the three preceeding years when collections dipped from RM90.7 billion in 2008 to RM86.5 billion in 2010. When revenue collection for 2012 was posted at RM123.5 billion, surpassing the set target of RM110 billion, IRBM gained further recognition for its achievements (IRBM, 2014b).
IRBM publicly attributes its successes to radical changes in work culture and management styles instituted when its new CEO, Tan Sri Dr. Mohd. Shukor Mahfar took office in January 2011. He brought with him a strong belief in the need to improve the work environment of Hasilians, to re-energize them, and to get them to clearly understand and fully support the organization's strategic goals.
Change management strategies and initiatives
The organization's focus shifted from solely managing tax revenue collection to activities involving the management of its human capital, in recognition of the value of skills and knowledge resources they represent. Human capital management strategies are central to IRBM's organizational transformation strategy, and it was the area with the most significant change. Work performance targets were much more effectively communicated and details cascaded down to all levels of employees using a simple, easy-to-understand language and techniques. Word has it that the CEO insists on every employee message to be designed in such a way that it can be understood by even those in the lowest rung.
Performance is tracked and feedback is given on a monthly basis. As an incentive for targets to be met, performance was directly linked to monetary rewards such as annual bonuses as well as non-monetary rewards i.e. excellent service awards ceremonies, special badges and lanyards to be worn by employees denoting their excellent performer status, and other employee recognition programs designed to celebrate work achievements. These initiatives are different from previous practices where top level targets and strategic plans were the privy of only a few, and the performance-reward link was not clearly established.
There were various initiatives formulated to enhance learning, creativity, innovation, and the development of new ideas such as knowledge sharing sessions by local and international respected personalities, an employee suggestion program allowing for idea submission via the special email channel<EMAIL_ADDRESS>and the establishment of an internal think tank comprised of Hasilians with Masters and doctoral qualifications, who gather at brainstorming workshops periodically organized by IRBM head office. Even the Malaysian public as tax payers are provided with a direct channel to the CEO via<EMAIL_ADDRESS>for the forwarding of complaints, feedback, as well as ideas for improvement. Internally, the CEO's strong leadership, his natural flair as a public speaker, charismatic persona, people-oriented personality, and dedication towards the establishment of a learning-oriented culture helped the IRBM top management team obtain the necessary buy-in from Hasilians to focus on the continuous acquisition of new knowledge and its application towards enhanced work performance.
Focus on technology
Other than human capital management strategies, IRBM's transformation toward a more efficient and agile government entity had also given priority to the development of a strong technological infrastructure. The investment in technology prior to the transformation was necessary largely due to the host of e-services introduced for the convenience of IRBM's different categories of tax payers. The 'electronic filing' of taxes i.e. e-filing was implemented in 2005, allowing self assessment to be done by tax payers followed by the filing of taxes via an online portal. During the transformation period, technology had been given a significant boost to play a more central role in its operations. The 'mobile filing' i.e. m-filing service was later introduced in 2012 to allow for tax filing via smartphones, which is more in line with the tech-oriented lifestyle trends of its customers.
Apart from the services for its customers, a strong technological infrastructure was also needed by IRBM to support the organization's internal operations. The Case Management System (CMS) for audit and investigation purposes, and the Customer Relationship Management (CRM) system for the handling of inquiries, complaints, and feedback from taxpayers are among different applications developed specifically for IRBM operations.
There are applications such as the CMS and CRM which are for the exclusive use of specific parties and departments in IRBM. However, the knowledge base is open for access by all Hasilians. It is a system that is viewed favorably by Hasilians, and IRBM management has mentioned that it supports the CEO's vision of putting IRBM onto a path of becoming a learning organization. In his messages to Hasilians, he stresses on the need to capture and analyze job-related knowledge in order to learn from past mistakes and achievements.
Organizational pressure points for k-base creation
The context faced by IRBM is such that tax management work is highly complex, and regular changes in the various tax legislations under its administration make it even more challenging for the tax regulating body. Prior to the creation of the k-base system, it had come to management's attention that IRBM tax officers were struggling to keep up with the need to frequently update their technical knowledge. They were also facing high workload and limited time for reference checking when making technical decisions.
The situation was further aggravated by written manuals and reference materials available to the officers contained technical information that was not systematically arranged. The paper-based manuals were cumbersome to handle and some parts featured out-dated contents, making them inadequate for use by the officers. Much of the officers' time was wasted in information search, whereby they were mostly required to rely on memory to locate and identify relevant information. It therefore took the officers much longer time than necessary to complete their tax assessment work, leading to increased case backlogs at the branches.
Increasing space needed for storage of the paper-based manuals and reference materials also became a cause for concern. Document volume increases are largely due to the yearly changes in tax provisions. Besides document storage issues, IRBM also had to bear the costs of documenting tax legislation amendments and distributing the amended documents to the branches. Further, there was an increasing need for documentation of specific knowledge and skills. The lack of documentation of this knowledge led to it being lost when the officers are transferred or when they leave the organization, resulting in interruptions in the branches' daily operations.
The officers were clearly in need of a comprehensive technical reference system that they could quickly and easily access on-the-job. It should allow for fast updates in tax legislation amendments, and enable the officers to document best practices and effective work methods for other officers to learn and adopt. Thus, it became apparent that IRBM was in need of a computerized database that could be accessed by tax officers to perform their duties more effectively.
K-base system
The creation of the knowledge base or k-base was first proposed by IRBM's Tax Operations Department in 1998 as a response to challenges experienced by the organization then. It was to be a 2-year system development project, which saw the kbase originally named 'Technical Reference System'. After several careful reviews of the original plan, the project finally commenced in 2001 and its name was changed to 'knowledge base'. It was developed using Lotus Notes 4 application by IRBM's Information Technology Department, according to content requirements of the Tax Operations Department as the system owner. Until today, the Tax Operations Department remains the system owner as its tax officers are the main target group of users for the kbase.
The k-base first came into use in 2007 and was later made to undergo a debugging and re-coding process in 2010 for system improvement purposes. The k-base has now stabilized and supports a high volume of content that is uploaded by authorized personnel based in the respective departments and branches. As the IRBM is a regulatory body under the Malaysian government, the language used for the k-base is Bahasa Malaysia, which is the national language used by the Malaysian government. The discussion that ensues will explain the basis for the creation of the k-base, and the system characteristics in terms of structure, content, features, user categories, and access types.
System characteristics
Based on the problems faced by IRBM described earlier, the Information Technology Department was tasked to create a computerized database that features user-friendly interface for easy information search and content that is indexed according to technical subject areas. The k-base was to support fast and accurate decision making by the tax officers. The initial two main database components were details of relevant tax legislations and their interpretations. Today, the k-base carries a variety of information types which include operational policies and instructions of departments and branches in the form of images, videos, and documents.
The system owner, namely the Tax Operations Department grants access for specific individual users from each department and branch. The Information Technology Department then creates the 'User ID' and 'passwords' for authorized users. Only these users are allowed to upload content into the k-base. However, prior to the content upload, formal approval must be obtained from their superiors. All departments and branches are responsible for their own content. To ensure good organization of the k-base content, 'folders' that are created by the departments and branches to store relevant information must first be approved by the Information Technology Department. This move is to ensure that all departments and branches abide by standardized categories of information and also to avoid the creation of too many 'folders'. If the content does not belong to any of the subject matters indicated by the respective 'folders', requests can be made to the Information Technology Department for the setting up of a new 'folder'.
Information search features
As for the viewing of k-base content, all Hasilians are allowed access via the employee portal. However, some content can only be viewed but not printed out, such as the Tax Act 1967. When searching for information, users are to type in relevant 'keywords' into the search panel, which is similar in approach to search engines like Google. If the content type is an image, users can still find the image using a 'keyword' because all uploaded content comes with a 'content summary'. Users are also provided with three alternative types of searches, namely 'quick search', 'detailed search', and 'manual search', which respectively means that users can do a quick search involving all content in all years, a more detailed search only in selected content and in selected years, or manually select the folder, subfolder, and year of interest. Apart from the three types of searches above, k-base also features the 30 'most popular search' and also 30 'most recent search' for the convenience of users. The system allows users to 'bookmark' their search for later reference. A 'user manual' is also available in the form of a video tutorial.
K-base impact on IRBM
IRBM management has described the benefits of the k-base to the organization in terms of increase in the level of technical know-how among its tax officers, standardization of practices across the branches, rapid distribution of new information, cost saving by no longer depending on paper-based communication, and ease of data transfer with other internal computerized databases. There has also been a reduction in time taken by tax officers in processing case files. Equipped with a rich information source available at their fingertips, tax officers have been able to learn from other officers' best practices and as a result, they are able to perform their duties more effectively.
Apart from the above claims by IRBM management, there is evidence on the benefits of the k-base obtained from a study on leadership, people management, KM capacity, and organizational transformation in IRBM conducted by Rosdi and Norhashim (2015) of Multimedia University, Malaysia as a university-industry collaborative research project with IRBM. Citing the works of Chen and Huang (2009) and Lin and Lee (2005), the study by Rosdi and Norhashim (2015) defines the 'KM capacity' of an organization as its capacity to utilize and facilitate knowledge management activities and tools. The study further cites the work of Gold, Malhotra, and Segars (2001) in highlighting that the term 'KM capacity' has been widely and consistently reflected in KM literature in terms of an organization's knowledge acquisition, knowledge sharing, and knowledge application activities.
The study involved 43 selected respondents who had served the organization between 2008 and 2013 so as to capture the scenario before and during the transformation period which began in 2011. The researchers had specified to IRBM that employees selected must be those involved in policy making, operations, as well as the provision of support services (HR, IT systems, etc.) during that time period. They were also to be selected from all job levels in the organization, namely from non-executive to top management. Respondents who fit the criteria were duly identified by the IRBM management team, who had subsequently issued official invites for the focus group discussion sessions. The respondents were then engaged via seven different focus group discussion sessions whereby they were group based on similarity in work functions and job levels. The sessions were conducted at IRBM headquarters by the university researchers without any involvement from IRBM personnel. This approach was intended to ensure a non-threatening environment for a more thorough and forthright sharing of views.
Respondents were engaged in discussion sessions on how various initiatives implemented by IRBM during the transformation period had influenced them on the job; more specifically in terms of their behaviours with regards to knowledge acquisition, knowledge sharing, and knowledge application. The guiding questions which helped trigger statements from respondents relevant to the k-base are listed below: 1.
What are the initiatives implemented as part of IRBM transformation? 2.
What are the objectives of those initiatives? 3.
What are the KM practices in IRBM (systems, platforms, stakeholders)? 4.
How do the initiatives impact the capacity for knowledge acquisition, knowledge sharing, and knowledge application? 5.
What kind of knowledge critical for tax revenue collection? 6.
How do the initiatives impacts tax revenue collection?
The findings revealed that activities of knowledge acquisition, knowledge sharing and application of new knowledge by Hasilians throughout the organizational transformation period had been significantly enhanced with the existence of the k-base. Impacts of the k-base on employees' knowledge acquisition activities are reflected in the following quotes:
"Most of our (work-related) information is highly complex and technical…fast acquisition of new knowledge is important… (it is) necessary to go through the kbase"
"What really helps (in acquiring work-related information)…is a computerized system such as the k-base" "(We) have to be constantly updated…apart from our morning briefings…we access the k-base on a daily basis…" "There is a lot (of work related information) that we can obtain from the k-base…" Besides acquiring new knowledge, employees in the study had also indicated that IRBM's technological infrastructure was revitalized during the organizational transformation process, and that the k-base enabled them to better share knowledge among fellow Hasilians. A few employees' remarks concerning knowledge sharing are as follow: "There is a variety of information in the k-base…(as) it is our work reference system…for example, whenever there is a technical issue, the Drafting and Law Revision Department would issue a public ruling on its interpretation…(so that) every branch would refer to the same ruling" "We share a lot (of important information) in the k-base… (such as) operational instructions, circulars, technical rulings…" Employees in the study reported increased tendencies to apply newly acquired knowledge that had helped improve work quality and efficiency. They had the following to say on activities relevant to knowledge application: "(In the past) everything was on print….and things were not as orderly ….(but) I had recently attended a convention during which they informed that the Drafting and Law Revision Department would be updating the k-base to capture everything from the past...(which means) all that we need will be in the k-base…from technical circulars to administrative information…" "Previously, our information sources were quite limited…but with (the current) computerized systems, we have been able to widen our scope in terms of new tax revenue sources…" "It (the k-base) helped us to achieve the (targets set for) Key Performance Indicators in our jobs and helped widen the (tax) base" Findings from the study by Rosdi and Norhashim (2015) had concluded that enhanced levels of knowledge acquisition, knowledge sharing, and knowledge application among Hasilians during the organizational transformation period had resulted in increased work-related competencies and job performance. Work performance improvements had in turn became an important factor leading to significant increases in IRBM tax revenue collections in 2011 and 2012.
In sum, the descriptions of the k-base as well as the employee narratives above have addressed the question that the paper intended to answer, which is on how technology can be utilized by a governmental agency such as the IRBM for knowledge management. For the first objective of this paper which focuses on the role of technology in IRBM, the k-base is an example of technology used for KM. It functions as a system that is open for use by all Hasilians to connect to relevant knowledge that supports their work performance. It supports the CEO's vision of morphing IRBM into a learning organization by allowing the organization to capture and analyze job-related knowledge to learn from past mistakes and achievements.
The second objective of this paper is on the impact of technology on three specific KM activities, namely knowledge acquisition, knowledge sharing, and knowledge application. For IRBM, the impact of the k-base was positive and significant in terms of the three KM activities as well as on the organization's performance in terms of tax revenue collection.
Significance to the nation
From the perspective of a Malaysian tax payer, efficiency improvements in IRBM have translated into a shorter time taken to process appeals and tax refunds. IRBM's move toward technology-based operations and the management successes that followed had received positive attention not just from the Malaysian public, but also from other governmental departments and bodies. It becomes an important reference for others on how technology enable an organization to better serve its employees as well as customers. As for the nation, improvements in IRBM operations had favourable impacts on tax revenue collection, which subsequently increased the country's resources for nationbuilding. With such an important impact on IRBM stakeholders, it is hoped that the future will bring more positive development for the k-base.
Looking ahead
At the moment, there is only a number of Hasilians that have mobile access to the k-base via their smartphones. Due to the nature of their jobs, these individuals had been handpicked and allowed such access by the system owner, the Tax Operations Department. As capabilities of the k-base continue to grow, perhaps the future will see mobile access to the k-base becomes an unlimited privilege to all Hasilians. With the trend toward digitalization of services and increased roles of technology in learning, the future of IRBM's k-base appears bright. A rebranding of the k-base is in the pipeline and a name change to 'Hasilpedia' may happen in 2016. This latest development reflects the IRBM's vision of seeing the k-base plays a bigger, more important role in the organization's aspiration of becoming a learning organization in today's knowledge economy. | 6,350.4 | 2016-12-06T00:00:00.000 | [
"Business",
"Computer Science"
] |
Effective Combination Immunotherapy with Oncolytic Adenovirus and Anti-PD-1 for Treatment of Human and Murine Ovarian Cancers
Simple Summary This study was conducted to find a new, more efficient, treatment for ovarian cancer. A combination of an oncolytic adenovirus (TILT-123) with immune checkpoint inhibitors was employed to treat ex vivo patient samples and was found statistically significantly more effective than control treatments ex vivo and showed potent efficacy towards in vivo tumor growth. Abstract Ovarian cancer (OvCa) is one of the most common gynecological cancers and has the highest mortality in this category. Tumors are often detected late, and unfortunately over 70% of OvCa patients experience relapse after first-line treatments. OvCa has shown low response rates to immune checkpoint inhibitor (ICI) treatments, thus leaving room for improvement. We have shown that oncolytic adenoviral therapy with Ad5/3-E2F-d24-hTNFa-IRES-hIL2 (aka. TILT-123) is promising for single-agent treatment of cancer, but also for sensitizing tumors for T-cell dependent immunotherapy approaches, such as ICI treatments. Therefore, this study set out to determine the effect of inhibition of the immune checkpoint inhibitors (ICI), in the context of TILT-123 therapy of OvCa. We show that simultaneous treatment of patient derived samples with TILT-123 and ICIs anti-PD-1 or anti-PD-L1 efficiently reduced overall viability. The combinations induced T cell activation, T cells expressed activation markers more often, and the treatment caused positive microenvironment changes, measured by flow cytometric assays. Furthermore, in an immunocompetent in vivo C57BL/6NHsda mouse model, tumor growth was hindered, when treated with TILT-123, ICI or both. Taken together, this study provides a rationale for using TILT-123 virotherapy in combination with TILT-123 and immune checkpoint inhibitors together in an ovarian cancer OvCa clinical trial.
Introduction
Every cancer type has unique characteristics that should be taken into consideration for successful treatment outcomes. For ovarian cancer (OvCa), challenges often lie in the frequently disseminated disease and suppressive tumor microenvironment [1]. Amongst other cells, adipocytes, tissue resident macrophages and myeloid derived suppressor cells (MDSCs) in the omentum and/or in the tumor secrete growth factors and other bioactive molecules that facilitate tumor cell and regulatory T cell (Treg) proliferation and viability and hinder T cell cytotoxicity functions [2,3]. Collectively, this creates a tumor environment that supports tumor growth and treatment resistance and makes the tumor difficult to cure.
Immune checkpoint inhibitors (ICIs) targeting programmed cell death-1 (PD-1), programmed cell death-ligand 1 (PD-L1) or cytotoxic T lymphocyte-associated antigen-4 (CTLA-4) have become widely studied and used immunotherapies because of their impressive clinical results in some cancer types, such as melanoma. For example, in a Phase III study of Stage III melanoma, the authors reported an improved 5-year recurrence-free survival (40.8% vs. 30.3% with placebo) with ipilimumab (anti-CTLA4), and a 5-year overall survival of 65.4% vs. 54.4% with placebo [4], thus leading to the FDA approval for ipilimumab for adjuvant therapy of melanoma.
In another phase II study, investigators enrolled patients with relapsed/refractory classic Hodgkins lymphoma after autologous hematopoietic cell transplantation treatment failure (in regard to brentuximab vedotin treatments). All patients received nivolumab (anti-PD-1) 3 mg/kg every 2 weeks until disease progression/unacceptable toxicity. After a median follow-up of 1.5 years, the objective response rate varied between 65% to 73% between treated groups [5].
However, the clinical efficacy of ICI is not as impressive in all cancer types and thus biomarkers indicating efficient use of ICI have been a relevant field of study [6]. In ovarian cancer the efficacy of ICI is hampered due to the aforementioned suppressive environment and often late diagnosis [7].
Taking this into consideration, it is evident that new therapies need to be developed. From an immunological point of view, it is promising that some tumor types such as epithelial ovarian cancers express many known tumor-associated and mutational antigens (Tumor associated antigens (TAAs) or neo-antigens, respectively), and some tumors are infiltrated by lymphocytes (TILs) [8][9][10]. However, this is not always the case, and in a comparison made by Alexandrov et al. it was reported that OvCa has fewer somatic mutations compared to tumor types that have high response rates to ICI (placed 15/30 among analysed cancer types) [11]. This is underlined by the fact that, despite the success of immunotherapy in other malignancies, ICI for epithelial ovarian serous cancer has only resulted in modest results so far, with median response rates usually varying from 10% up to 15% [12].
There is some evidence indicating that the lack of tumor-infiltrating cytotoxic lymphocytes and chemokines for T cell recruitment significantly reduces the antitumor effects of ICIs [13]. Therefore, oncolytic viruses (OVs) have emerged as a way to boost immunotherapy. They are promising tools as selective replication and direct oncolysis in tumor cells are coupled with the successful recruitment of immune cells [14]. Nevertheless, it has been speculated that single treatments have limited therapeutic efficacy due to intrinsic immune regulating counter reaction that limits the effect of recruited immune cells [15]. However, for example talimogene laherparepvec, an oncolytic herpes virus, has already received FDA and EMA approval for use in certain stages of melanoma [16], showing the positive effects of virotherapy. Clinical trials on oncolytic viruses as a single agent therapy for treatment of ovarian cancer have been started, not only with adenoviruses but with others too. For example reo-, vaccinia-and measles-viruses have entered clinical trials [17]. An oncolytic measles virus called MV-NIS, modified to express the sodium iodine symporter, had dose dependent effects in an clinical evaluation and was well tolerated [18]. Thus, to compensate for the modest results of single treatments, the combination of OVs and ICIs may be a reasonable and promising strategy to synergistically overcome immunosuppression in the TME [15]. Subsequently, in this study, an oncolytic adenovirus called TILT-123 that was created to be synergistically employed together with adoptive T cell therapy [14,19] was used in combination with ICI. This was studied in an ovarian cancer setting. Thus, the study rationale was that the cytokines coded by the virus, TNFa and IL2, would enhance T cell recruitment, proliferation and activation, while the ICI would plausibly retain an active antitumor immune reaction. Thus, the ICI combination would enable avoidance of the normally inevitable immune suppressive counteraction towards the virus and ICI. Furthermore, in support of our hypothesis, it has been shown that the combination of checkpoint inhibitors and TILT-123 work synergistically. More specifically, Cervera-Carrascon et al. showed enhanced efficacy and positive tumor microenvironment changes [20].
In this study we demonstrate that TILT-123 therapy with or without ICI caused a TME change by inducing pro-inflammatory cytokine release. The combinatory treatment with anti-PD-1 produced an activated T cell phenotype (in both CD4+ and CD8+ cells), characterized by higher granzyme B, Lamp-1 and CD69+ expression on T cells. Additionally, mice treated with TILT-123 and ICI anti-PD-1 showed tumor growth reduction in both subcutaneous and intraperitoneal tumors. The TILT-123 virus was created using the Ad5/3-E2F-d24 as backbone. Transgenes were inserted using a BAC recombineering system described in [14]. The built adenovirus has a backbone of Ad5/3-E2F-d24 (OAd) carrying human IL-2 (hIL2) and hTNFa. Two modifications renders the virus replication tumor specific: an E2F promoter and a 24-base pair (bp) deletion in the constant region 2 of E1A.
Cell Lines and Viruses
The construction of replication-deficient adenoviruses Ad5-CMV-mIL-2 and Ad5-CMV-mTNF-α, was carried out as described previously [14,21]. In short, Ad5-CMV-mTNFα or mIL2 were constructed by inserting expression cassettes with murine cytokines into the multiple cloning site of the shuttle plasmid pDC315 (AdMax, Microbix Biosystems, Mississauga, ON, Canada). Shuttle plasmids were recombined with pBHGloxdelE13cre (AdMax), which carries the whole adenovirus genome, and resulting rescue plasmids were transfected to 293 cells to generate the final virus constructs.
Patient Material and Processing
Patient samples that were used for this study are listed in Table 1. The samples were collected from OvCa patients at the department of obstetrics and gynecology, Helsinki University Central Hospital in 2021. The samples were collected from patients undergoing surgical resection at the Helsinki University Central Hospital (Helsinki, Finland). Patients included in the study were treatment naïve. Patient samples were processed to fresh single-cell tumors. Tumors were diced into small fragments and placed in a 50 mL falcon tube containing RPMI 1640 (Sigma-Aldrich, St. Louis, MO, USA) supplemented with 1% L-glutamine, 1% Pen/strep (Gibco, Thermo Fisher Scientific, Waltham, MA, USA), collagenase type I (170 mg/L), collagenase type IV (170 mg/L), DNase I (25 mg/mL) (all enzymes from Worthington Biochemical, Lakewood, NJ, USA) for 2 h enzymatic digestion with rocking at +37 • C. After digestion, the cell suspension was filtered through a 100 µm filter and treated with Ammonium-Chloride-Potassium lysis buffer (Sigma-Aldrich, St. Louis, MO, USA) for the removal of undigested fragments and erythrocytes. The resulting single-cell suspension was used to establish ex vivo tumor cultures by plating fresh 0.35 × 10 6 cells in a 96-well U-bottom plate followed by centrifugation at 300× g for 5 min. Wells (with supernatant intact) were then infected with 100 VPs/cell of either Ad5/3-E2F-D24 (from here referred to as the backbone virus) or Ad5/3-E2F-D24-hTNFa-IRES-hIL2 (TILT-123), anti-PD-1 (20 µg/mL), anti-PD-L1 (20 µg/mL) or medium (vehicle/neg. control). After incubation, the samples were analysed 1 to 7 days after incubation (marked in graphs) by flow cytometry or for viability. The use and collection of patient samples were approved by the ethics board of the University Hospital and was based on informed consent (ethic board approval number 120/03/02/16).
Cell Viability
Cell viability was measured by incubating patient sample for 2 h with 20% of CellTiter 96 AQueous One Solution Proliferation Assay reagent (Promega, Madison, WI, USA). Absorbance was read at 490 nm using a Hidex Sense plate reader (Hidex, Turku, Finland). Data was normalized to the uninfected mock control group.
Samples were measured in triplicates using the BD Accuri C6 flow cytometer (BD biosciences, San Jose, CA, USA) and analyzed using LEGENDplex Data Analysis Software Suite (Biolegend, San Diego, CA, USA). Data were normalized to the total protein content measured with Qubit4 Fluorometer (Invitrogen, Waltham, MA, USA).
Animal Model
For treatment efficacy studies, two subcutaneous tumors, 5 × 10 6 cells per flank or intraperitoneal (i.p.) tumors (3 × 10 6 cells) were implanted into C57BL/6NHsda mice (N = 6). Animals randomized into groups and treated i.p. by mock treatments (PBS, 100 µL), mouse anti-PD-1 (CD279) (NH BioXCell), or Equal amounts of Ad5-CMV-mTNFa and Ad5-CMVmIL2 (5 × 10 8 VP of each virus per injection). The animals were treated nine times. The tumor size was followed by LAGO live imager by D-Luciferin i.p. injection (PerkinElmer, MA, USA, 3 mg/animal). Luciferin live imaging with LAGO (Spectral Instruments Imaging, Tucson, AZ, USA) was provided by The Bio Imaging Unit of University of Helsinki. The pictures were quantified by AURA (version 4.0.0.). The ROI values for each tumor was measured and percentual difference in growth compared to respective tumor on day 0 was calculated. The mean of % differences within one treatment group is shown. Comparison of TILT-123 to the "backbone vector" Ad5/3 has been carried out before [14] and, thus this group was not analysed in this study.
Experimental protocols and procedures were approved by the ethical committee of the Animal Experimental Board (ELLA) of the Regional State Administrative Agency of Southern Finland, license number ESAVI/28404/2019.
Statistical Analysis
Prism 8 (GraphPad Software, San Diego, CA, USA) was used for statistical analysis. p values < 0.05 were considered significant. Viability tests and T cell maturation status tests were performed in triplicates, while cytokines were measured twice in duplicates. Animal experiments were performed with N = 6.
Anti-PD-1 and Oncolytic Adenovirus Treatment Kills Patient Tumor Cells Efficiently
In order to see the efficacy of TILT-123 treatment alone or in combination with ICIs, ovarian cancer patient samples were infected with TILT-123 and/or treated with anti-PD-1/PD-L1 ( Figure 1A,B). Anti-PD-1 treatment alone did not reduce the viability of the cancer cells. However, both TILT-123 monotherapy and combinatory treatments showed efficient killing; by day 5 the viability of double treated sample dropped to under 40% and by day 7 only 33% of cancer cells were viable in the combination group (Figure 1, mock vs. TILT-123+ anti-PD-1 p < 0.001). Logically, in line with studies by Santos et al., some stromal cells are not infected and killed by the cancer cell specific oncolytic adenovirus treatment and thus a 100% killing cannot be achieved in this test setup [22].
Cell viability was not affected by any ICI treatment alone, while the combination of ICI and TILT-123 caused an efficient cell killing; less than 40% of cells were viable on day 7, with no difference between the ICI + virus groups. The addition of transgenes to the adenovirus vector caused an increase in cell death by 10-20%, depending on the measured time point. This is plausibly due to TNFa as it is a known necrosis factor. The noted limited effect of ICI might be due to systematic reasons, with no lymph nodes with APC to train the immune compartment. Additionally, as it is an in vitro setup, normally (in patents) viable cells start to die, thus the effects of the immune system cannot be seen as potently in this setup.
Oncolytic Adenovirus and ICI Therapy Induced Change in the Cytokine Environment
Ovarian cancer patient samples were analysed for cytokine concentration changes after adenovirus treatments, with or without anti-PD1/anti-PD-L1 (pembrolizumab and atezolizumab) (Figure 2). In 1/3 samples, the IL1b concentration rose markedly on both day 3 and 7 in TILT-123 and combination treated samples; however, this could not be reproduced in the other samples and was not statistically significant.
ICIs and unarmed backbone viruses did not cause an IFNg release. The combination groups and TILT-123 caused a slight trend of higher release of IFNg in 3/3 samples on both day 3 and 7 compared with the control group; however, this was not statistically significant.
IP10 was statistically significantly higher in samples treated with TILT-123 in combination with anti-PD-L1 compared to mock treatment (p = 0.0104).
No statistically significant changes in MCP1, IL12 and IL17 could be measured. Interestingly, out of the anti-inflammatory cytokines ( Figure 2B) TILT-123 treatment caused a slight trend of lowered IL10 secretion in all treated patient samples on day 7, and the combination of TILT-123 and anti-PD-1 (not anti-PD-L1) enhanced it further. Backbone treatment caused a statistically significant difference in release of IL4, while other treatments did not affect the secretion of IL4. and PD-L1 antibodies. All ICI antibodies were used as 20 mg/mL. MTS assay was performed as mentioned above, with same concentrations but several different ICI. Statistics; One way ANOVA (Tukey's multiple comparison) was used for statistics on day 7 (**** p ≤ 0.0001). Data has been gathered as triplicates and is presented as mean + SEM.
In summary, we can see that the combinatorial treatment enhances the concentration of pro-inflammatory cytokines, giving a rationale to TILT-123 + ICI therapy.
TILT-123 and Combinatorial Treatment of Ex Vivo Patient Ovarian Tumor Samples Changes the Activation Status and Number of Patient T Cells
Processed single cell samples of patient tumors were treated with TILT-123 and/or ICI or control treatments, to gain mechanistic insight into the previously noted cancer cell killing. The samples were gated to contain only lymphoid cells and analysed for their activation markers (Figure 3, Gating strategy shown in Supplementary data B).
Samples treated with backbone virus, TILT-123 only, or with both TILT-123 virus and ICI, showed a significantly higher % of CD3+ CD4+ and CD3+ CD8+ T cells compared to mock (p ≤ 0.0001), while treatment with only ICI did not cause a significant effect on day 7.
TILT-123 with or without anti-PD-1 caused similar reactions in both CD4+ and CD8+ cells. In all virotherapy or combination treated groups the expression of CD69+, perforin, granzyme b, lamp-1 and IFNg were statistically significantly higher compared to mock (p ≤ 0.0001). Overall, the results were similar, but not as strong on day 3 (Supplementary data A). Thus, it seems that the virus in itself causes the growth in the immune population.
(a) Interestingly, when a significant rise in the percentage of double positive lymphocytes (CD4+ and CD8+ out of CD3+ cells) in patient samples treated with virotherapy and/or ICI was analysed, a significant a rise in the percentage of double positive cells could be seen compared to negative control (see Supplementary data C).
In order to verify the effect of TILT-123 and anti-PD-1 treatment on tumor growth in vivo, mice were implanted subcutaneously (Supplementary data D) and intraperitoneally ( Figure 4) with ID8-luc2 tumors. The treatments were administered every other or third day for 5 treatments (10 days), and continued once a week until day 38. Tumor growth was measured by luciferase emission. In order to see the effects of TILT-123's cytokines in the murine model, adenoviruses coding for murine cytokines were used. The measurements showed a trend that the subcutaneous tumor burden of virus and combinatorial treatment animals (anti-PD-1 and TILT-123) was reduced compared to anti-PD-1 only. The positive tumor reducing effect was even more evident when intraperitoneal tumor burdens were measured. The normalized burden of anti-PD-1 treated mice was 15 000-fold higher compared with combination treatments (not statistically significant, Kruskal-Wallis anti-PD-1 vs. anti-PD-1 +TILT-123 p = 0,403). Notably, 1/6 TILT-123 treated and in 3/6 combination treated animals (peritoneal results), showed no measurable luciferase signals on day 38 of treatment, which could indicate that the animals might have been cured or that the tumor volume had decreased beneath detectable size.
Discussion
Broadly speaking, tumors can be divided into two different categories according to their immune infiltration and inflammation status: immunologically hot and cold tumors. This classical division has later been refined into a theory of immune infiltrated (hot), immune excluded (infiltration in the periphery of the tumor) and immune deserted (cold) tumor models. The T cells that are already present in hot tumors can be activated with the help of ICIs and, thus, they should theoretically benefit the most from ICI treatment. However, the efficacy of immunotherapies vary even in these tumor types. This can be seen, for example, in the modest results of a phase III clinical trial, where pembrolizumab and ipilimumab regimes were compared. In this study, 834 patients with advanced melanoma were enrolled and randomly assigned to receive pembrolizumab either every 2 weeks (N = 279) or every 3 weeks (N = 277). A third group of patients received ipilimumab (N = 278). In a close to five years' follow up, the median overall survival was 32.7 months in the combined pembrolizumab groups and 15.9 months in the ipilimumab group. Median progression-free survival was 8.4 months in the combined pembrolizumab groups while it was much shorter, 3.4 months, in the ipilimumab group [23].
On the other hand, cold tumors, with few if any T cells, do not usually benefit from checkpoint inhibition, simply because there are only few if any T cells for ICIs to apply their effect. Some results indicate that prostate cancer and pancreatic cancer are typically cold tumors, limiting the benefit gained from immunotherapeutic regimes [11], while ovarian cancer is placed in the analysis somewhere in the middle, as per having a median value of somatic mutations (Alexandrov et al. 2013). Thus, OVs have been employed to attract immune cells to the tumor by modulating the tumor micro-environment to a more ICI permissive state, in other words, for the development of a hot tumor [14]. For example, it has been shown that OVs cause neoantigen release, danger and pathogen associated molecular pattern signaling in addition to their cancer cell killing lytic effects [24,25]. This causes the tumor microenvironment to change and attract lymphocytes while the virus itself also de-bulks some tumor mass. This reaction is then hoped to be further enhanced by the addition of ICIs in this study.
To capture the complexity present in the tumor microenvironments we used human patient samples. While interpreting these findings, it should be noted that results obtained in disaggregated cells cannot automatically be assumed also to apply to whole tumors.
Here, we noted that TILT-123, with or without ICIs, killed ovarian cancer patient tumorderived cells efficiently. One can speculate that the remaining cells could be non-cancerous cells, such as fibroblasts or immune cells. For example, in a study where cancer cell lines were developed from patient samples, authors reasoned that stroma outgrowth was one of the major reasons why they could not always establish cell lines [26]. Additionally, further explanation as to why no synergy was seen could be related to lack of or limited amount of active cytotoxic cells in the sample on which ICI function The main difference affecting the results between TILT-123 and the Ad5/3 are probably due to the cytokine encoding genes (hTNFa and hIL2) that have been added to the E3 region of the oncolytic adenovirus in TILT-123. As TNFa is known as a potent tumor necrosis factor, the local, high concentrations of TNFa in the in vitro study has likely caused some cell killing [27].
IL2 is mainly known for its effects on T cells (differentiation and growth); however, many nonlymphoid cell types also express the IL-2R. Unfortunately, less information on its effect on these cells is available. What is known, however, is that it has a strong impact on gastrointestinal epithelial cells, endothelial cells, and fibroblasts [28]. In studies where IL2 production or its receptor were blocked, effects were seen on mouse smooth muscle, and it also caused vascular leakage through gap formation in endothelial cell layers (loss of cell-to-cell contact) and cell damage [29]. Therefore, it is likely that the IL2 produced by TILT-123 causes some additional loss of cell viability. These differences will probably be highlighted in test setups with immune cells, which additionally react to the produced cytokines. Thus, this test setup showed proof of concept and rationale to continue with the further tests. The positive cancer cell killing result encouraged us to take a further look into the changes in the tumor microenvironment, in order to understand the reaction better. The concentration of IP10 was significantly higher in the group treated with both anti-PD-L1 and TILT-123. IP10 is a chemokine, also known as CXCL10, which attracts several cell types, such as lymphocytes and natural killer cells. Studies have shown that high CXCL10 concentrations in advanced serous ovarian cancer patients correlated with better survival through attraction and infiltration of T cells [30], thus encouraging the use of combinatorial treatments. We measured high concentrations of TNFa, IL-2 and IFNg in samples treated with TILT-123 and/or anti-PD-1. These three cytokines all contribute to positive T cell reactions, such as T cell activation and proliferation. These cytokine measurements indicate that the changes made with the treatments could result in cold tumors becoming hot, which could eventually favor patient outcomes in the context of ICI therapy.
After grouping the measured cytokines into pro-and anti-inflammatory cytokines and seeing the change in their relative expression, it is evident that TILT-123 alone, and the combination treatment with anti-PD-1, triggers the release of pro-inflammatory cytokines, which the ICI treatments alone or the backbone virus were not able to do. Simultaneously, as often noted, every action in immunology triggers a counter response. Thus, all of the treatments, which are historically thought of as pro-inflammatory, also triggered a slight relative rise in the release of anti-inflammatory cytokines.
Activated T cells secrete IFNg, granzymes and have high lamp-1 and CD69 expression. IFNg directly acts as a cytotoxic CD8 T cell differentiation signal, and it is essential for the induction of cytotoxic T cell precursor proliferation. Additionally, IFNg regulates cell surface MHC class II on APCs, thus promoting peptide-specific activation of CD4 T cells. This indicates that, if TILT-123 and ICI treatments are given together, T cells will be primed and activated for killing of cancer cells [31]. As the tumor microenvironment changes are linked to patient outcome, it would be interesting to conduct further studies for more information on the subject.
When studied in vivo, we saw a positive tumor debulking effect when double treatment was given to tumor bearing mice, thus further supporting the use of combination treatment in clinical trials. To compensate for species incompatibilities, we utilized multiple injections, and thus an optimized treatment frequency should be evaluated in future studies, and could further improve results. Interestingly, the luciferase signal for 1/6 TILT-123 treated and in 3/6 combination treated animals decreased below measurable values, indicating that the animals were possibly cured. In this study it was unfortunately not possible to collect tumor samples for, e.g., immunological analyses. However, we consider such analyses valuable and therefore they could be performed in a follow-up study. The comparison of treatments to untreated animals has been performed before, and was thus not conducted in this study [14,20].
All in all, the data in this study suggests that TILT-123 causes a change in the tumor microenvironment. It acts as a two-pronged attack, debulking the tumor by immune stimulation and direct lysis. Of note, recent data suggests that this approach acts also on hard-to-reach distant tumors and metastases, through the abscopal effect [14,32,33]. The lysis of infected cells releases danger and pathogen associated molecules together with pro-inflammatory cytokines, attracting T cells. [20,25]. Then, when T cells infiltrate the tumor, they are more active to cytotoxic actions as the virus induced milieu change triggers them into action. This immune activation and infiltration has also been seen in other adenoviral therapies [34], e.g., in n in vivo mouse model, the adenovirus vector Delta-24-RGDOX. In the study made by Jiang et al., potent anti-glioma activity was reported in immunocompetent C57BL/mice. This was not seen in immunodeficient athymic mice, suggesting specific immune memory against the tumor [35].
When the tumor is simultaneously treated by checkpoint inhibitors, the treatment might function as a boosting agent to upkeep this T cell reaction resulting in a synergistic outcome. As the goal of ICI is to activate the immune system to fight cancer, logically, the administration of ICI has been shown to cause some autoimmune reactions, manifesting usually as mild adverse events [36]. However, this phenomenon can be thought as potentially beneficial in our treatment setup. When the virus lyses cells, TAA and other non-cancer specific antigens are released and T cells react on them, thus the concomitant administration can be thought to boost the efficacy of ICI in this way too.
Thus, this treatment regime could help patients that have hard to treat resistant tumors and could be a solution to the lack of effective therapies available to ovarian cancer patients with platinum refractory disease. Based on the preclinical work reported here, and other pertinent data [14,20,22], a clinical trial is under way, testing the combination of anti-PD-1 and TILT-123 in platinum refractory ovarian cancer patients.
Conclusions
In conclusion, in this preclinical study, we found that TILT-123 treatment in combination with ICI is an attractive new treatment modality for treatment of ovarian cancer in clinical trials (NCT05271318). | 6,227.2 | 2022-08-08T00:00:00.000 | [
"Biology",
"Medicine"
] |
Training differentially regulates elastin level and proteolysis in skeletal and heart muscles and aorta in healthy rats
ABSTRACT Exercise induces changes in muscle fibers and the extracellular matrix that may depend on elastin content and the activity of proteolytic enzymes. We investigated the influence of endurance training on the gene expression and protein content and/or activity of elastin, elastase, cathepsin K, and plasmin in skeletal and heart muscles and in the aorta. Healthy rats were randomly divided into untrained (n=10) and trained (n=10; 6 weeks of endurance training with increasing load) groups. Gene expression was evaluated via qRT-PCR. Elastin content was measured via enzyme-linked immunosorbent assay and enzyme activity was measured fluorometrically. Elastin content was significantly higher in skeletal (P=0.0014) and heart muscle (P=0.000022) from trained rats versus untrained rats, but not in the aorta. Although mRNA levels in skeletal muscle did not differ between groups, the activities of elastase (P=0.0434), cathepsin K (P=0.0343) and plasmin (P=0.000046) were higher in trained rats. The levels of cathepsin K (P=0.0288) and plasminogen (P=0.0005) mRNA were higher in heart muscle from trained rats, but enzyme activity was not. Enzyme activity in the aorta did not differ between groups. Increased elastin content in muscles may result in better adaption to exercise, as may remodeling of the extracellular matrix in skeletal muscle.
INTRODUCTION
Physical activity, particularly endurance training, causes many adaptive changes in the organism. These adaptations mainly occur in skeletal muscles and include changes in metabolism and tissue composition (Röckl et al., 2007). Adaptive changes in the extracellular matrix (ECM) occur at the same time. ECM not only provides scaffolding and structural support for cells and organs, it also exchanges information with cells and thereby modulates cellular development, attachment, and differentiation as well as tissue repair (Hayden et al., 2005;Fonovićand Turk, 2014). ECM remodeling in skeletal muscle influences cellular processes including DNA synthesis, microtubule fragmentation, and myoblast fusion (Calve et al., 2010), all of which improve muscle strength and render tissue more compliant and resistant to damage (Hayden et al., 2005). The ECM is also involved in the regeneration of muscle fibers (Suelves et al., 2002). Elastase, cathepsin K, and plasmin contribute to the remodeling of ECM components, including elastin (Antonicelli et al., 2007), which is mainly responsible for tissue elasticity (Boudoulas et al., 2012); inhibition of ECM-modifying enzymes previously resulted in aberrant muscle regeneration (Vinarsky et al., 2005). Proteolytic enzymes may also directly influence muscle fibers, for instance by inducing apoptosis (Doeuvre et al., 2010).
The aim of this study was to investigate the influence of 6 weeks of endurance training on the mRNA levels of tropoelastin, elastase, cathepsin K and plasminogen in skeletal muscle (soleus) and heart muscle (ventricle) from healthy rats. We also characterized the effect of training on elastin protein levels and the activities of elastase, cathepsin K, and plasmin in muscles and the aorta.
We did not measure mRNA levels in aorta samples due to the small amounts of available material. In the aorta, there were no significant differences in elastin content (UT, n=10; T, n=7) or the activities of the proteolytic enzymes elastase (UT, n=10; T, n=10), cathepsin K (UT, n=9; T, n=10), and plasmin (UT, n=10; T, n=10) in trained rats versus untrained rats (Fig. 3).
All results are presented as medians with min and max in Table 1.
DISCUSSION
The principal finding of this study is that endurance training differentially modulates elastin mRNA and protein content as well as the mRNA expression and activity of proteolytic enzymes in a tissue-dependent manner. Here, skeletal and heart muscle exhibited similar adaptive changes in elastin expression after training; gene expression did not differ between groups, but elastin protein levels were higher in trained rats than in untrained rats. Post-transcriptional modifications may underlie this differential response. In mammalian cells, the correlation coefficient between mRNA and protein levels was previously determined to be <0.5 (Pradet-Balade et al., 2001). Elastin levels may influence the elastic and force-bearing features of the ECM (Lehti et al., 2006). Heart muscle contains few elastic fibers; its physiological compliance stems mainly from cardiomyocytes (Mizuno et al., 2005). Nonetheless, in the myocardial ECM, elastin makes important contributions to the maintenance of structural integrity, the transmission of mechanical stress into and out of myocardial cells, elasticity and compliance during the cardiac cycle, and the prevention of excessive stretching (Kwak et al., 2011).
There are some investigations addressing the influence of physical exercise on elastin mRNA and protein levels in skeletal muscle. Lehti et al. showed that endurance training reversed decreases in elastin transcription in skeletal muscle from diabetic mice but in accordance with the present study, elastin mRNA levels were not affected by training in healthy mouse and sedentary healthy controls (Lehti et al., 2006). Additionally, few studies have evaluated elastin expression and protein content in the heart, and these studies mainly focused on heart failure. Consistent with our Fig. 2. Effect of endurance training on gene expression, and protein content and activity in heart muscle. The mRNA levels of cathepsin K (UT, n=10; T, n=10; P=0.0288) and plasminogen (UT, n=10; T, n=10; P=0.0005) were higher in the heart muscle (ventricle) of trained rats than in untrained rats. There were no significant between-group differences in the mRNA levels of tropoelastin (UT, n=10; T, n=10) and elastase (UT, n=10; T, n=10). Elastin protein concentrations (UT, n=10; T, n=9; P=0.000022) were significantly higher in trained rats than in untrained rats. The activities of proteolytic enzymes did not differ between groups (UT, n=10; T, n=10). The experiments were performed in duplicates, except for elastin protein concentration which was made in single repetition. Error bars express s.d. Mann-Whitney test was used for comparisons. *P≤0.05; ***P≤0.001; ****P≤0.0001.
observations, (Marshall et al., 2013) reported that relative elastin mRNA levels did not significantly differ between Yucatan miniature swine with induced heart failure that exercised versus those that remained sedentary (both healthy control and sedentary with heart failure). In our study, despite similarity at the level of gene expression, elastin protein levels were higher in our trained rats than in our untrained rats, which may reflect an adaptive mechanism in healthy subjects that affects force transmission and the resistance to injury of skeletal muscle after physical training (McHugh, 2003). In heart muscle, this mechanism may contribute to the well-known increase in heart compliance after training (Stickland et al., 2006). The specific roles of elastin in skeletal and heart muscle are not well described in the literature (Fomovsky et al., 2010).
In the present study, post-training changes in proteolytic enzymes differed between skeletal muscle and heart muscle. In skeletal muscle, the mRNA levels of the investigated enzymes were similar in trained and untrained rats, but the activities of elastase, cathepsin K, and plasmin were significantly higher in trained rats than in untrained rats. In heart muscle, the mRNA levels of cathepsin K and plasminogen were higher in trained rats than in untrained rats, but the activities of these enzymes did not differ between groups. The discrepancy between gene expression and enzyme activity observed here may stem from the low coefficient of correlation between mRNA levels and protein levels in mammalian cells (Pradet-Balade et al., 2001). This discrepancy also suggests the presence of a posttranslational mechanism and perhaps other mechanisms that influence enzyme activity. For example, numerous studies have reported decreased activity of plasminogen activator inhibitor-1 in plasma after training (Jahangard et al., 2009).
The roles of elastase, cathepsin K, and plasmin in the adaptation of skeletal muscle to physical exercise are unclear. It is worth mentioning that in our study, proteolytic activity in skeletal muscle coincided with increased elastin levels in the soleus muscle of trained rats, indicating that adaptation does not translate into lower elastin content in soleus muscle.
The elastases belong to the group of serine, metallo-, or cysteine proteases. They degrade elastin and several matrix and non-matrix substrates such as fibronectin, laminin, collagen (types III, IV, and VI), and proteoglycans (Antonicelli et al., 2007;Paczek et al., 2008). While there is little data on the influence of physical training on the generation of elastase in skeletal muscle, single bouts of physical activity are known to increase elastase (Serteyn et al., 2010;Gleeson et al., 1998). Elastase content remained increased in triathletes as long as 19 days after the race (Neubauer et al., 2008).
Cathepsin K belongs to the family of lysosomal cysteine cathepsins; it is involved in the turnover of ECM proteins in many organs, and contributes to cardiovascular disease (including atherosclerosis and aortic aneurysms), inflammation, and obesity (Lv et al., 2013;Podgorski, 2009). In addition, cathepsin K may be a collagenase (Antonicelli et al., 2007) and may play a role in the prevention of muscle fibrosis.
Plasmin mediates blood-clot dissolution and is necessary for myogenesis, muscle regeneration, and hypertrophy (Suelves et al., 2002;López-Alemany et al., 2003). It can degrade several ECM proteins either directly or by activating matrix metalloproteinases 1-3 or 9. Plasmin also drives the inflammatory response (Syrovets and Simmet, 2004;Li et al., 2007). Plasmin may prevent intramuscular fibrin accumulation and contribute to an accurate inflammatory response in muscles after injury (Lluís et al., 2001).
Given these previous reports, we conclude that all of the enzymes evaluated in the present study take part in ECM remodeling and that ECM in skeletal muscle plays a very important role in providing tissue with elastic properties, giving mechanical support to myofibers during muscle contractions, and participating in the transmission of force from myofibers to tendons (Lehti et al., 2006). Additionally, extracellular proteolysis is necessary for the development and regeneration of skeletal muscle. The adaptation of muscle to physical exercise is a complex process that relies, at least in part, on the increased local proteolytic activity observed in the present study. However, we note that despite concomitant increases in gene expression, the lack of change in proteolytic activity in heart muscle that was detected here indicates that adaptation does not take place in heart muscle.
In our study, there were no significant differences in elastin content and enzyme activity in the aorta of trained versus untrained rats. Such results are in line with the results obtained by others. For example, 8 weeks of aerobic training had no effect on aortic elastin content in 6-month-old normotensive rats (Niederhoffer et al., 2000); another study failed to uncover a difference in elastin content between trained rats and sedentary controls (both young and old) after 17-21 weeks of swimming training (Nosaka et al., 2003). Similarly, no training effect occurred in a voluntary running group (Matsuda et al., 1989;Matsuda et al., 1993). Training-induced increases in elastin levels were previously observed in aged mice or hypertensive rats (Moraes-Teixeira et al., 2010; Kadoglou et al., 2011). However, spontaneously hypertensive rats exhibited higher mRNA levels of elastin and markedly higher elastin/collagen content; training effectively corrected the elastin content in the aorta of these hypertensive rats, reducing pulsatility, facilitating buffering, and reducing cardiovascular risk (Jordão et al., 2011). Overall, most previous studies described differences in the elastin content of the aorta in the context of existing pathology or aging, but not in healthy subjects.
Conclusions
Our results indicate that endurance training activates different signaling pathways in various tissues. Increased elastin content may translate into increased compliance; we detected this increase in heart and skeletal muscle but not in the aorta. The activities of enzymes responsible for ECM remodeling increase in skeletal muscle and may function in concert with the adaptation of skeletal muscle to physical training, mainly by this mechanism, but also via direct effects on muscle cells. Such a mechanism was not evident in heart muscle or in the aorta in the present investigation.
MATERIALS AND METHODS
All procedures used in this study were approved by the Ethical Committee of the Medical University in Bialystok, Poland (Resolution No. 23/2011 on the proposal No./dated 27.04.2011) and were performed in accordance with European Union regulations regarding the humane treatment of laboratory animals.
Twenty male Wistar rats were used in this study. The rats had ad libitum access to water and were fed with Labofeed B under a 12 h light/12 h dark cycle. For the first 5 days, rats were subjected to exercise adaptation via a once-daily regime of 10 min of running on a treadmill at 15 m/min. Rats were then randomly assigned to one of two groups: untrained (UT, n=10) or The expression of mRNA for tropoelastin (Eln), elastase (Elane), cathepsin K (Ctsk), and plasminogen (Plg) in skeletal and heart muscle expressed as ΔCT median (min, max) (after the normalization of CT to the expression of GAPDH gene). Elastin protein level and enzyme activities of elastase, plasmin and cathepsin K skeletal muscle, heart muscle and aorta in untrained UT and trained T groups. Results are presented as: * the ratio of elastin concentration to total protein concentration; ** the ratio of enzyme fluorescence to total protein concentration.
trained (T, n=10). Rats in the trained group were subjected to exercise training 5 days per week for 6 weeks. Exercise intensity and duration were gradually increased over time. Initially, sessions lasted 10 min (1200 m/h); this duration was increased 10 min each day during the first week for a final duration of 60 min/day, which was maintained over the rest of the training period. The running speed was 1500 m/h in the second week and 1680 m/h for weeks 3-6. There was no additional running stimulation. The untrained group remained sedentary throughout the training period. The age of the rats at the beginning of exercise was 5-6 weeks. Twenty-four hours after the last training session, all rats were sacrificed under anesthesia (intraperitoneal chloral hydrate, 1 ml/100 mg body mass). The average body mass of rats on the day of sacrifice was 271±11.6 g in the untrained group and 283.17±24.67 g in the trained group. Samples of soleus muscle, heart muscle (ventricle), and aorta were collected and immediately stored at −80°C. Soleus muscle was chosen because it contains a large proportion of type I slow-twitch fibers (Feng et al., 2011). Soleus muscle is primarily recruited during running at the speeds used in our study, while fast-twitch muscles generally are not (Lambert and Noakes, 1989).
We measured the mRNA levels of tropoelastin, elastase, cathepsin K and plasminogen in skeletal and heart muscle. Tropoelastin is a soluble precursor of elastin (Vrhovski and Weiss, 1998) and plasminogen is the inactive precursor of plasmin (Novokhatny, 2008). We also evaluated elastin protein content as well as the activities of elastase, cathepsin K, and plasmin in both muscle types. Only elastin protein content and the activity of proteolytic enzymes were investigated in samples from the aorta due to the small amount of available material.
Total RNA isolation
Approximately 50 mg of heart muscle (ventricle) or soleus muscle were homogenized in QIAZOL (Qiagen, Germany) plus 8 µl proteinase K (Qiagen) in a TissueLyser bead mixer (Qiagen) at 25 Hz in two 5-min repetitions. Total RNA isolation was performed with an EZ1 RNA Universal Tissue Kit and Biorobot EZ1 (Qiagen) in accordance with the manufacturer's instructions. Total RNA concentrations were measured at 260 nm via spectrophotometry (ND-1000 spectrophotometer, NanoDrop Technologies, Inc.). Samples were frozen and stored at −80°C for subsequent analysis.
Quantitative reverse transcription polymerase chain reaction (qRT-PCR) mRNA levels were measured with the ABI-Prism 7500 Sequence Detection System (Applied Biosystems, USA). Specific probes and primers for rat glyceraldehyde 3-phosphate dehydrogenase (Assay ID: Rn01775763_g1), tropoelastin (Assay ID: Rn01499782_m1), neutrophil elastase (Assay ID: Rn01535456_g1), cathepsin K (Assay ID: Rn00580723_m1) and plasminogen (Assay ID: Rn00585167_m1) and the TaqMan One-Step RT-PCR Master Mix Reagents Kit were purchased from Applied Biosystems. mRNA levels were calculated using the comparative cycle threshold (C T ) method. The C T of each sample was normalized to the expression of glyceraldehyde 3-phosphate dehydrogenase (GAPDH), with results reported as ΔC T . According to Pérez et al. GAPDH is optimal gene to be used as reference gene in the heart (Pérez et al., 2007). The relative mRNA levels of the investigated proteins were calculated by subtracting the normalized C T values for the trained group relative to the median untrained value (ΔΔC T =ΔC T , trained−ΔC T ,untrained), and the relative fold change of the mRNA levels of the investigated proteins was calculated as 2 −ΔΔCT (Livak and Schmittgen, 2001).
Tissue homogenization and total protein quantification
Due to the limited amount of sample, homogenization of each sample was performed as follows. All samples were homogenized in water in a TissueLyser bead mixer (Qiagen) and centrifuged twice at 7826 g for 10 min at 4°C. Plasmin activity and elastase activity were assayed directly after centrifugation. Supernatants were stored at −80°C for further analyses of cathepsin K, elastin, and total protein content.
For the determination of elastin levels, samples of heart muscle were homogenized in phosphate-buffered saline in accordance with the manufacturer's (see below) instructions and stored overnight at −20°C. After two freeze-thaw cycles, the homogenates were centrifuged for 5 min at 5000 g. The supernatant was removed and assayed immediately as described below.
Total protein concentration was measured at 562 nm on a BioTek Power Wave XS spectrophotometer (BioTek Instruments, USA) using the bicinchoninic acid Protein Assay Reagent (Pierce, Holland) in accordance with the manufacturer's instructions.
Quantification of elastin levels
Elastin levels were measured in tissue homogenates via enzyme-linked immunosorbent assay (ELISA). Concentrations were measured at 562 nm on a BioTek Power Wave XS spectrophotometer using the Elastin ELISA Kit (EiAab, China). Results are presented as the ratio of elastin concentration to total protein concentration.
Assays of enzyme activity
Enzyme activity was measured using a spectrofluorimeter (LS-50B, PerkinElmer, USA). Fluorescence measurements were made with induction at λ=355 nm and emission at λ=460 nm. The substrate for elastase was Z-Arg-Arg-7-amido-4-methylcoumarin and the substrate for plasmin was Boc-Val-Leu-Lys-7-amido-4-methylcoumarin (Bachem, Biochemica GmbH, Germany). A commercial kit (Cathepsin K Activity Fluorometric Assay Kit, BioVision, Inc., USA) was used to measure cathepsin K activity (substrate Ac-Lys-Arg-amino-4-trifluoromethyl coumarin) with a 400-nm excitation filter and a 505-nm emission filter. Results are presented as the ratio of enzyme fluorescence to total protein concentration.
Statistical analyses
Results are reported as medians with min and max, as mean±standard deviation (s.d.) and as relative fold changes. Differences in mRNA levels (for statistics, ΔC T was used) and protein levels between groups were analyzed with the Mann-Whitney U-test. P-values <0.05 were considered to be statistically significant. | 4,250.4 | 2016-04-11T00:00:00.000 | [
"Biology"
] |
Diamond step-index nanowaveguide to structure light efficiently in near and deep ultraviolet regimes
Two-dimensional metamaterials, consisting of an array of ultrathin building blocks, offer a versatile and compact platform for tailoring the properties of the electromagnetic waves. Such flat metasurfaces provide a unique solution to circumvent the limitations imposed by their three-dimensional counterparts. Albeit several successful demonstrations of metasurfaces have been presented in the visible, infrared, and terahertz regimes, etc., there is hardly any demonstration for ultraviolet wavelengths due to the unavailability of the appropriate lossless materials. Here, we present diamond as an ultra-low loss material for the near and deep ultraviolet (UV) light and engineer diamond step-index nanowaveguides (DSINs) to achieve full control over the phase and amplitude of the incident wave. A comprehensive analytical solution of step-index nanowaveguides (supported by the numerical study) is provided to describe the underlying mechanism of such controlled wavefront shaping. Due to the ultra-low loss nature of diamond in near and deep UV regimes, our DSINs and metasurfaces designed (from them) exhibit a decent efficiency of ≈ 84% over the entire spectrum of interest. To verify this high efficiency and absolute control over wavefront, we have designed polarization-insensitive meta-holograms through optimized DSINs for operational wavelength λ = 250 nm.
Scientific Reports | (2020) 10:18502 | https://doi.org/10.1038/s41598-020-75718-x www.nature.com/scientificreports/ Metasurfaces consisting of metallic materials are successfully demonstrated for the wavelengths ranging from terahertz (THz) to near-infrared (NIR) domains [26][27][28] . These plasmonic metasurfaces suffer from temperature instability, intrinsic Ohmic losses, chemical inertness and non-compatibility with CMOS technologies 29 . Above mentioned limitations substantially deteriorate their performance for visible to ultraviolet spectrums, where a diverse range of practical applications is of interest. For shorter wavelengths, however, Aluminum (Al) possesses potentially better response 30 , again induced oxidation and complex fabrication requirements while scaling down have degraded their overall performance 31 . Moreover, the absence of magnetic dipole resonance in transmissiontype plasmonic metasurfaces has significantly depreciated their overall transmission efficiency and restricted it to 25% in the visible regime 32 . In parallel, high-index dielectric materials possessing a transparent window (k ≈ 0) in the region of interest and excitation of both (electric and magnetic dipole) resonances have dominant advantages over these plasmonic counterparts for a wide range of optical wavelengths. Based upon the concept of index waveguide theory or Mie resonances, these all-dielectric metasurfaces appearing as a best-suited candidate for the realization of transmission-type efficient solutions. In this regard, lossless dielectric materials like gallium nitride (GaN), titanium dioxide (TiO 2 ), silicon nitride (Si 3 N 4 ) and hydrogenated amorphous silicon (a-Si:H) present themselves as ideal contestants and successfully employed to realize numerous applications in infrared and visible spectrums [33][34][35][36] . However, these dielectric materials offer significant absorption for the near and deep ultraviolet regime, also sophisticated fabrication techniques required to implement these metasurfaces have hampered their integration with practical applications. So, the hunt for an appropriate material which ensures an efficient and miniaturized solution in the near and deep ultraviolet regimes is going on.
To find an appropriate material with a transparency window in the desired spectral band, Fig. 1 illustartes the energy bandgap characteristics of three different dielectric materials diamond, titanium dioxide, and hydrogenated amorphous silicon (a-Si:H), respectively. With the help of bandgap energy, cutoff wavelength for any material can be calculated using c = h * c E g , where h is the Planck's constant, 'c' represents the speed of light and E g is the bandgap energy. The transparency window can be defined as the range of wavelengths for which the extinction coefficient of the materials is zero (k ≈ 0) and it contains all the values greater than the cutoff wavelength (λ c ). Figure 1 predicts that as compared to the other two dielectric materials (TiO 2 and a-Si:H) diamond exhibits a smaller cutoff wavelength ( c = 226 nm) which validates its applicability for the near and deep ultraviolet wavelengths. This specific behavior of diamond material is also validated from its optical characteristics described in Fig. 3a.
Most of the other dielectric materials (e.g., CaF 2 , SiO 2 , and MgF 2 ) possessing a transparent window (k ≈ 0) for ultraviolet wavelengths, have a lower index of refraction which requires costly and challenging fabrication techniques. Low refractive index materials are not difficult to fabricate themselves but it is due to the fact that the unit cell made of these materials, requires a larger height-to-diameter ratio (aspect ratio) for the realization of complete (0-2π) phase control. As the refractive index of dielectric material increases, for complete phase coverage, the required height of the unit cell decrease, eventually resulting in a lower aspect ratio which facilitates the fabrication and vice versa. This point can also be understood with the help of the following mathematical calculation: www.nature.com/scientificreports/ Here φ = 2π d · n eff · H is used to calculate the propagating phase where 'n eff ' is the effective refractive index 'H' is the thickness of the material and ' d ' is the design wavelength. In the above calculation, two dielectric materials with different thicknesses and refractive indices ( n = 1.5, H 1 = 1000 nm , and n = 3.25, H 2 = 400 nm ) are chosen for the clarity of the concept. It is obvious from the above calculation that a dielectric material with a lower refractive index (n = 1.5) is unable to control complete (360 o ) phase profile even with a height of 1000 nm whereas other dielectric material with high refractive index (n = 3.25) can easily accumulate complete phase profile with height of 400 nm. It is concluded that to ensure miniaturized on-chip devices, dielectric materials with a high refractive index would be an ideal candidate. A detailed analysis of few other dielectric materials is presented in Table 1 which shows that diamond and silicon nitride (Si 3 N 4 ) possessing a transparency window for the desired spectral band while others not, however, Si 3 N 4 possesses a lower index of refraction as compared to diamond.
The idea of using diamond as an efficient dielectric material for metasurfaces working in the ultraviolet regime has been proposed and few research papers are reported 38,43 . Hu, J. et al. reported full-field simulations-based high-quality factor diamond metasurfaces that enhance optical chirality by over three orders of magnitude in the ultraviolet regime 38 . The diamond nanostructures enable ultraviolet Mie resonance while the asymmetry in adjacent disk lattice activates high-quality factor resonances that significantly enhance the circular dichroism (CD) and increase the electromagnetic field intensities. This research work utilized 60 nm thick diamond nanopillar as a fundamental building block. Spectral overlapping of the dipole modes exhibits a Kerker like condition where the transmission approaches unity. This research work presents a totally different phenomenon from our proposed phase-dictated device in which complete (0-2π) phase control is acquired by spatially varying the physical dimension of the unit cell. In another research work, Huang, T. et al. leverages diamond as a high refractive index ( n ∼ 2.4 at visible wavelengths) material to design and fabricate high-numerical-aperture alldiamond immersion metalens. A fundamental building block consisting a 1 µm thick diamond pillar extending from the surface of the single-crystal diamond is chosen to build the metasurface. The fundamental building block is optimized and the metasurface is designed for an operational wavelength of d = 700 nm.
Here, we utilize diamond as an ideal dielectric material, for the near and deep ultraviolet light, to realize highly efficient phenomena through artificially engineering it (diamond). Its sufficient high refractive index and absolute transparent window for our desired range of wavelengths make it a best-suited candidate for the realization of compact devices. From Fig. 1, it is safely stated that our proposed diamond material is the best-suited candidate for the implementation of all-dielectric metasurfaces in the near and deep ultraviolet regimes with sufficient high index of refraction and absolute zero extinction coefficient. We propose diamond step-index nanowaveguides (DSINs) to efficiently control the phase and amplitude of the incident UV light desirably. Theoretical modeling of DSINs is presented in detail and the underlying mechanism to control the phase through varying the diameter (of DSINs) is provided comprehensively. Numerical optimization is performed to validate the proposed theoretical modeling which enables complete (0-2π) phase coverage (with maximum possible transmission amplitude) by spatially varying the diameter of DSINs. To verify this absolute control over amplitude and phase of the incident light's wavefronts, the optimized DSINs are used to implement the polarization-insensitive meta-holograms for near and deep UV regimes (particularly for an operational wavelength of λ = 250 nm). As compared to existing dielectric materials ( TiO 2 , Si 3 N 4 , GaN and a-Si:H), diamond exhibits very high efficiency for the wavelengths of interest while its sufficiently high refractive index keeps the aspect ratio ( AR = 5.7 for our case) within the fabricate-able limits. Our meta-hologram exhibits the words "NUST" and "ITU" for wavelengths λ = 250 nm
Theoretical modelling of DSINs and their underlying mechanism for absolute wavefront engineering
The ultimate goal is to obtain full control over the wavefronts of UV light through spatially varying the diameter of meta-atoms. Any phase-dictated phenomena can be implemented if such dynamic wavefront control is available. Our proposed DSINs can provide such control for the UV light. To understand the underlying physics of this control, we comprehensively developed the theoretical modeling for our DSINs. Figure 2a shows the structure and index profile of the step-index waveguide where the core is characterized as DSIN, "a" and "c" represent the radii while n a and n c indicate the refractive indices of cladding (air) and core (diamond), respectively. Strong confinement of propagating modes within the waveguide is ensured by the significant difference of the refractive index of the DSIN and surrounding medium (air). Standard Maxwell's equations can be helpful in determining the transverse and longitudinal components of the electromagnetic field within the DSIN. Detailed derivation can be found in 18,37 . According to the indexed waveguide theory, by adjusting the dimension (diameter in our case) of the index waveguide, the effective refractive index of the propagating mode can be modified, which can also be verified from the mathematical derivations provided below. At the core clad interface (r = a), boundary conditions must be satisfied and continuity of E φ , E z , H φ , and H z field components yield the following relationship 1-4) can be determined for S, T, U and V, provided the determinants of their coefficients vanishes. By enforcing the above condition, the following mode matching equation can be achieved. This relationship will help us in achieving " β , " the unknown propagation constant.
Here " β " is unknown and k 0 is a free space wavenumber. In Eq. (5), the parameters f , g and β are intermixed, forming a transcendental equation. Due to the quadratic nature of Eq. (5), two different types of solutions are possible. By considering the most general case, when ℓ = 0 , all six electromagnetic field components will be present (non-vanishing), conventionally designating these modes as hybrid (HE and EH) modes 37 . These solutions can be represented as.
For HE modes For EH modes: where Above mention Eqs. (6-7) can be solved graphically where each side of the equation can be plotted as a function of fa. Here we consider ga 2 = n 2 c − n 2 a k 2 0 − fa 2 . For ℓ = 1, graphical determination of the propagation constants for HE and EH modes are expressed in Fig. 2b,c, which indicates the two curves representing the two sides for HE and EH mode condition equations. Here for a particular case, we consider n c = 2.48,n a = 1, a = 80 nm and = 350 nm which results in normalized frequency W = k 0 a n 2 c − n 2 a as 3.259. Figure 2b shows that, for the above-listed values, only fundamental mode (HE 11 ) is propagating as there is only one intersection available. It is also evident from Fig. 2b that HE 11 mode has no cut off wavelength; in other words, this mode always be propagating regardless of the value of normalized frequency W. Similarly, Fig. 2c indicates that for these particular values, there is no EH propagating mode because there is no intersection point of two curves is available. On the same lines, as the dimension of indexed waveguide increases, additional higher-order modes start appearing and this phenomenon is depicted in Fig. 2d where normalized propagation constant is plotted vs. diameter of the nanopillar. All other higher-order EH 1m and HE 1m modes have specific cutoff values but fundamental mode HE 11 doesn't have any cutoff value. The cutoff values for other higher-order EH 1m and HE 1m modes for a given by.
For EH modes For HE modes www.nature.com/scientificreports/ Presence of both E z and H z components validate the existence or propagation of hybrid modes. The designation of HE and EH modes is purely based on their relative contribution to a transverse component of the electromagnetic field at a particular reference point. A propagating mode is designated as HE ℓm mode if H z plays a significant role and vice versa. Propagation constant β is an essential characteristic of any propagating mode, which is eventually a function of normalized frequency W (or frequency ω). Figure 2d shows the behavior of the mode index of the confined mode as a function of the diameter of the DSIN. The relationship between the mode index and phase constant of propagating mode can be described as Here n eff is called an effective refractive index of confined mode which varies between n c and n a . Subsequently, according to indexed waveguide theory, the phase imparted by each DSIN of specific diameter can be calculated in terms of effective refractive index as: where d is the design wavelength.
DSINs based highly efficient near and deep UV meta-holograms
The realization of phase-only meta-holograms requires the utilization of complete phase distribution obtained through numerical optimization of the fundamental building block (here diamond step-index nanowaveguide) in such a way to demonstrate the maximum possible transmission amplitude for desired operational wavelengths. Here in this work, DSINs with the thickness of H = 400 nm are numerically optimized and used as building blocks for near and deep UV wavelengths. Figure 3a describes the complex refractive index of the diamond material for wavelength ranging from 100 to 700 nm 39 , which shows that diamond possesses an absolute transparent window (k ≈ 0) for wavelengths greater than 200 nm. The choice of circular cylindrical geometry of the DSIN is due to its unique property of polarization-insensitivity. Figure 3b shows the DSIN patterned on the glass substrate where D and H represent its diameter and height, while U is the lattice constant of the unit cell.
Although holography was invented back in 1948 by Dennis Gabor 41 , recently, it shows significant potential and successfully employed for numerous versatile practical applications like optical manipulation 42 , information encryption 43 , biological imaging 44 , data storage 45 , 3D displays 46 and so on. Generation of the holographic image using metasurfaces essentially requires complete phase control via meta-atoms, where some iterative Fourier transform algorithm 47 can be utilized to perform the numerical calculations to determine the amplitude and phase distribution of the diffracted light. Finally, the phase-only meta-holograms are constructed by translating the above extracted discrete phase distribution through spatial variation of geometric parameters of metaatoms 48 . Inspired from the step-indexed waveguide concepts, for a design wavelength of 250 nm, the imparted An iterative Fourier transform algorithm proposed by Gerchberg and Saxton also called as Gerchberg-Saxton algorithm 47 , is frequently used for phase retrieval to be used in the meta-holograms. For design wavelength d = 250 nm , GS algorithm script is provided with period U = 140 nm, focal length as 14 μm along with target image, which produces an array of 175 × 175 phase elements to be utilized for structuring the meta-hologram with 12.2 μm × 12.2 μm in size. Reconstructed holographic image showing images "NUST" and "ITU" at the desired focal plane is illustrated in Fig. 4c,d. Obtained results are in excellent agreement with MATLAB results validating the proposed concept.
For a design wavelength of 300 nm, the surface plot of the phase profile obtained through numerical simulation by spatially varying the diameter of DSIN ranging from 70 to 110 nm is depicted in Fig. 5a. The dashed line indicates the optimized value of the period as U = 165 nm covering the complete (0-2π) phase profile. Figure 5b illustrates the numerically simulated behavior of the transmission profile by varying diameter of DSIN (ranging from 70 to 110 nm) vs. period of the unit cell achieving an average 89.5% transmission efficiency. For the design wavelength of 350 nm, the complete phase coverage is obtained through numerical simulations by spatially sweeping the diameter of DSIN and illustrated in Fig. 5c. For a range of diameter from 76 to 140 nm, complete (0-2π) phase distribution is achieved where the dashed line indicates the optimized value of period as U = 190 nm. Figure 5d presents the simulated behavior for transmission profile by varying diameter of nanopillar vs. period of the unit cell achieving an average 81.2% transmission efficiency. Finally, for a design wavelength of 400 nm, for the value of diameter ranging from 76 to 172 nm, the 2D plot of the acquired phase profile is illustrated in Fig. 5e where the dashed line indicates the optimized value of periodicity as U = 230 nm. Figure 5f presents the numerically simulated behavior of transmission profile by the sweeping diameter of the DSIN vs. period of the unit cell achieving an average transmission efficiency as high as 96.5%. www.nature.com/scientificreports/ of the dominant mode can be represented as n eff = β/k 0 , subsequently, according to indexed waveguide theory, the phase imparted by each DSIN of specific diameter can be calculated in terms of effective refractive index as φ = (2π/ ) · n eff · H, where represents the operational wavelength. Due to limited computational resources, meta-holograms consisting of an array of 175 × 175 pixels is numerically simulated and obtained results are presented in the above section.
Conclusion
In conclusion, to fulfill the gap generated by the absence of the appropriate lossless dielectric material for near and deep ultraviolet wavelengths, we proposed diamond material as a best-suited candidate to demonstrate highly efficient phenomena. Based on the concept of index waveguide theory, a comprehensive analytical study (supported by the numerical simulations) of circular cylindrical diamond step-index nanowaveguides (DSINs) is presented. Finite-Difference Time-Domain technique based numerical simulations are performed for optimization of DSIN to achieve the efficient control of phase and amplitude of impinged ultraviolet light. To validate the proposed analytical modeling and to verify the acquired phase control along with maximum possible transmission amplitude, highly efficient polarization-insensitive meta-hologram is demonstrated for ultraviolet regime (particularly for an operational wavelength of = 250 nm). Holographic images "NUST" and "ITU" are reconstructed in the far-field region possessing sufficient high transmission efficiency and image fidelity. Due to limited computational resources, metasurface having an array of 175 × 175 elements is numerically simulated to demonstrate the meta-holograms.
Data and material avability
All data required to aveluate the findings of this work is available in the presented paper. Addational data related to this work may be requested from the authors. All data and analysis details presented in this paper are avialable upon request to F.A.T. | 4,599 | 2020-10-28T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Cropped Quad-tree Based Solid Object Colouring with Cuda
In this study, surfaces of solid objects are coloured with Cropped Quad-Tree method utilizing GPU computing optimization. There are numerous methods used in solid object colouring. When the studies carried out in different fields are taken into consideration, it is seen that quad-tree method displays a prominent position in terms of speed and performance. Cropped quad-tree is obtained as a result of the developments seen with the frequent use of this method in the field of computer sciences. Two different versions of algorithm which operate recursively on CPU and at the same time which use GPU computing optimization are used in this study. Besides, OpenGL is used for graphics drawing process. Within the setting of the study, results are obtained via CPU and GPU's, at first using Quad-Tree method and then Cropped Quad-Tree method. It is observed that GPU computing is obviously faster than CPU computing and Cropped Quad-Tree method produces rapid results compared to Quad-Tree method as a result of performance. GPU computing method boosted approximately performance by up to 20 times compared to only CPU usage; furthermore, cropped quad-tree method boosted approximately performance of algorithm by up to 25 times on average dependent on screen and object size.
INTRODUCTION
Developments seen in the fields where computer technology has been used require shorter durations of processing time on huge data sets for solving problems.Design of the systems operating faster becomes obligatory as a result of rapid increase in data sizes.Data processing speed of the systems becomes more important, whereas shorter response period is expected due to the data processing speed.Solutions to scientific problems, especially, to engineering problems are obtained by powerful computers running in parallel.
In the recent years, parallel computing and its applications become widespread in computer industry.Processing of data on graphics processing units (GPU) occurs as a new technology besides the use of central processing units on data processing.Although the studies carried out on GPU's are not new, GPU's are included in current computing areas as a new field.GPU's are generally optimized for computer graphics processes which require rapid calculation such as computer games and images.Despite the fact that the powers of high arithmetical calculations of GPU's mark out for a brilliant future, they include some limitations for programmers.They are almost used in every desktop pc, laptop pc, game console and mobile devices as a standard part of them.
When compared to CPU, they have higher memory bandwidth and floating point [1].Nvidia developed CUDA (Compute Unified Device Architecture) programming model which enables software developers to use parallel computing by utilising C programming language.CUDA programming model allows programmers to use multithreaded GPU's effectively in parallelisation.This model enables thousands of threads to run synchronously on GPU.Parallel computing is provided by the fine organization of threads, blocks and grids [2,3].CUDA eliminates all difficulties since it creates parallelism manually.A program written with the support of CUDA is a series of a program sequence called as kernel.GPU makes this kernel parallel by duplicating it in requested numbers and running.Since CUDA is an extension of C programming language, there is no need to change their architectures in order to generally direct programs to CUDA library or make them multi-threaded [4].
There is not enough study on colouring solid objects with Cropped Quad-Tree method utilising GPU computing.But there are many studies related to this topic.An algorithm design was developed from a display of binary image series to generate a quad-tree in a study.Algorithm was carried out only one process on every pixel in image.In addition to this, when the tree-data structure is being generated, only maximum size of nodes are generated and therefore temporary nodes are not needed.Running duration of algorithm becomes equal to the numbers of pixels in the image [5].In another study, representation of image with Quad-tree in deeper levels, in other terms, gradually decreasing sub-divisions, was studied.Within the scope of the study, an algorithm was given for superposing N quad trees in time proportional to the total number of nodes in the trees.Warnock-type algorithms were then presented for building the quad tree for the picture of the boundary of a polygon, and for colouring the interior of such a polygon [6].
In another study, relational-linear Quad-Tree approach for two dimensional spatial representation and manipulation was presented as a new approach.This approach unifies relational database models and the advantages of hierarchical data structures.Moreover, this approach offers flexible and powerful tools for spatial data structures and manipulation.Another advantage of this approach is that the rules are obviously clear and easily applicable [7].An algorithm was presented for constructing a Quad-Tree for a binary image given its row-by-row description.Within the study, the algorithm processes the image one row at a time and merges identically coloured children as soon as possible, so that Quad-Tree which is a minimal size exists after processing the each pixel.According to the study, this method is superior to one which reads in an entire array and then attempts to build the Quad-Tree [8].In another study, fast algorithm design operates on GPU for Quad-Tree structure was developed.
Three different implementations were realised for algorithm.These are completely GPU based implementation, CPU based sequential implementation and hybrid implementation.In hybrid implementation, first levels are constructed on CPU before data transfer to GPU in order to perform the rest of the stages.At the end of the study, it was seen that hybrid implementation provides better performance compared to others on sufficiently large datasets [9].In another study, key factors in design and evaluation of image processing algorithms on the massive parallel graphics processing units (GPUs) using the compute unified device architecture (CUDA) programming model was studied.Within the settings of the study, a set of metrics especially customized for image processing was proposed to quantitatively evaluate algorithm characteristics.Besides, the algorithms were carefully selected from major domains of image processing.It was seen that the speeds observed varies according to the characteristics of the algorithms.Intensive analyses were conducted to show the appropriateness of the proposed metrics in predicting the effectiveness of an application for parallel implementation [10].A novel algorithm is presented to solve dense linear systems using CUDA.According to results of this study, GPU computation approximately worked 3 times faster than the CPU computation.This implementation provides significant performance improvement and can easily be used to solve dense linear system [11].An implementation is proposed for quad-tree based solid object colouring using CUDA.The computation studies were evaluated for different solid objects and a better performance was obtained with GPU computing.According to results, GPU computation was 20 times faster than the CPU computation [12].
In this study, solid objects were coloured with Cropped Quad-Tree method utilising GPU computing optimization.Although, there are many methods used for colouring solid objects, when the studies carried out in different fields are taken into consideration, it is seen that Quad-Tree method displays a prominent position in terms of speed and performance.Within the setting of the study, results were obtained via CPU and GPU's, at first using Quad-Tree method and then Cropped Quad-Tree method.It was observed that GPU computing is obviously faster than CPU computing and Cropped Quad-Tree method produces rapid results compared to Quad-Tree method as a result of performance results obtained from the use of two methods.
MATERIALS AND METHODS
Quad-Tree and Cropped Quad-Tree methods are implemented to present solid objects.CPU and GPU computing is realized for comparisons of methods.
Presentation of Solid Objects by Using Cropped Quad-Tree Method
Quad-Tree is a tree data structure in which each internal node has four children.It is a structure which is used to organise pixels in the processes performed on images and computer graphics.Thousands, even millions of records can be stored within this structure.Each leaf node should not be obliged to contain a record but more than half of them should contain a record.Quad-Tree is a unique algorithm used in studies on locating pixels in two-dimensional image.Images are divided into quad parts and each quad is again divided into quads.They are generally classified according to data type they represent such as area, point, line and curves.In our study, we used area Quad-Tree structure which is appropriate for data type that should be represented.Partitioning of two dimensional spaces, dividing of the specific region or divided sub-regions into four equal quadrants can be represented with area Quad-Tree method.Each node in tree structure has either four children or no children [13,14,15].Cropped Quad-Tree method is the enhanced version of Quad-Tree method.Here, the minimal screen part where the object is located is determined instead of performing operations on the entire image.Later, division operation is performed only within the window determined previously, in this way, adscititious processes are avoided.Consequently, benefit is obtained in algorithm in terms of speed dependent upon the size of the object on image.The structure related to Cropped Quad-Tree structure that we used in the study is shown in Figure 1.On the left, the minimal screen part which can be represented with the object on image is selected and surrounded by the red dots.The representation of data structure of Cropped Quad-Tree related to the object which was selected on the screen is given on the right.Figure 1.Data structure of cropped quad-tree A Quad-Tree having a depth of N can be used to represent an image consisting of 2nx2n pixels where each pixel value is equal to either 0 or 1.The entire object located on image is represented by root node.If the pixels in any region are not completely 0 or 1, relevant region is again divided.In this structure, each leaf node should include a pixel block consisting of 0s or 1s.The division operation is carried out until each leaf contains only one pixel.If the region consists of pixels having same value, there is no need to divide the region again [16,17].Application of Cropped method besides Quad-Tree method provides computational easiness.Furthermore, it is more advantageous to store only the region where the object is located in the image in the memory.As a result, it can be said that images can dynamically be represented with Cropped Quad-Tree method and this is more appropriate for image processes.
Gpu Computing and Cuda Programming Model
GPU's have been used for general programming purposes in the recent years and high speed performances have been obtained in many applications.The issue of GPU programming has not been limited with only graphics and game applications; moreover, it begins to attract the attentions of users from many different fields and also it provides many opportunities for new applications besides providing high speed in computations.GPU computational model is the use of a GPU in scientific and engineering problems.GPU computation is, in other terms, to use CPU and GPU together with in heterogeneous calculation model.Heterogeneous programming is based on the idea of independently utilising CPU and GPU which are the primary main processors of a PC according to the type of the application in order to obtain maximum efficiency from the applications.Ordered part of the application is operated on CPU and computationally predominantly computational part is operated on GPU.CPU gives the best results in serial operations as a result of many parses and random memory access.On the other hand, GPU is an expert in parallel processing with floating point operations.Briefly, the best results in serial processes are obtained via CPU and the best results in parallel processes are obtained via GPU.Heterogeneous programming is related with the issue of utilising the appropriate processor for the appropriate process [18,19,20].
GPU floating point performance has achieved TeraFlop levels in the recent years with technological developments.Nvidia provides faster structure on GPU compared to CPU besides floating point operations per seconds and performance increases on chip bandwidths.GPU provides perfect computing power with its high parallelism and multi-threaded structure and multi-core processor architecture.Products with higher memory bandwidth have been developed by taking the demands of the users into consideration.Maximum Flops values of CPU and GPU's are given on left and memory bandwidths of CPU and GPU's are shown on the right in Figure2.Floating-point operations amount per second accessed by GPU reach to higher values rapidly.For instance, in single sensitivity computations, CPU processors can operate at maximum 475 GFLOPS level while Nvidia GeForce GTX 680 GPU processor operates at 3100 GFLOPS level.Similarly, in double sensitivity computations, CPU processors can operate at maximum 240 GFLOPS level while Nvidia Tesla C2050 GPU processor operates at 515 GFLOPS level.In the figure given, GPU bandwidth also displays higher increase compared to bandwidth of CPU.As it can be seen from these values, GPU's are rather speedy processors.Therefore, GPU's, which we selected for parallelisation, make many parts of our algorithms composed of synchronously and benefit from the advantages of high computation power of GPU.In this study, significant speed acquisitions are obtained as a result of colouring designed objects with Cropped Quad-Tree method.NVIDIA makes necessary changes on GPU in order to be completely programmable in scientific applications and added support for high level programming languages such as C and C++ in order to allow users use the performance obtained in various widespread platforms.This effort resulted in the development of CUDA architecture for GPU.CUDA allows users to programme GPU with various high level programming languages as software and hardware architecture.This parallel programming model allows programmers to solve problems by dividing it into sub-problems that can be solved independently in parallelization [18].NVIDIA gives support to users to programme GPU with C, C++, Fortran, OpenCL and DirectCompute.We developed a new algorithm design using C++ programming language in our study.
It is known that CUDA is used for calculation, data generation, and image manipulation, on the other hand, OpenGL is used to draw pixels or vertices on the screen.CUDA and OpenGL share data through common memory in the frame buffer.OpenGL buffer, texture, and render buffer objects are the OpenGL resources that may be mapped into the address space of CUDA.Sharing memory between CUDA and OpenGL can be realized by the interoperability API, as a result the particle system can be updated using CUDA, and can be rendered from the same memory using OpenGL [20].In our study the results are displayed using OpenGL graphic functions.
CPU and GPU Design of Cropped Quad-Tree Based Solid Object Colouring
At first, the version of Cropped Quad-Tree algorithm operating on CPU was developed in the study.Later, kernel function which will operate on GPU was designed after the determination of parts which will be parallelised on algorithm.It is important to determine the intersection points of object when the screen is divided and the coordinates of the points.Furthermore, functions which will decide whether any specified point is located within a known area or not should be defined.As a result of the reasons mentioned above, the straight line denoted by xs, ys, xf and yf points also y coordinate of the point intersected by the line of screen dividing line whose x coordinate is known depicted in Figure3 are calculated by the formulae given in equation ( 1) and (2).The order of the points should be in the order given in Fig. 6 when performing operations on the objects.Points which are in this order in normal circumstances change their locations after 3D rotation.The order of these points should be rearranged.An ordering function was designed for this order: At first, points are ordered according to y coordinate in a descending order. X coordinate of 0. should be lower than 1.Otherwise, points are shifted. X coordinate of 3. should be lower than 2. Otherwise, points are shifted.Cropped Quad-Tree algorithm which was prepared provides benefits in terms of speed by computing the minimal screen part where the object is located and by performing the division operation only within the window determined instead of colouring the object by dividing the entire screen; furthermore, it prevents performing extra operations.The coordinates of the minimal rectangle which includes the shape are found out in order to crop the shape.The steps of the algorithm of the function which realises these operations are given below.Necessary parallelisation operations are carried out on the functions used and they are prepared to be able to operate on GPU after the design of the algorithm which can operate on CPU.Kernel function which will operate on GPU is designed to cover the program blocks which will operate parallel.Whether there is a vertex line within each quadrant within algorithm is checked, the minimum and maximum values for each quadrant and also the minimum and maximum values of vertex coordinates of the object is compared with each other.Algorithm steps of the Kernel function which perform these operations are given below.Before invoking Kernel function, the transfer of data which will be used within this function was realised from CPU to GPU.After running Kernel function and performing calculation operations, the transfer of data from CPU to GPU was realised.After memory area allocated on GPU, data related to object were displayed in screen via OpenGL libraries.(Gpu1: Nvidia Gtx560ti, Gpu2: Nvidia Quadro2000) In addition, graphical representation of performance of CPU and GPUs with cropped quad-tree algorithm is given in Figure7.As a result, we propose an implementation for cropped quad-tree based solid object colouring using CUDA.We have tried our study on different systems that have different GPUs and CPUs.The computation studies were also evaluated for different solid objects.When we compared the results obtained from both systems, a better performance was obtained with GPU computing.
Figure 2 .
CPU and GPU comparisons (a) The maximum number of FLOPS with CPU&GPU, (b) Memory bandwidth for the CPU&GPU from 2003 to 2012 [20].
function CropImage{Takes Points as a Parameter} Initialize Parameters Set Minimum-Maximum X-Y Coordinates Search for Min-Max X Coordinates for All Points Search for Min-Max Y Coordinates for All Points end functionAfter the crop process, screen dividing operations are initialised on the screen cropped with Quad-Tree algorithm and the object is coloured.A sample screenshot is given in Figure6.
Figure 6 .
Figure 6.Sample output screen of cropped quad-tree algorithm.
Figure 7 .
Figure 7. Performance of CPU and GPUs with cropped quad-tree algorithm After the design of the Kernel function which will operate on GPU, CropQuadtree function which will operate in main function was prepared.Program blocks operating on CPU and which cannot be parallelised were included within this function.CUDA library which were included and necessary CUDA parameters were defined within main function.
Table 2 .
CPU and GPUs performance of quad-tree and cropped quad-tree based solid object colouring method | 4,247.6 | 2013-12-01T00:00:00.000 | [
"Computer Science"
] |
Corrosion and Wear-Resistant Composite Zirconium Nitride Layers Produced on the AZ91D Magnesium Alloy in Hybrid Process Using Hydrothermal Treatment
: The aim of the study was to investigate the possibility of an effective improvement in performance properties, including corrosion and wear resistance of magnesium AZ91D alloy using a surface engineering solution based on zirconium nitride composite surface layers produced on AZ91D alloy in a hybrid process using hydrothermal final sealing. Research results show that the formation of a composite ZrN-Zr-Al-type zirconium nitride layer on zirconium and aluminum sublayers results in a significant increase in resistance to corrosion and wear. The decrease in chemical activity of the sealed zirconium nitride composite layer on AZ91D, expressed by the displacement of the corrosion potential in the potentiodynamic test, reaches an outstanding value of ∆ E corr = 865 mV. The results of the SIMS chemical composition analysis of the layers indicate that the sealing of the composite layer occurs at the level of the aluminum sublayer. The composite layer reduces wear in the Amsler roll on block test by more than an order of magnitude. The possibility of effective sealing of zirconium nitride layers on the AZ91D alloy demonstrated in this study, radically increases the corrosion resistance and combined with the simultaneous mechanical durability of the layers, is of key importance from the point of view of new perspectives for application in practice.
Introduction
Magnesium alloys, with the lowest density (1.84 g/cm 3 ) of all metallic materials, have a number of advantageous performance properties, such as high specific strength, vibration damping ability, effective shielding of electromagnetic fields, and biocompatibility, as well as advantageous technological properties, especially casting properties and good thermal conductivity predestining them for die casting, are of growing interest in modern technology in a wide range of applications beyond their original dominance in the aerospace and automotive industries [1,2].In practice, however, the use of magnesium alloys is limited by their often insufficient corrosion resistance, as well as their resistance to wear.In industrial practice, the predominant way to obtain satisfactory corrosion resistance of products is their surface treatment by anode oxidation in a classic variant, and in particularly demanding applications, the modern variants of Plasma Electrolytic Oxidation (PEO) [3], which also offer increased mechanical properties, including hardness and wear resistance.Potential new specialized applications of magnesium alloys necessitate the search for adequate, effective surface engineering solutions.Among such presently investigated modern solutions one could mention Ni-diamond micro-composites coatings [4], lasercladded coatings [5] and new PEO treatment variants [6].The production of surface layers of nitrides on magnesium alloys of such metals as aluminum [7], chromium [8,9], titanium [8,10,11], zirconium [12][13][14], and their composites [15,16], and other advanced nitride-based solutions [17][18][19], due to the high corrosion and tribological resistance, seems to be also a promising direction, systematically explored by various researchers.One of the new promising areas of application of magnesium alloys is in biomedical applications, which have recently attracted the attention of researchers and engineers.This is due to the advantageous properties of magnesium such as biocompatibility and mechanical properties, in particular its bone-like stiffness, hence the concept of using suitable magnesium alloys for implants, including resorbable implants.Due to the high activity of magnesium, which can also be subject to more or less intense corrosion processes in body fluids, the key issue for the success of this concept in practice is the development of adequate controlled corrosion resistance of the magnesium alloy forming the implant.This can be achieved on the one hand by selecting the alloy forming the implant and, on the other hand, most effectively by surface engineering methods, producing suitable surface layers controlling corrosion processes.Recently, there has been a growing interest among researchers in using zirconium nitride-based layers in biomedical applications [12][13][14], due to their advantages [20], in particular biocompatibility, which is key in such applications.It should be noted that the surface nitride layers on magnesium alloys, including zirconium nitride layers, modify the corrosion properties of the magnesium alloy to a relatively small extent, particularly as regards changes in chemical activity, as measured by the corrosion potential (E corr ) [11,14].This is due to structural defects typical of Physical Vapor Deposition (PVD) methods used in their production processes, such as droplets and craters, which are the source of potential micro-leaks, through which the corrosive medium can penetrate through the layer to the substrate, and consequently, due to the conductive, cathodic nature of the nitride layers on magnesium alloys, leads to the formation of corrosion cells and accelerated galvanic corrosion [21].This unfavorable behavior of the layers, more or less effectively, may be counteracted by applying various solutions limiting the harmful effects of defects [22].In this work, in order to obtain high performance properties of the AZ91D alloy, in particular corrosion resistance, as well as resistance to wear, a solution was proposed based on the production of a composite three-zone zirconium nitride, zirconium and aluminum surface layer of the ZrN-Zr-Al type produced by a hybrid process using earlier developed [21] hydrothermal final sealing in boiling water bath.In particular, the main objective of the study was to verify the susceptibility of zirconium nitride-based layers to hydrothermal sealing.The origin of the concept of final sealing of zirconium nitride composite layers was in earlier works on the hybrid production of tight, highly resistant to corrosion and wear, TiN-Ti-Al titanium nitride composite layers, which showed outstanding effectiveness [21,23,24].
Materials and Methods
All layers tested in this work were produced on a substrate of pressure die casting magnesium alloy AZ91D containing 9.0 wt.% Al, 0.7 wt.% Zn and 0.1 wt.% Mn.Surface composite layers of the ZrN-Zr-Al type, consisting of an outer zirconium nitride zone, zirconium in the middle zone and an aluminum zone near the substrate, were produced on the substrates initially prepared by mechanical polishing, using diamond suspension with sequential powder gradation from 9 to 1 µm.A hybrid method was used, consisting of a combination of PVD processes, sequentially Magnetron Sputtering (MS) for Al and Zr and Arc Evaporation (AE) for ZrN, with the final hydrothermal sealing by immersion in a boiling water bath for 30 min [21].A scheme describing the above method is shown in Figure 1.The scheme additionally indicates the presence of typical defects arising during the deposition of subsequent sublayers by PVD methods, i.e., droplets and craters that are crucial from the point of view of corrosion resistance.The parameters of the methods used to produce the successive zones of the tested layer and the control samples are included in Tables 1 and 2. Table 1 additionally presents the designations of the samples used in the further part of the article for their identification.
in Figure 1.The scheme additionally indicates the presence of typical defects arising during the deposition of subsequent sublayers by PVD methods, i.e., droplets and craters that are crucial from the point of view of corrosion resistance.The parameters of the methods used to produce the successive zones of the tested layer and the control samples are included in Tables 1 and 2. Table 1 additionally presents the designations of the samples used in the further part of the article for their identification.First, the microstructure, chemical and phase composition as well as surface morphology of the produced layers were examined.Embedded in resin, ground, polished metallographic specimens were observed in a reflection metallographic microscope at magnifications up to 1000 times.X-ray diffraction analysis (XRD) was performed using a Rigaku SmartLab SE diffractometer using a Cu Kα lamp.The surface morphology was analyzed using a Scanning Electron Microscope (SEM) Hitachi SU8000.The accelerating First, the microstructure, chemical and phase composition as well as surface morphology of the produced layers were examined.Embedded in resin, ground, polished metallographic specimens were observed in a reflection metallographic microscope at magnifications up to 1000 times.X-ray diffraction analysis (XRD) was performed using a Rigaku SmartLab SE diffractometer using a Cu Kα lamp.The surface morphology was analyzed using a Scanning Electron Microscope (SEM) Hitachi SU8000.The accelerating voltage was 10 kV.The analysis of the chemical composition was carried out using Secondary Ion Mass Spectrometry (SIMS) on a Cameca IMS6F spectrometer (Cameca, Gennevilliers, France).SIMS measurement was conducted using a cesium (Cs + ) primary beam and secondary ions as measured, and secondary ions as MeCs + clusters were analyzed.The method of measuring nitrogen as NCs + and oxygen as OCs + clusters has been described elsewhere [25].
Corrosion resistance is a key aspect of suitability for use of the produced composite layers of the ZrN-Zr-Al type on the magnesium alloy AZ91D.It was tested using the potentiodynamic method on the AutoLab PGSTAT100 device from EcoChemic B.V. The range of potentials used in the study was from −1600 mV to −1000 mV at room temperature with a potential change rate of 0.1 mV/sec.The tests were carried out in a 0.5 M NaCl solution.The potential before sample polarization was stabilized by immersing the sample in the tested solution under electroless conditions for 60 min.The reference electrode was a saturated calomel electrode (Hg/Hg 2 Cl 2 /KCl) with a potential of +240 mV relative to the hydrogen electrode.The auxiliary electrode was a platinum electrode.
The mechanical durability of cathodic anti-corrosion layers on magnesium alloys, which include zirconium nitride layers, is crucial for maintaining corrosion resistance during the operation of machine parts and devices made of magnesium alloy in conditions of exposure to mechanical puncture damage, scratches or tribological wear.Any violation of the continuity of the layer under the influence of mechanical factors, similar to the natural, leaking defects of the structure, will result in the formation of corrosion microcells and, as a result, accelerated destruction of the layer as a result of galvanic corrosion.Therefore, in order to ensure the service life of products made of magnesium alloys exposed to corrosion, apart from corrosion resistance, high mechanical resistance is also required [23].Therefore, a number of mechanical tests were performed.The resistance of the layers to concentrated point loads was tested using a Vickers hardness indentation at a load of 9.81 N (HV1) in order to detect the layer's eventual cracking or exfoliation.Scratch resistance in the scratch test was performed on a Micro Combi Tester (CSM Instruments SA, Puseux, Switzerland) device.A Rockwell indenter was used.The load on the indenter increased from 1 N to 20 N, and the feed rate was 5 mm/min.The length of the scratch was 6 mm.On the basis of the recorded acoustic emission and microscopic observations after the test, the critical forces Lc 1 , Lc 2 and Lc 3 breaking the continuity of ZrN-Zr sublayers system were determined.The resulting scratches were observed in a Scanning Electron Microscope Hitachi S-3500N.The accelerating voltage was 15 kV.The distribution of elements in the scratch area was also studied using Energy Dispersive Spectroscopy EDS on the same microscope.Wear resistance by the roll on block method was performed in sliding wear conditions on a Amsler A-135 apparatus.Heat-treated C45 steel with a hardness of 35 HRC was used as the roll (counter-body).The test time was 60 min.Three tests were performed with loads of 10, 25 and 50 N.
Microstructure, Chemical and Phase Composition of the Layer
The sealed composite layer of the ZrN-Zr-Al type produced on the AZ91D alloy (ZrN-Zr-Al_S) was characterized by a macroscopically homogeneous appearance and no visible defects in the form of exfoliation or cracks.The microstructure of this layer is shown in Figure 2. In cross-section, the zirconium nitride layer has a characteristic golden color, the zirconium intermediate layer is dark grey, and the aluminum is light grey.
The results of the phase composition tests using X-ray phase analysis confirm that in the process of producing the outer surface layer of zirconium nitride, a nitride with the stoichiometry of ZrN was obtained.
Within the range of available magnifications in optical microscopy, no defects in the form of discontinuities or cracks, or decohesion of the layer from the substrate or between the sublayers were observed (Figure 2).The occurrence of defects typical for layers produced by PVD methods in the form of droplets and craters are mainly confined to the outer nitride zone and ZrN-Zr zirconium sublayer.As it can be assumed, by analogy to the previously developed TiN-Ti-Al composite titanium nitrides on the AZ91D alloy [24], the composite layer in this study has a diffusion character.As shown [24] in the hybrid manufacturing process, due to the increased temperature of the substrate in the process of deposition of the ZrN coating by evaporation in the arc and its hydrothermal sealing, diffusion processes should occur between the aluminum sublayer and the substrate, and probably also in a thin nanometric zone between the aluminum and zirconium sublayers.Within the range of available magnifications in optical microscopy, no defects in the form of discontinuities or cracks, or decohesion of the layer from the substrate or between the sublayers were observed (Figure 2).The occurrence of defects typical for layers produced by PVD methods in the form of droplets and craters are mainly confined to the outer nitride zone and ZrN-Zr zirconium sublayer.As it can be assumed, by analogy to the previously developed TiN-Ti-Al composite titanium nitrides on the AZ91D alloy [24], the composite layer in this study has a diffusion character.As shown [24] in the hybrid manufacturing process, due to the increased temperature of the substrate in the process of deposition of the ZrN coating by evaporation in the arc and its hydrothermal sealing, diffusion processes should occur between the aluminum sublayer and the substrate, and probably also in a thin nanometric zone between the aluminum and zirconium sublayers.
The thicknesses of individual component layers of the composite layer-outer zirconium nitride, intermediate zirconium and aluminum layers (Figure 2), were, respectively, ca. 2, 1 and 9 µ m, which gives a total layer thickness of ca. 12 µ m.
A view of the surface of the composite zirconium nitride layer ZrN-Zr-Al is shown in Figure 3.The surface shows a morphology typical for coatings deposited by arc evaporation with characteristic defects formed in this process, in the form of the aforementioned droplets and craters formed after some droplets chipping off (Figure 3).The thicknesses of individual component layers of the composite layer-outer zirconium nitride, intermediate zirconium and aluminum layers (Figure 2), were, respectively, ca. 2, 1 and 9 µm, which gives a total layer thickness of ca. 12 µm.
A view of the surface of the composite zirconium nitride layer ZrN-Zr-Al is shown in Figure 3.The surface shows a morphology typical for coatings deposited by arc evaporation with characteristic defects formed in this process, in the form of the aforementioned droplets and craters formed after some droplets chipping off (Figure 3).It should be noted that the morphology of the outer surface of the zirconium nitride layer, as a result of the hydrothermal sealing treatment in a boiling water bath, does not seem to change, which indicates that the hydrothermal treatment may not result in covering the surface with a homogeneous oxide film sealing it and its defects (Figure 3a,c), as was observed in our previous studies for the sealing of composite titanium nitride layers of the TiN-Ti-Al type [24].Moreover, the gaps separating the droplets from the nitride It should be noted that the morphology of the outer surface of the zirconium nitride layer, as a result of the hydrothermal sealing treatment in a boiling water bath, does not seem to change, which indicates that the hydrothermal treatment may not result in covering the surface with a homogeneous oxide film sealing it and its defects (Figure 3a,c), as was observed in our previous studies for the sealing of composite titanium nitride layers of the TiN-Ti-Al type [24].Moreover, the gaps separating the droplets from the nitride layer remain open (Figure 3d).Therefore, the SEM observation does not unambiguously settle whether a sealing oxide coating is formed on the surface of the zirconium nitride layer and its discontinuities.
A comparison of the distribution of selected elements in the unsealed layer (ZrN-Zr-Al), and in the hydrothermally sealed layer (ZrN-Zr-Al_S) as determined by the SIMS method (Figure 4), shows that the hydrothermal treatment results in an increased concentration of oxygen and magnesium at the interface of the zirconium sublayer and aluminum (Figure 4c).It leads to the conclusion that the sealing zone, most likely of magnesium hydroxide, is formed at the level of the interface between the zirconium and aluminum sublayers, in the aluminum sublayer, where magnesium diffusing from the substrate through the aluminum sublayer reacts with the environment of the boiling water bath, resulting in the formation of magnesium hydroxide, building discontinuities.The SIMS analysis results also shows a magnesium diffusion profile in the aluminum sublayer (Figure 4d) that confirms the diffusion character of the ZrN-Zr-Al composite layer.
Corrosion Resistance
Figure 5 shows the results of corrosion resistance tests using the potentiodynamic method in the form of the polarization curves of the tested variants of the AZ91D magnesium alloy, i.e., the variant with a composite layer in the initial state (as deposited) (ZrN-Zr-Al), in the state after sealing with the hydrothermal method (ZrN-Zr-Al_S) and for comparative variants, i.e., alloy without layer (AZ91D) and alloy with the reference zirconium nitride layer without aluminum sublayer (ZrN-Zr).Table 3, on the other hand, contains the corresponding values of corrosion parameters, i.e., corrosion potentials and currents.
Corrosion Resistance
Figure 5 shows the results of corrosion resistance tests using the potentiodynamic method in the form of the polarization curves of the tested variants of the AZ91D magnesium alloy, i.e., the variant with a composite layer in the initial state (as deposited) (ZrN-Zr-Al), in the state after sealing with the hydrothermal method (ZrN-Zr-Al_S) and for comparative variants, i.e., alloy without layer (AZ91D) and alloy with the reference zirconium nitride layer without aluminum sublayer (ZrN-Zr).Table 3, on the other hand, contains the corresponding values of corrosion parameters, i.e., corrosion potentials and currents.Analyzing the layout of the polarization curves, it can be seen that the formation of a layer of zirconium nitride on the zirconium sublayer on the AZ91D magnesium alloy (reference variant ZrN-Zr) causes the corrosion potential Ecorr to shift in the negative direction by ΔEcorr = −47 mV to the value of Ecorr = −1577 ± 0.3 mV, which indicates an increase in chemical activity, and therefore a deterioration in corrosion resistance.The corrosion current, on the other hand, decreases nominally by almost five times, which seemingly indicates a significant slowdown of corrosion processes; however, taking into account that the damage to the coating occurs locally, by creating a pit, the actual corrosion kinetics are probably significantly higher.A similar effect of nitride coatings on the corrosion behavior of the AZ91D alloy, manifested by the deterioration of corrosion resistance, was already observed in earlier works on titanium nitrides [24].The reasons for such unfavorable behavior of the alloy with nitride coatings, including zirconium nitrides, should be sought in the lack of tightness of the nitride coating associated with the occurrence of typical defects in the form of droplets and craters, characteristic for PVD methods used for their production, and even more so in the case of possible discontinuities of the layer in the form of micro-cracks or micro-flakes.Layer discontinuities, such as deep gaps between the surface of the droplets and the layer when they pass through the layer to the substrate, allow the corrosive environment to access the magnesium alloy substrate.Due to the conductive nature of most nitride coatings, including zirconium, and their cathodic nature in relation to the magnesium alloy, this leads to the formation of local corrosion microcells between the highly active magnesium alloy and the relatively Analyzing the layout of the polarization curves, it can be seen that the formation of a layer of zirconium nitride on the zirconium sublayer on the AZ91D magnesium alloy (reference variant ZrN-Zr) causes the corrosion potential E corr to shift in the negative direction by ∆E corr = −47 mV to the value of E corr = −1577 ± 0.3 mV, which indicates an increase in chemical activity, and therefore a deterioration in corrosion resistance.The corrosion current, on the other hand, decreases nominally by almost five times, which seemingly indicates a significant slowdown of corrosion processes; however, taking into account that the damage to the coating occurs locally, by creating a pit, the actual corrosion kinetics are probably significantly higher.A similar effect of nitride coatings on the corrosion behavior of the AZ91D alloy, manifested by the deterioration of corrosion resistance, was already observed in earlier works on titanium nitrides [24].The reasons for such unfavorable behavior of the alloy with nitride coatings, including zirconium nitrides, should be sought in the lack of tightness of the nitride coating associated with the occurrence of typical defects in the form of droplets and craters, characteristic for PVD methods used for their production, and even more so in the case of possible discontinuities of the layer in the form of micro-cracks or micro-flakes.Layer discontinuities, such as deep gaps between the surface of the droplets and the layer when they pass through the layer to the sub-strate, allow the corrosive environment to access the magnesium alloy substrate.Due to the conductive nature of most nitride coatings, including zirconium, and their cathodic nature in relation to the magnesium alloy, this leads to the formation of local corrosion microcells between the highly active magnesium alloy and the relatively nobel nitride coating [16,21,23], and as a result, galvanic corrosion causing local perforation of the layer by the mechanism of pitting corrosion.It should be noted that the occurrence of defects, and consequently leak thickness, is statistically unavoidable in the case of layers produced by PVD methods, therefore, by nature, such layers must be susceptible to galvanic corrosion.As a result, in order to obtain the absolute tightness necessary to eliminate the risk of galvanic corrosion and accelerated degradation, these layers require final sealing [22].The developed three-layer composite zirconium nitride layer in a deposited state, on the contrary, significantly increases the corrosion resistance (Figure 5, ZrN-Zr-Al curve), which is manifested by the shift of the corrosion potential in the positive direction by almost ∆E corr = 400 mV to the value E corr = −1147 ± 0.7 mV, while the corrosion current decreases by about an order of magnitude (about 5 µA/cm 2 ).What is more, the character of the polarization curve changes, on which a clearly marked, stable passive region appears, about 250 mV wide, with a breakdown potential of about E p = −905 ± 0.5 mV.The reason for such favorable behavior as observed earlier in the case of composite layers of the TiN-Ti-Al type [21,24] is certainly consistent with the assumption of a separation in the structure of the composite layer, the outer layer of zirconium nitride from the active substrate by a relatively thick, corrosion-resistant intermediate, sealing aluminum sublayer.The key sealing hydrothermal treatment of the composite layer of zirconium nitride in a boiling water bath, which ends the hybrid process, results in a further significant reduction in chemical activity (Figure 5, ZrN-Zr-Al_S curve), and consequently an improvement in corrosion resistance expressed by a shift of the corrosion potential in the positive direction to the value E corr = −665 ± 0.9 mV, i.e., further approx.∆E corr = 865 mV.This represents a radical reduction in chemical activity, unprecedented in the literature, and, as a result, a significant improvement in corrosion resistance.The exception is our previous works on analogous composite layers of titanium nitride of the TiN-Ti-Al type [21,23], in the case of which an even greater increase in corrosion resistance was obtained with a shift of the corrosion potential to positive values.The reasons for the differences in the behavior of the composite layers of titanium nitride and zirconium nitride with an aluminum sublayer are probably due to the different layer sealing mechanism.In the case of titanium nitride layers, it was shown to occur at the level of the titanium sublayer, hence the recorded corrosion potential, as for titanium, is positive [24].Sealing of the composite zirconium nitride layer with the aluminum sublayer, as indicated by the similar values of corrosion potentials recorded for the ZrN-ZR-Al_S composite layer and for the hydrothermally sealed and unsealed aluminum layers on the magnesium alloy AZ91D, occurs on the level of the aluminum sublayer [24].Moreover, this is supported by the results of the SIMS study (Figure 4), which reveal an increased concentration of oxygen and magnesium at the interface between the zirconium and aluminum sublayers in the case of the composite zirconium nitride layer subjected to hydrothermal treatment, while the oxygen levels in the outer zirconium nitride layer and the zirconium sublayer remain at a similar level.This effect suggests the formation of magnesium hydroxide sealing the interface between the two sublayers.It should be mentioned that due to the fact that earlier works [24] showed the ineffectiveness of attempts to seal the layers of TiN-Ti titanium nitrides with the hydrothermal method, i.e., without an aluminum sublayer, in this study similar attempts with regard to the analogous layers of zirconium nitride of the ZrN-Zr type were considered groundless.
Mechanical Properties
The results of mechanical damage resistance tests are shown in Figures 6-8.From the result of the Vickers hardness indentation test (Figure 6), it can be concluded that the composite layer does not show any tendency to cracking or exfoliating of layer fragments in the area of the indentation, which proves a qualitatively good connection of the component layers with each other and the composite layer with the substrate.Similarly, in the case of the scratch test (Figure 7a), until the outer zirconiu layer on the ZrN-Zr zirconium sublayer is completely removed, no damage to th the vicinity of the scratch in the form of cracks or exfoliation of the nitride la ments is observed.On the other hand, in the scratch trace, the nitride layer, start the critical load Lc1 = 1.88 ± 0.019 N (Table 4), successively cracks radially as the creases and is dented into the relatively plastic aluminum sublayer, but witho ating it.The first decohesion of the fragments of the zirconium nitride layer bet cracks, resulting in the exposure of the aluminum sublayer, is observed under th load Lc2 = 2.69 ± 0.027 N, and the complete removal of the nitride layer in the tra crack for the load Lc3 = 5.66 ± 0.057 N. It should be noted that the composite lay ZrN-Zr-Al type is damaged at lower critical load values than the reference lay conium nitride ZrN-Zr produced directly on the AZ91D alloy (Table 4), but th ages, as shown in Figure 7 (EDS), do not lead to the exposure of a highly chem tive magnesium substrate and, in case of potential contact with a corrosive envi pose a serious risk of accelerated galvanic corrosion.The aluminum sublaye scratching progresses and the load increases, is not subject to cracking or exfolia fragments, but due to its plasticity, it is gradually abraded, protecting the ma alloy substrate from the corrosive environment for a relatively long and effective complete removal from the surface of the AZ91D alloy occurs for a critical load approx.Lc3" = 12.5 ± 0.125 N, significantly exceeding the value of the critical for comparative variant ZrN-Zr, which is Lc3 = 8.93 ± 0.083 N, for which actually layer crack at Lc1 = 2.64 ± 0.026 N creates a critical risk of galvanic corrosion.A mechanical damage to the ZrN-Zr-Al type composite layer is localized in the o conium nitride layer and the zirconium sublayer, without affecting the cohesi aluminum sublayer with the magnesium alloy substrate, it leads only to a de corrosion resistance, and not to accelerated galvanic corrosion, as in the case o layers without an aluminum sublayer.It can therefore be assumed that the seal nium nitride composite layer of the ZrN-Zr-Al type on the AZ9D alloy ha Similarly, in the case of the scratch test (Figure 7a), until the outer zirconium nitride layer on the ZrN-Zr zirconium sublayer is completely removed, no damage to the layer in the vicinity of the scratch in the form of cracks or exfoliation of the nitride layer fragments is observed.On the other hand, in the scratch trace, the nitride layer, starting from the critical load Lc 1 = 1.88 ± 0.019 N (Table 4), successively cracks radially as the load increases and is dented into the relatively plastic aluminum sublayer, but without exfoliating it.The first decohesion of the fragments of the zirconium nitride layer between the cracks, resulting in the exposure of the aluminum sublayer, is observed under the critical load Lc 2 = 2.69 ± 0.027 N, and the complete removal of the nitride layer in the trace of the crack for the load Lc 3 = 5.66 ± 0.057 N. It should be noted that the composite layer of the ZrN-Zr-Al type is damaged at lower critical load values than the reference layer of zirconium nitride ZrN-Zr produced directly on the AZ91D alloy (Table 4), but these damages, as shown in Figure 7 (EDS), do not lead to the exposure of a highly chemically active magnesium substrate and, in case of potential contact with a corrosive environment, pose a serious risk of accelerated galvanic corrosion.The aluminum sublayer, as the scratching progresses and the load increases, is not subject to cracking or exfoliation of its fragments, but due to its plasticity, it is gradually abraded, protecting the magnesium alloy substrate from the corrosive environment for a relatively long and effective time.Its complete removal from the surface of the AZ91D alloy occurs for a critical load value of approx.Lc 3" = 12.5 ± 0.125 N, significantly exceeding the value of the critical force for the comparative variant ZrN-Zr, which is Lc 3 = 8.93 ± 0.083 N, for which actually the first layer crack at Lc 1 = 2.64 ± 0.026 N creates a critical risk of galvanic corrosion.As long as mechanical damage to the ZrN-Zr-Al type composite layer is localized in the outer zirconium nitride layer and the zirconium sublayer, without affecting the cohesion of the aluminum sublayer with the magnesium alloy substrate, it leads only to a decrease in corrosion resistance, and not to accelerated galvanic corrosion, as in the case of ZrN-Zr layers without an aluminum sublayer.It can therefore be assumed that the sealed zirconium nitride composite layer of the ZrN-Zr-Al type on the AZ9D alloy has a good prognosis in terms of service life of products made of this alloy, also in conditions of simultaneous corrosion and mechanical hazards.The result of the production of the ZrN-Zr-Al-type zirconium nitride composite layer on the AZ91D magnesium alloy on the intermediate zirconium and aluminum sublayers in the process is more than a 1.5-fold increase in surface hardness from 84 ± 7 HV0.05 for the alloy to 132 ± 4 HV0.05.Hardening the surface of the AZ91D alloy with a layer of zirconium nitride results in a significant, more than one order of magnitude, increase in resistance to wear in the load range up to 50 N in the modified Amsler roll on block test (Figure 8).It should be noted that during the test, the outer layer of zirconium nitride with a thickness of approx. 2 µm does not wear through.As can be seen, the wear of the ZrN-Zr-Al type layer in the roll on block test is nearly 2.5 times lower than for the ZrN-Zr layer without an aluminum sublayer.
1.
The effect of the zirconium nitride composite layer of the ZrN-Zr-Al type with zirconium and aluminum intermediate layers produced on the magnesium alloy AZ91D using PVD methods sealed in the final hydrothermal treatment process is a significant improvement in corrosion resistance in the 0.5M sodium chloride environment, manifested in the potentiodynamic test shift of the corrosion potential towards positive values by ∆E corr = 865 mV in comparison to the alloy without the layer (E corr = −1530 ± 1.4 mV).The unsealed ZrN-Zr-Al layer improves the corrosion resistance to a much lower extent (∆E corr = 385 mV).On the other hand, the zirconium nitride layer of the ZrN-Zr type, produced directly on the AZ91D alloy, reduces the corrosion resistance of the alloy with a negative shift of the corrosion potential to the value of E corr = −1577 ± 0.3 mV.The unfavorable effect of the ZrN-Zr-type zirconium nitride layer on the corrosion resistance of the AZ91D alloy is the result of the presence of inevitable defects in the layer structure, typical for PVD methods, in the form of droplets and craters, which are the source of micro-discontinuities of the nitride layer.This leads to the formation of corrosion cells between the cathodic zirconium nitride coating and the highly chemically active magnesium alloy, and consequently results in accelerated galvanic corrosion.Hence, the key role of the hydrothermal final sealing treatment of the composite layer of zirconium nitride with an intermediate layer of zirconium and aluminum plays a key role in achieving maximum tightness.
2.
The production of a composite layer of zirconium nitride on the intermediate zirconium and aluminum sublayer of the ZrN-Zr-Al type on the AZ91D magnesium alloy resulted in a more than two-fold increase in surface hardness.The effect of hardening the surface of the AZ91D alloy with a layer of zirconium nitride results in a significant, approximately more than one order of magnitude increase in friction wear resistance in the load range up to 50 N in the modified Amsler roll on block test, while the outer layer of zirconium nitride is not worn through.The increase in resistance observed for the ZrN-Zr layer without the aluminum sublayer is almost 2.5 times lower.The composite zirconium nitride layer of the ZrN-Zr-Al type in the Vickers hardness indentation test carried out using the Vickers HV1 indentation showed resistance to mechanical damage under the influence of concentrated loads.This layer also showed favorable behavior in the scratch test, because damage to the external layer of zirconium nitride during scratching, in the form of cracks, its local or even complete exfoliation, did not expose the substrate made of magnesium alloy AZ91D, but only the aluminum sublayer.During the scratching progresses and the load increases,
Figure 1 .
Figure 1.Scheme of the hybrid method for producing the composite zirconium nitride ZrN-Zr-Al type layers.
Figure 1 .
Figure 1.Scheme of the hybrid method for producing the composite zirconium nitride ZrN-Zr-Al type layers.
Figure 2 .
Figure 2. Microstructure of the sealed composite zirconium nitride layer of the ZrN-Zr-Al type.
Figure 4 .
Figure 4. Distribution of elements in a composite zirconium nitride layer of the ZrN-Zr-Al type (SIMS): (a) as deposited (ZrN-Zr-Al)-near surface area; (b) after hydrothermal treatment (ZrN-Zr-Al_S)-near surface area; (c) comparison of the distribution of selected elements for states (a,b); (d) after hydrothermal treatment (ZrN-Zr-Al_S), depth profiles of elements on a AZ91D magnesium alloy substrate.
Figure 4 .
Figure 4. Distribution of elements in a composite zirconium nitride layer of the ZrN-Zr-Al type (SIMS): (a) as deposited (ZrN-Zr-Al)-near surface area; (b) after hydrothermal treatment (ZrN-Zr-Al_S)-near surface area; (c) comparison of the distribution of selected elements for states (a,b); (d) after hydrothermal treatment (ZrN-Zr-Al_S), depth profiles of elements on a AZ91D magnesium alloy substrate.
Figure 5 .
Figure 5. Results of corrosion tests in 0.5M NaCl using the potentiodynamic method.
Crystals 2023 ,Figure 6 .
Figure 6.Image of the Vickers (HV1) indentation in the resistance to concentrated point of a sealed layer of the ZrN-Zr-Al type.
Figure 6 .
Figure 6.Image of the Vickers (HV1) indentation in the resistance to concentrated point loads test of a sealed layer of the ZrN-Zr-Al type.Crystals 2023, 13, x FOR PEER REVIEW 11 of 14
Figure 7 .
Figure 7.The scratch in the scratch test on the surface of the ZrN-Zr-Al type layer fragment: (a) microscopic image with the EDS analysis area localization-red rectangle (b-e) the distribution of elements (EDS) of Zr, N, Al and Mg, respectively, in the area of ZrN outside layer exfoliation corresponding to the critical force exceeding Lc2, in the area marked in the microscopic image (a) with a red rectangle.
Figure 7 .
Figure 7.The scratch in the scratch test on the surface of the ZrN-Zr-Al type layer fragment: (a) microscopic image with the EDS analysis area localization-red rectangle (b-e) the distribution of elements (EDS) of Zr, N, Al and Mg, respectively, in the area of ZrN outside layer exfoliation corresponding to the critical force exceeding Lc 2 , in the area marked in the microscopic image (a) with a red rectangle.
Figure 8 . 1 .
Figure 8. Linear wear values in the Amsler modified roll on block test.The relative error for the method is 13%.4. Conclusions 1.The effect of the zirconium nitride composite layer of the ZrN-Zr-Al type with zirconium and aluminum intermediate layers produced on the magnesium alloy AZ91D using PVD methods sealed in the final hydrothermal treatment process is a significant improvement in corrosion resistance in the 0.5M sodium chloride environment, manifested in the potentiodynamic test shift of the corrosion potential towards positive values by ΔEcorr = 865 mV in comparison to the alloy without the layer (Ecorr = −1530 ± 1.4 mV).The unsealed ZrN-Zr-Al layer improves the corrosion resistance to a much lower extent (ΔEcorr = 385 mV).On the other hand, the zirconium nitride layer of the ZrN-Zr type, produced directly on the AZ91D alloy, reduces the corrosion resistance of the alloy with a negative shift of the corrosion potential to the value of Ecorr = −1577 ± 0.3 mV.The unfavorable effect of the ZrN-Zr-type zirconium nitride layer on the corrosion resistance of the AZ91D alloy is the result of the presence of inevitable defects in the layer structure, typical for PVD methods, in the form of droplets and craters, which are the source of micro-discontinuities of the nitride layer.This leads to the formation of corrosion cells between the cathodic zirconium nitride coating and the highly chemically active magnesium alloy, and consequently results in accelerated galvanic corrosion.Hence, the key role of the hydrothermal final sealing treatment of the composite layer of zirconium nitride with an intermediate layer of zirconium and aluminum plays a key role in achieving maximum tightness.2. The production of a composite layer of zirconium nitride on the intermediate zirconium and aluminum sublayer of the ZrN-Zr-Al type on the AZ91D magnesium alloy resulted in a more than two-fold increase in surface hardness.The effect of hardening the surface of the AZ91D alloy with a layer of zirconium nitride results in a significant, approximately more than one order of magnitude increase in friction wear resistance in the load range up to 50 N in the modified Amsler roll on block test, while the outer layer of zirconium nitride is not worn through.The increase in resistance observed for the ZrN-Zr layer without the aluminum sublayer is almost 2.5 times lower.The composite zirconium nitride layer of the ZrN-Zr-Al type in the Vickers hardness indentation test carried out using the Vickers HV1 indentation showed resistance to mechanical damage under the influence of concentrated loads.This layer also showed favorable behavior in the scratch test, because damage to the
Figure 8 .
Figure 8. Linear wear values in the Amsler modified roll on block test.The relative error for the method is 13%.
Table 1 .
Variants of the investigated materials.
Table 2 .
PVD processes parameters used to produce the investigated layers.
Table 1 .
Variants of the investigated materials.
Table 2 .
PVD processes parameters used to produce the investigated layers.
Table 3 .
Values of potentials and corrosion current densities in tests in 0.5M NaCl using the potentiodynamic method.
Table 3 .
Values of potentials and corrosion current densities in tests in 0.5M NaCl using the potentiodynamic method.
Table 4 .
Values of critical forces Lc1, Lc2, Lc3 in the scratch test.
Table 4 .
Values of critical forces Lc 1 , Lc 2 , Lc 3 in the scratch test. | 9,174.2 | 2023-09-30T00:00:00.000 | [
"Materials Science"
] |
CRISPR/Cas9-mediated correction of mutated copper transporter ATP7B
Wilson's disease (WD) is a monogenetic liver disease that is based on a mutation of the ATP7B gene and leads to a functional deterioration in copper (Cu) excretion in the liver. The excess Cu accumulates in various organs such as the liver and brain. WD patients show clinical heterogeneity, which can range from acute or chronic liver failure to neurological symptoms. The course of the disease can be improved by a life-long treatment with zinc or chelators such as D-penicillamine in a majority of patients, but serious side effects have been observed in a significant portion of patients, e.g. neurological deterioration and nephrotoxicity, so that a liver transplant would be inevitable. An alternative therapy option would be the genetic correction of the ATP7B gene. The novel gene therapy method CRISPR/Cas9, which has recently been used in the clinic, may represent a suitable therapeutic opportunity. In this study, we first initiated an artificial ATP7B point mutation in a human cell line using CRISPR/Cas9 gene editing, and corrected this mutation by the additional use of single-stranded oligo DNA nucleotides (ssODNs), simulating a gene correction of a WD point mutation in vitro. By the addition of 0.5 mM of Cu three days after lipofection, a high yield of CRISPR/Cas9-mediated ATP7B repaired cell clones was achieved (60%). Moreover, the repair efficiency was enhanced using ssODNs that incorporated three blocking mutations. The repaired cell clones showed a high resistance to Cu after exposure to increasing Cu concentrations. Our findings indicate that CRISPR/Cas9-mediated correction of ATP7B point mutations is feasible and may have the potential to be transferred to the clinic.
Introduction
The genome editing tool CRISPR/Cas9 (clustered regularly interspaced short palindromic repeats (CRISPR) associated nuclease 9) offers a new gene therapeutic potential to efficiently target inherited monogenetic or infectious diseases. Within the last couple of years it has been used to correct the genetic basis of many diseases in animal models or isolated cells [1][2][3][4][5][6]. WD is an excellent model to study genetic corrections, since a majority of WD patients carry point a1111111111 a1111111111 a1111111111 a1111111111 a1111111111 mutations such as H1069Q, which is the most frequent mutation in the Caucasian population [7]. This inherited autosomal recessive disorder is caused by mutations in the ATP7B gene encoding for a copper (Cu) efflux pump [8]. It provokes a functional impairment of Cu excretion by the liver, followed by excess Cu deposition in organs, mostly in the liver and brain [9]. Patients display clinical heterogeneity ranging from acute or chronic liver failure and/or neurological symptoms [10]. The progression of WD can be partly ameliorated by zinc or chelating agents such as D-penicillamine and trientine [11][12][13]. Though these treatments are usually effective, severe side effects have been reported in a significant portion of WD patients [14,15]. As a result, patients may stop the medication, leading to an acute clinical presentation with rapid deterioration [10,16]. The only curative therapy remains an orthotropic liver transplantation [17]. Gene therapy may overcome the need for liver transplantation as well as the shortage of donor livers [18]. The traditional approach of gene therapy is to transfer a functional copy of the mutated gene within clinically relevant cells from the patient using viral vectors. For inherited metabolic diseases of the liver, the goal is to obtain high expression levels in the patient's hepatocytes correcting the disease phenotype. However, the risk of a gene therapy approach is that viral DNA may being incorporated randomly into cellular DNA, disrupting a valuable gene such as a tumor-suppressor gene [19]. Other genome-editing technologies have been widely used to modify or inactivate specific genes in therapeutic approaches or in functional studies, such as zinc finger nucleases (ZFNs) and transcription activator-like effector nucleases (TALENs) [20,21]. In 2013, the CRISPR/Cas9 system has become a powerful gene editing tool and replaced the previously developed technologies [22][23][24]. In this system, a single guide RNA (sgRNA) is used to guide the Cas9 nuclease to target DNA containing the protospacer adjacent motif (PAM), which is 5´-NGG -3´for Streptococcus pyogenes Cas9. A double-strand break (DSB) is generated by Cas9 at *3 base pairs upstream from the PAM region. Two major repair mechanisms may be activated after a DSB. The error-prone nonhomologous end joining (NHEJ) results in a variety of mutations, such as insertion/deletion (INDEL) frameshift mutations leading to transcript degradation [25]. Thus, NHEJ is used intentionally in order to initiate a gene knockout (KO). The second repair mechanism occurs during homologous recombination, represented by the homology-directed repair (HDR) [23,25], which repairs a DSB precisely using a DNA repair template. The HDR mechanism can only be utilized by the cell in the presence of a homologous set of DNA, usually the sister chromatid, within the G2 stage of the cell cycle. As a tool for site-specific single base corrections, the introduction of a single-stranded oligo DNA nucleotide (ssODN) into a target cell may lead to the repair of an aberrant gene after DSB [26,27]. The design of ssODNs as well as the directed shift from the NHEJ to the HDR pathway has been improved to enhance the efficiency of gene modification [28,29]. To prevent a re-editing by the highly active Cas9 nuclease, the introduction of blocking mutations in ssODNs located within the PAM sequence or guide RNA target have minimized undesirable re-editing during gene editing [30].
CRISPR/Cas9 technology has been applied in two studies targeting WD. Jiang et al. created a single amino acid substitute rabbit model for WD, representing the most frequent WD missense mutation in Asia (p. Arg778Leu) in exon 8 of ATP7B [31,32]. Lately, Liu et al. replaced exon 8 of the ATP7B gene in a mouse model using CRISPR/Cas9 [33]. However, a gene correction of ATP7B point mutations on human cellular level has not been described so far.
Prior studies by our lab established a novel ATP7B KO human intestinal cell line (Caco-2 cells) using CRISPR/Cas9 technology, demonstrating a crucial role of Cu and ATP7B in the storage, processing, and secretion of lipids in a human enterocyte model [34]. In the present study, our aim was to create a point mutation in order to mimic a WD-specific mutation leading to a loss of function of the ATP7B gene. Subsequently, the initiated point mutation within the ATP7B gene was repaired using the CRISPR/Cas9 system plus specific ssODNs, with focus on cell selection efficiency by Cu addition. Since WD is characterized by the dysfunction of the Cu transporting protein ATPase7B, genetically corrected cells can be positively selected in vitro by the addition of Cu [35].
Cell transfection
For CRISPR/Cas9-mediated ATP7B KO experiments, 0.5 x 10 6 HEK293T cells were seeded in one well of a 6-well plate using standard cell culture medium. The next day, 2 μg of the targeting vector PX459.ATP7B was transfected into HEK293T cells with Lipofectamine 2000 (Invitrogen, Carlsbad, CA, USA) according to manufacturer instructions. After 24 hours, cells were seeded as single cells in 96-well plates and selected with 1 μg/ml puromycin. After 72 hours, medium was changed to standard cell culture medium. Single-cell-derived clonal cell lines were obtained after 2 to 3 weeks for further analysis.
For CRISPR/Cas9-mediated ATP7B repair experiments, 0.5 x 10 6 HEK293TΔC cells were lipofected (Lipofectamine 2000) with 1 μg of PX459.ATP7BΔC plasmid plus a mixture of four different ssODNs (0.25 μg each), varying in total length (61, 81, 101 and 121 nt) (S1 Table). One group received four different ssODNs exhibiting three blocking mutations (ssODN_3M), another group received four ssODNS with two blocking mutations (ssODN_2M) and the last group was transfected with four ssODNs carrying no blocking mutations as a control (ssODN_C). Cells were positively selected with 0.5 mM of copper chloride (CuCl 2, Sigma Aldrich, St. Louis, MO, USA) in standard culture medium at 24 hours (day 1) or 72 hours (day 3) after lipofection for two days before medium was changed to standard culture medium. Two to three weeks after lipofection cells were plated to 96-well plates for single-cell-cloning.
After cultivation in 96-well plates for 15 days, the monoclonal cells were transferred in 6-well plates for 6 days. Chromosomal DNA of selected cell clones was isolated using QIAamp DNA mini kit (Qiagen), followed by Sanger sequencing using primers 5'-AGAGGGCTATCG AGGCAC-3' / 5'-GGGCTCACCTATACCACCATC-3' and Big Dye Version 3.1 (Life Technologies) to confirm editing efficiency.
Generation of ATP7B knockout by CRISPR/Cas9
An ATP7B point mutation was created in HEK293T cells to mimic a WD relevant genotype and apply gene therapy. CRISPR/Cas9 vector PX459.ATP7B was transfected into HEK293T to induce an ATP7B KO. After single-cell separation, 93 cell clones were tested on Cu sensitivity in an MTT assay using a Cu concentration of 0.25 mM. About 82% of the tested cell clones showed an increased Cu sensitivity, suggesting an impaired Cu detoxification. Further analysis of cell viability in presence of different Cu concentrations confirmed previous results of increased Cu sensitivity (Fig 1A). At a Cu concentration of 0.5 mM, all of the tested cell clones showed no cell viability, whereas 24.7% of the HEK293T WT cells were vital. For sequence analysis 7 cell clones with decreased Cu resistance were cultivated and Sanger sequencing was performed. All analyzed cell clones indicated deletions in exon 2 of ATP7B (S2 Table). Cell clone #1 harbors a deletion of one cytosine nucleotide (p.E396KfsX11) of exon 2 (Fig 1B and S2 Table). This clone was named HEK293TΔC and used in the following for CRISPR/Cas9-mediated repair.
Generation of ATP7B knockin by CRISPR/Cas9
Here, the genetic correction of an ATP7B point mutation using CRISPR/Cas9 technology was assessed. Previous repair experiments of the HEK293T ATP7B KO clone using the PX459. ATP7B plasmid and ssODNs showed neither an integration of the deleted cytosine (C) nucleotide, nor an integration of silent mutations or any Cas9 activity. Thus we assumed that the sgRNA of the PX459.ATP7B plasmid did not match to the protospacer region of the HEK293T ATP7B KO clone (Clone #1, HEK293TΔC cells) exhibiting a deletion of a C nucleotide. Using site-directed mutagenesis we deleted the C nucleotide of the PX459.ATP7B plasmid within the sgRNA and generated the PX459.ATP7BΔC plasmid for application in subsequent repair experiments.
HEK293TΔC cells were transfected using the PX459.ATP7BΔC plasmid plus a mixture of four different ssODNs that varies in total length (Fig 2 and S1 Table). In order to test the transfection efficiency and intensity, HEK293TΔC cells were transfected with a plasmid encoding GFP (pmaxGFP, Amaxa, Köln, Germany) and analyzed by fluorescence microscopy. 24 hours after lipofection an estimated cellular number of 90% displayed an intense GFP expression (Fig 3A). PX459.ATP7BΔC transfected cells were positively selected with 0.5 mM of Cu, 24 hours (day 1), or 72 hours (day 3) after lipofection, or with no Cu for a period of two days. Cells recovered two to three weeks after Cu treatment, and vital cells of every group underwent single-cell-cloning. Stable cell populations grew within three weeks after plating of the cells. A total of 126 cell clones were cultivated, of which 93 cell clones were analysed by Sanger sequencing. An overall CRISPR/Cas9 activity of 73% was calculated, including homozygous and heterozygous repaired cell clones, plus cell clones indicating deletions, which was independent of Cu or ssODN treatment (see sections below). Almost 50% of the cells indicated an ATP7B repair, of which 12% showed a homozygous repair (Fig 3B), and 37% a heterozygous ATP7B gene editing (Fig 3C).
Copper selection post transfection enhances repair efficiency
One of the aims of this study was to increase the efficiency of an ATP7B repair by the addition of Cu. Three groups of cells were transfected with plasmid PX459.ATP7BΔC plus ssODNs, and two of these groups were selected with 0.5 mM of Cu, one group received no Cu. The Cu selection that started 24 hours (day 1) after lipofection resulted in 41.9% CRISPR/Cas9 activity, which was comparable to the group that received no Cu (41.6%). Cu selection that started 72 hours (day 3) after lipofection led to an overall CRISPR/Cas9 activity of 100% (Fig 4A). These results show that a later start of cell selection using Cu leads to a higher yield of the CRISPR/Cas9 activity, based on heterozygous and homozygous clones and clones with deletions. Considering the CRISPR/Cas9 repair efficiency comprising heterozygous and homozygous repaired cell clones, a Cu selection three days after lipofection led to 60% of repaired clones, whereas a Cu selection one day after lipofection produced only cell clones exhibiting deletions (0% repair) (Fig 4B). Without any Cu selection, the repair efficiency was at least 25%, which may indicate that a selection with Cu at an early time point after lipofection reduces the yield of repaired clones.
Assessment of ssODN quality
To identify the best conditions for a CRISPR/Cas9-mediated repair of a point mutation within the ATP7B gene, different ssODNs have been used in the current study. All three groups received a cocktail of ssODNs varying in total lengths of 61, 81, 101 and 121 nt, but differed in the number of silent or blocking mutations. The quality of the applied ssODNs was measured based on their repair efficiency. The repair efficiency of the control group exhibiting no blocking mutation (ssODN_C) was 9.5%. The group that received ssODNs with two blocking mutations (ssODN_2M) showed an efficiency of 31%, whereas the percentage of clones repaired with ssODNs carrying three blocking mutations (ssODN_3M) was 58%, indicating the most progressive attainment (Fig 4C). When ssODNs with a total of three silent mutations were used in a CRISPR/Cas9-mediated repair, a total of two mutations were most often inserted. When ssODNs carrying two silent mutations were used, only one mutation was most often inserted. We next addressed the question at which position a blocking mutation was most frequently integrated. All 18 clones being treated with ssODN_3M and undergoing homo-or heterozygous repair incorporated mutation No. 2. This blocking mutation is located two nucleotides upstream the repair site and within the guide RNA binding sequence (Fig 2). 16 out of 18 clones additionally integrated mutation No. 1, which is positioned 11 nucleotides upstream the repair site, and is located also within the guide sequence. 5 out of 18 clones additionally incorporated the blocking mutation No. 3, which is positioned seven nucleotides downstream the repair site, immediately behind the PAM region, but beyond the guide sequence. The group treated with ssODN_2M revealed 13 homo-or heterozygous repaired cell clones that all incorporated blocking mutation No. 2 and three out of 13 clones additionally integrated mutation No. 3. In summary, mutation No. 2 was the most frequent blocking mutation, which was inserted by ssODN_2M or ssODN_3M.
CRISPR/Cas9-mediated ATP7B repair generates Cu resistance
To evaluate the cellular Cu resistance after a CRISPR/Cas9-mediated repair of a point mutation within the ATP7B gene, we incubated the rescued cell clones to various Cu concentrations and measured the cell viability in an MTT assay (Fig 5). Four heterozygous repaired cell clones and four homozygous repaired cell clones were compared to parental HEK293TΔC cells and HEK293T ATP7B WT cells carrying an intact ATP7B gene. No differences in cellular viability were observed comparing the homo-and heterozygous repaired clones with HEK293T ATP7B WT cells, indicating a regained resistance to increasing Cu concentrations after CRISPR/Cas9 treatment. Cell viability of HEK293TΔC (KO) cells significantly differed to all other cell groups, ranging from 0.2-1.2 mM Cu. At a Cu concentration of 0.6 mM, HEK293TΔC KO cells showed no cell survival, whereas all ATP7B KI cells indicated a cell survival of 46-48%. Interestingly, no differences have been observed between homozygous and heterozygous ATP7B KI groups with regard to Cu sensitivity.
Since the ATP7A gene is a copper-transporting P-type ATPase mainly expressed in nonhepatic tissues, the expression level was analyzed by real-time RT PCR in HEK293T WT, KO and KI cells (S1 Fig). The results indicate no significant change within the ATP7A expression after induction of CRISPR/Cas9-mediated KO or KI of ATP7B as compared to WT HEK293T cells.
ATP7B protein is restored after CRISPR/Cas9-mediated repair
In order to define the protein expression of ATP7B after CRISPR/Cas9-mediated repair a Western Blot analysis was performed, comparing HEK293T ATP7B WT cells with the HEK293TΔC (KO) cells, four homozygous repaired cell clones, and four heterozygous repaired cell clones (Fig 6). ATP7B protein expression was not detectable in HEK293TΔC cells, whereas ATP7B protein expression in all repaired homozygous cell clones was found to be as high as in HEK293T ATP7B WT cells, indicating a restoration of the ATP7B protein. Heterozygous cell clones showed a substantial protein expression, which was generally lower as compared to homozygous repaired cell clones, according to a lower protein synthesis by one repaired allele.
Discussion
Since WD is characterized by mutations of the ATP7B Cu transporter new gene therapeutic treatment options are eligible and of high demand. This study is the first to demonstrate a gene correction of an ATP7B point mutation on human cellular level. First, using CRISPR/ Cas9 gene editing, an ATP7B KO was successfully initiated in the human cell line HEK293T. After addition of Cu, KO cell clones showed a decreased cellular viability and indicated various deletions on exon 2, confirming a loss of function of ATPase7B. About 122 mutations on exon 2 have been registered so far, affecting the first four Cu binding sites of the ATP7B gene [39,40]. Accordingly, the deletion of HEK293TΔC cells on exon 2 (c.1184delC) impairs the fourth Cu binding site. Interestingly, a naturally occurring mutation (c.1186G>T) was detected in the close proximity of this artificial mutation [41]. It is located between the PAM site and the cutting site of the current study, comprising a substitution (GAA>TAA), which leads to a stop codon. This homozygous point mutation was detected in six Egyptian children with WD that displayed neurological and/or hepatic manifestations. Since the CRISPR/Cas9-mediated point mutation of the current study (c.1184delC) also compromises the fourth Cu binding site it is conceivable that this mutation may have clinical relevance.
A gene correction of this point mutation was initiated using CRISPR/Cas9 technology and specific ssODNs. Previous experiments demonstrated that the repair of a point mutation cannot occur if the sgRNA sequence of the expressing plasmid does not fully match with the protospacer region of the target DNA. Apparently, if only one nucleotide is missing the base pairing and the following repair will not take place, demonstrating that the sgRNA used here is highly specific. We solved this problem by adapting the sgRNA of the plasmid to the C deletion of the HEK293TΔC cells using site-directed mutagenesis.
Combined with the appropriate selection method, in this case the addition of 0.5 mM of Cu three days after lipofection, a high yield of CRISPR/Cas9-mediated ATP7B repaired cell clones was achieved (60%). Moreover, the repair efficiency was enhanced using ssODNs that incorporated blocking mutations, which increase HDR accuracy by preventing re-cutting of the repaired allele by Cas9 nuclease [30]. The percentage of clones repaired with ssODNs carrying three blocking mutations (ssODN_3M) was 58%, indicating the most progressive attainment. Thus, the application of ssODNs carrying three blocking mutations, with at least two blocking mutations located within the guide RNA binding sequence, may represent to be a valuable tool for the repair of point mutations. In summary, it was shown that all 31 homo-and heterozygous repaired clones treated with either ssODN_3M or ssODN_2M have incorporated the blocking mutation at position No. 2, which is the closest to the repair site. Since mutation No. 3, which was located outside of the guide RNA sequence and was the least frequently installed mutation, one could argue that a blocking mutation at this position has no significant effect on the repair rate. Thus, the highest repair rate could be achieved if the blocking mutations were within the guide RNA region. However, since the highest repair rate was detected within the group with ssODN_3M, it could be assumed that the use of ssODNs carrying a total of three silent mutations is required in order to achieve a high CRISPR/Cas9-mediated repair rate. In order to determine the function of the ATPase7B after the repair of an ATP7B point mutation, heterozygous and homozygous repaired cell clones were incubated in high Cu concentrations. Cellular viability of both repaired cell groups was as high as in HEK293T ATP7B WT cells, indicating a regain of Cu transporting function of ATPase7B. Interestingly, there was no difference observed between both groups, demonstrating that even a heterozygous repair of a point mutation on one allele leads to resistance to high Cu concentrations. This may go along with the fact that ATP7B heterozygous patients present no or mild clinical symptoms [42].
ATP7A gene is a copper-transporting P-type ATPase and mainly expressed in non-hepatic tissues such as kidneys. Therefore, one could assume that ATP7A may compensate a high and toxic Cu concentration in HEK293T cells, when ATP7B is knocked out. However, since the ATP7A expression in all 3 cell lines (WT, KO and KI) is in the same range, it can be assumed that ATP7A has no effect on the compensation of the toxic copper. Moreover, it also proves that the CRISPR/Cas9 treatment does not affect the ATP7A expression.
Since WD is characterized by multiple different forms of heterozygous, homozygous and compound heterozygous mutations, ranging from point mutations, insertions or deletions, a potential therapeutic treatment using CRISPR/Cas9 technology has to be individualized for every WD patient before administration. Moreover, delivery of therapeutic CRISPR/Cas9 molecules to the liver has to be guaranteed. Investigations on liver-specific targeting of the CRISPR/Cas9 system have been demonstrated by e.g. Singh et al., using adeno-associated virus (AAV) 9-delivery of truncated guide RNAs and Cas9 under the control of a computationally designed hepatocyte-specific promoter, leading to liver-specific and sequence-specific targeting in the mouse factor IX (F9) gene [43]. Compared to the use of viral vectors for gene therapy, which may activate the innate or adaptive immune system and leads to severe inflammatory response, CRISPR/Cas9 application may also be used to circumvent these issues by non-viral application, thus representing the safer therapy option. Since the liver has the advantage of being a target organ for oligonucleotide therapeutics [44,45], a systemic application of naked plasmid DNA offers the opportunity for a high yield of gene editing in monogenetic liver diseases. In animal models, this was accomplished by hydrodynamic delivery, which is an effective non-viral method of liver-targeted gene delivery via blood circulation [46][47][48]. Once in the liver, the pressurized solution enlarges the liver fenestrae, and forces the permeability of the plasma membrane to allow the DNA to enter the cells [49]. Several studies combined this method with the application of CRISPR/Cas9 therapeutic molecules to treat rare liver diseases [50][51][52]. In a mouse model of the human disease hereditary tyrosinemia Yin et al. demonstrated a CRISPR/Cas9-mediated correction of the Fah mutation in hepatocytes [53]. The hydrodynamic injection of components of the CRISPR/Cas9 system resulted in an expression of the wild-type Fah protein in~1/250 liver cells and rescued the body weight loss phenotype. The same group developed an optimal set for a safer clinical application by using chemically modified RNAs [54]. These studies underline the potential of the CRISPR/Cas9 system for allele-specific genome editing in WD. Since WD livers exhibit high concentrations of Cu, a CRISPR/Cas9-mediated repair of the ATP7B gene may benefit from this condition as a selection advantage.
Lately, CRISPR/Cas9 engineered T cells were applied in patients with refractory cancer, demonstrating the feasibility of CRISPR gene editing for cancer immunotherapy [55]. Although there are still limitations of CRISPR/Cas9 application in the clinic, e.g. in vivo offtarget effects, or possible immune responses to the Cas9 protein, which is a bacterial enzyme, this technology may be a first step towards curing WD. The current study demonstrates that CRISPR/Cas9 technology is not only highly efficient in introducing specific ATP7B mutations, but also in correcting ATP7B point mutations, which are highly frequent in WD patients.
While the use of ssODNs is limited to non-viral delivery methods, the application of these therapeutic molecules may initiate a direct and safe correction of point mutations within the ATP7B gene, thus contributing to a WD gene modification with high therapeutic potential in clinical application.
Supporting information S1 Raw image. Original Western Blot image. ATP7B protein expression of four homozygous cell clones and four heterozygous cell clones were compared to HEK293T WT cells and ATP7B KO cells. β-Actin staining was used as a protein loading control. Panels 3 to 12 were used to create Table. List of single-stranded oligo DNA nucleotides (ssODNs). All 12 ssODNs carry the reintroduced cytosine nucleotide (red), and the PAM region (blue). Group ssODN_3M (1-4) carry three blocking mutations, group ssODN_2M (5-8) carry two blocking mutations, shown as capital letters. Group ssODN_C (9)(10)(11)(12) | 5,703.6 | 2020-09-30T00:00:00.000 | [
"Biology",
"Medicine"
] |
A Cyber-Physical-Human System for One-to-Many UAS Operations: Cognitive Load Analysis
The continuing development of avionics for Unmanned Aircraft Systems (UASs) is introducing higher levels of intelligence and autonomy both in the flight vehicle and in the ground mission control, allowing new promising operational concepts to emerge. One-to-Many (OTM) UAS operations is one such concept and its implementation will require significant advances in several areas, particularly in the field of Human–Machine Interfaces and Interactions (HMI2). Measuring cognitive load during OTM operations, in particular Mental Workload (MWL), is desirable as it can relieve some of the negative effects of increased automation by providing the ability to dynamically optimize avionics HMI2 to achieve an optimal sharing of tasks between the autonomous flight vehicles and the human operator. The novel Cognitive Human Machine System (CHMS) proposed in this paper is a Cyber-Physical Human (CPH) system that exploits the recent technological developments of affordable physiological sensors. This system focuses on physiological sensing and Artificial Intelligence (AI) techniques that can support a dynamic adaptation of the HMI2 in response to the operators’ cognitive state (including MWL), external/environmental conditions and mission success criteria. However, significant research gaps still exist, one of which relates to a universally valid method for determining MWL that can be applied to UAS operational scenarios. As such, in this paper we present results from a study on measuring MWL on five participants in an OTM UAS wildfire detection scenario, using Electroencephalogram (EEG) and eye tracking measurements. These physiological data are compared with a subjective measure and a task index collected from mission-specific data, which serves as an objective task performance measure. The results show statistically significant differences for all measures including the subjective, performance and physiological measures performed on the various mission phases. Additionally, a good correlation is found between the two physiological measurements and the task index. Fusing the physiological data and correlating with the task index gave the highest correlation coefficient (CC = 0.726 ± 0.14) across all participants. This demonstrates how fusing different physiological measurements can provide a more accurate representation of the operators’ MWL, whilst also allowing for increased integrity and reliability of the system.
Introduction
Advancements in technologies such as Artificial Intelligence (AI), sensor networks and agent-based systems are rapidly changing the operations of Unmanned Aircraft Systems (UASs) and are introducing systems with higher levels of intelligence and autonomy [1]. Particularly, system automation is becoming increasingly complex with heterogeneous sensor networks and algorithms that incorporate increasing amount of input data and with multiple objectives. A negative effect of this complexity is the human operators' loss of Situational Awareness (SA) and the increase in Mental Workload (MWL) in certain scenarios, where it is paradoxically meant to alleviate MWL [2]. A Cyber-Physical-Human (CPH) system is a particular class of Cyber-Physical Systems (CPS), which fundamentally addresses these issues. The implementation of a CPH system is vital as it ensures that the human maintains a central role in the operation of the system as the Human-Machine Interfaces and Interactions (HMI 2 ), intelligence and autonomy advances.
The measurement of cognitive load, particularly MWL, in real-time gives CPS the ability to sense and adapt to the human operator. The proposed Cognitive Human Machine System (CHMS) is a CPH system concept that incorporates system automation support, which modulates as a function of measured cognitive state of the human operator [3][4][5]. Among other functions, the system allows dynamic adaptation of the system Automation Level (AL) and actual command/control interfaces, while maintaining desired MWL and the highest possible level of situational awareness. This new adaptive form of HMI 2 is central to support the airworthiness certification and widespread operational deployment of One-to-Many (OTM) systems in the civil aviation context [6][7][8].
An important consideration for a CHMS is implementing a sensor network with different physiological measurements, such as those originating from electrical and metabolic brain activity, eye movement activity and cardiorespiratory activity. This is important as each physiological parameter observes different biological processes, and their corresponding sensors are thus sensitive to signal contamination originating from distinctly different disturbances. For instance, Electroencephalogram (EEG) electrodes are prone to internal and external artifacts such as eye blinks, movement, heartbeat artifacts and other electromagnetic interference [9], whereas blink rate and pupillometry are among other sensitive to ambient light stimuli [10,11]. Henceforth, in a CHMS the monitoring of multiple parameters in a sensor network ensures the integrity of the system [3]. Such a sensor network is also natively suited to exploit data fusion of the physiological measurements to increase the overall accuracy and reliability of the human operators' estimated MWL. The disturbances mentioned above additionally mean that it is challenging to identify the true signal of interest from the noise. As such, the comparison with other MWL measures, such as subjective questionnaires and objective task performance measures, is important for cross referencing with the physiological measures, in order to verify that they are correctly and accurately measuring MWL. Moreover, having additional MWL measures is needed for potentially implementing them as labels for inference methods such as supervised Machine Learning (ML) techniques in the training/calibration phase [12].
The measurement of the physiological response and inferring cognitive states, with and without system adaptation has been demonstrated in previous studies [12][13][14][15][16][17][18]. However, there are still considerable challenges with the implementation of such methods, where some extensive reviews have identified that measures of MWL are not universally valid for all task scenarios [19,20]. A reason for this is that the physiological responses for MWL can be scenario dependent and are thus influenced by a range of individual differences and task characteristics [20].
In this paper we present a study with two physiological sensors, including an EEG and eye tracker, as well as a secondary task performance index and a subjective questionnaire as measurements of MWL in an OTM UAS wildfire detection mission. Here the participants assume the role of a UAS pilot controlling multiple Unmanned Aerial Vehicles (UAVs), where the task scenario is designed to incrementally increase in difficulty throughout a 30-min mission. This study capitalizes on existing approaches for measuring MWL and proposes a multi-sensory approach, with the data fusion of the eye tracking and EEG measures. This extends the research on the CHMS concept and demonstrates the ability to measure MWL in a complex OTM UAS task scenario. As such the contribution of this study is the relationship between the physiological and objective measures in the context of CHMS for OTM UAS operations. The contribution towards the development of a real-time measure of a human operators' MWL will support the implementation of more adaptive and intelligent forms of automation in OTM UAS operation.
Background on Mental Worklaod (MWL) and MWL Measurements
Among the various forms of cognitive load, MWL is of central importance as it influences the operators' performance and thus the system performance [21]. MWL is a complex construct and is challenging to define accurately [22], however MWL is assumed to be a reflection of the level of cognitive engagement and effort as an operator performs one or more tasks. Henceforth, a general definition of MWL is "the relationship between the function relating the mental resources demanded by a task and those resources available to be supplied by the human operator" [23]. Mental workload can thus be determined by exogenous task demands and endogenous supply of processing resources (i.e., attention and working memory). A notable distinction to make is between MWL and task load, where MWL reflects the operators' subjective experience while undergoing particular tasks during certain environments and time constraints. However, task load is the amount of work or external duties that the operator has to perform [24]. The operators' resulting MWL can thus be an outcome of the task demand and also endogenous factors such as experience, effort, stress and fatigue [25].
A significant human factor concern for complex, safety-critical aerospace systems is the prevention of suboptimal MWL such as mental underload and overload. Both are discriminated by referring to the source of error during operation, where the former relates to reduced alertness and lowered attention, while the latter refers to information overload, diverted attention and/or insufficient time required for information processing [2,21]. This relationship between MWL and operator performance can be modeled with the inverted U function, which indicates when an operator enters suboptimal workload that can lead to errors and accidents [26].
The present method of measuring MWL is generally done with either subjective measures, performance measures or physiological measures. Subjective measures include having the operator fill out questionnaires and self-confrontation reports such as NASA Task Load Index (NASA-TLX, [27]) and Instantaneous Self-Assessment (ISA, [28]). These measures are not in real-time and are generally collected following the completion of a task or at infrequent intervals during the experiment. Overcoming this challenge would mean interrupting the participant/operator more frequently, which would take away attention and mental resources from the primary task. Moreover, as questionnaires are self-reported the answers are prone to bias and a peak end effect [29].
Task performance measures can be further categorized into primary task performance measures and secondary task performance measures. The task performance measures generally evaluate speed or accuracy including tracking performance, reaction time or number of errors, where it can be seen as the overall effectiveness of the human machine interaction [30]. As compared to subjective questionnaires, task performance measures can be collected at much more frequent intervals. When additional tasks are added to the demand, secondary measures or dual task technique can be used. This could include the operator performing a primary task varying in cognitive demand, while having to fulfill a relatively low-demand secondary task, such as pressing a button immediately prior to hearing a tone. Here it is assumed that as cognitive capacity is increased by the primary task, there is less capacity available for the secondary task [31]. Although a more widely accepted measure for MWL, secondary task performance can be disturbing as it interferes with the primary task and may not be operationally relevant [32].
Controller inputs have also been used as a potential task-based measure of the operators' cognitive load [33]. Among other, this measurement includes measuring the speed at which the operator responds to a task or accuracy of clicking a button. However, since reaction speed and accuracy measures Sensors 2020, 20, 5467 4 of 21 are usually difficult to implement for more complex tasks, a more straightforward implementation involves the rate of control inputs, or the count of control inputs within a given time.
Lastly, physiological measures are derived from the operators' physiology, and include measures from two anatomically distinct structures, namely the Central Nervous System (CNS) and Peripheral Nervous System (PNS) [34]. From these categories the physiological response of interest for passive control of the system are the involuntary reactive responses of the human operator [30]. These physiological measures have in recent years gained traction with the new technological developments and affordable prices and can allow for objective, unobtrusive and real-time measurement of MWL. Although there are numerous techniques for performing physiological response measures, the current notable ones include eye tracking measures, EEG, Functional Near Infrared Spectroscopy (fNIR), Electromyogram (EMG) and Electrocardiogram (ECG). In previous studies, the measurement of MWL has been demonstrated to modulate task load based on mental overload cases including the use of EEG measures in an Air Traffic Management (ATM) scenario [14]. Another study has contrarily modulated task load based on a mental underload case, where the difficulty presented to a pianist increased when an fNIR sensor detected that the presented material became too easy for the participant [18]. More commonly however, studies have mainly measured cognitive states in response to task load without dynamic task adaptation [13,15,17,35]. Nonetheless, the inference of cognitive states based on physiological data is still an active area of research, with the most promising avenue being the use of AI techniques including supervised Machine Learning (ML) to generate models of the users' cognitive states based on labeled data [12,13,[15][16][17]. For a more detailed review on the various physiological sensors, and the corresponding methods implemented for processing MWL measurements see the following reference [3].
For this study EEG and eye tracking measures were used, as such the remaining section outlines the EEG and eye tracking methods as needed for this study. When applied in clinical use, EEG frequencies bands are generally categorized into five different ranges. These include delta (δ, <4 Hz), theta (θ, 4-7 Hz), alpha (α, 8-12 Hz), beta (β, 12-30 Hz) and gamma (γ, >30 Hz) [36]. The layout of the electrode placement is standardized and follows the international 10-20 system. Previous studies have indicated that the changes in workload are observed with variations in the theta and alpha bands [35,[37][38][39][40][41][42]. More specifically, with higher workload the power in the theta band has been observed to increase at the frontal and central regions [35,37,41], while in the alpha band there has been observed a decrease in power at the left and right occipital regions [41]. Additionally, previous studies have indicated that 4-6 electrodes are sufficient to achieve accurate EEG recordings of cognitive states [43].
Eye tracking features can be deduced from gaze features or pupillometry and is performed with either wearable or remote sensors. Gaze features further includes fixation, saccade, dwell, transition and scan path, while pupillometry includes eye closure, blink rate and pupil radius [44]. In regard to gaze features, the scan path can allow for more complex features to be extracted such as visual entropy [45]. The eye tracking features correlated with the cognitive state include fixation, blink rate, saccades, pupil diameter, dwell time and visual entropy [3]. Visual entropy provides a particularly useful measure, where studies have shown that visual entropy was able to discriminate between control modes and flight phases associated with different levels of MWL [46]. This measure uses the randomness of the users' gaze patterns and once Areas of Interest (AOI) have been defined on the Human Machine Interface (HMI), visual entropy can be simply calculated from gaze data as a single, easily interpretable value.
Cognitive Human Machine System (CHMS) and Design Considerations
The proposed CHMS is based on an advanced CPH architecture incorporating both adaptive interfaces and automation support, which are modified dynamically as a function of the human operators' cognitive states as well as other relevant operational/environmental observables. The counterpart of a CPH system is an Autonomous Cyber-Physical (ACP) system, which operates Sensors 2020, 20, 5467 5 of 21 without the need for human intervention or control. Many of the CPS implemented today are a part of the subclass Semi-Autonomous Cyber-Physical (S-ACP) system that perform autonomous tasks in certain predefined conditions but require a human operator otherwise. However, the S-ACP systems are unable to dynamically adapt in response to external stimuli. Hence a CPH system addresses this as the interaction between the dynamics of the system and the cyber elements of its operation can be influenced by the human operator, and the interaction between these three elements are continuously modulated to meet specific objectives.
A key feature of the CHMS, initially described in [4,5], is the real-time physiological sensing of the human operator to infer cognitive states that drive system adaptation. In its fundamental form, the CHMS framework can be depicted as a negative feedback loop as seen in Figure 1 below. Here, MWL is used as the reference for modulating the automation support and interface, where the resulting MWL for the human operator is a function of the task load (i.e., the number of tasks and/or task complexity) and the operators' endogenous factors (i.e., expertise, time pressure, etc.). Hence, when the workload is measured to increase or decrease beyond the specified thresholds, the adaptation module is activated to modulate the operators' task load, which can be done by changing the automation level, task scheduling and/or changing the interface. The operation of CHMS is expected to provide benefits for several aerospace areas apart from OTM UAS operations including ATM [47], Urban Traffic Management (UTM) [48] and Single Pilot Operation (SPO) [5,7]. The operation of the CHMS in all these applications will support the systems to operate at higher levels of autonomy while ensuring that the human operator maintains a central role of the system and the degree of trust with the system is maintained. influenced by the human operator, and the interaction between these three elements are continuously modulated to meet specific objectives.
A key feature of the CHMS, initially described in [4,5], is the real-time physiological sensing of the human operator to infer cognitive states that drive system adaptation. In its fundamental form, the CHMS framework can be depicted as a negative feedback loop as seen in Figure 1 below. Here, MWL is used as the reference for modulating the automation support and interface, where the resulting MWL for the human operator is a function of the task load (i.e., the number of tasks and/or task complexity) and the operators' endogenous factors (i.e., expertise, time pressure, etc.). Hence, when the workload is measured to increase or decrease beyond the specified thresholds, the adaptation module is activated to modulate the operators' task load, which can be done by changing the automation level, task scheduling and/or changing the interface. The operation of CHMS is expected to provide benefits for several aerospace areas apart from OTM UAS operations including ATM [47], Urban Traffic Management (UTM) [48] and Single Pilot Operation (SPO) [5,7]. The operation of the CHMS in all these applications will support the systems to operate at higher levels of autonomy while ensuring that the human operator maintains a central role of the system and the degree of trust with the system is maintained. The CHMS has parallels to a passive Brain Computer Interface (pBCI) [49], however CHMS further expands on pBCI by implementing other physiological parameters apart from brain signal processing and additionally incorporates external environmental/operational factors for estimating the cognitive states. The more detailed CHMS concept is depicted in Figure 2 and requires the adoption of three fundamental modules: sensing, estimation and adaptation. The sensing module includes two sensor networks including the sensors for measuring physiological and external conditions. The physiological sensors include various advanced wearable and remote sensors, such as the EEG and eye tracker. The other network includes for example sensors for measuring weather and measurements about the flight phase. The collected data is then passed to the estimation module, where the data from the networks are passed to respective inference models. This is then combined to make a final estimation of the different levels of the cognitive states. The estimated cognitive states are then compared with the reference cognitive states, and the deviation from these predefined references is what drives the adaptation module, which includes changing the AL, task scheduling, the interface and/or the alerting mode. These alterations thus modify what information and tasks are presented to the human operator, which again alters the cognitive states of the human operator, and the cycle then continues.
Before full implementation of a CHMS in future operational use, an initial training/calibration phase would need to be performed to calibrate the estimation module by generating and validating a cognitive state model of the human operator. Such a calibration phase will define the baseline and thresholds of the cognitive states, which will serve as the reference cognitive state conditions for comparison with the operationally collected and estimated data. The inference method adopted for the CHMS estimation module can include various AI methods, where supervised ML models are among the most promising approaches [12]. With such a method however, the calibration phase The CHMS has parallels to a passive Brain Computer Interface (pBCI) [49], however CHMS further expands on pBCI by implementing other physiological parameters apart from brain signal processing and additionally incorporates external environmental/operational factors for estimating the cognitive states. The more detailed CHMS concept is depicted in Figure 2 and requires the adoption of three fundamental modules: sensing, estimation and adaptation. The sensing module includes two sensor networks including the sensors for measuring physiological and external conditions. The physiological sensors include various advanced wearable and remote sensors, such as the EEG and eye tracker. The other network includes for example sensors for measuring weather and measurements about the flight phase. The collected data is then passed to the estimation module, where the data from the networks are passed to respective inference models. This is then combined to make a final estimation of the different levels of the cognitive states. The estimated cognitive states are then compared with the reference cognitive states, and the deviation from these predefined references is what drives the adaptation module, which includes changing the AL, task scheduling, the interface and/or the alerting mode. These alterations thus modify what information and tasks are presented to the human operator, which again alters the cognitive states of the human operator, and the cycle then continues. resolution, etc.) and sampling frequencies of each sensor. As such a sensor network optimisation scheme is key when designing a reliable CHMS [3]. The adoption of sensor networks is both a natural and necessary evolution to effectively exchange, synchronise and process measurement data within a customisable operational network architecture. In addition, a sensor network is natively suited to exploit data fusion of the physiological measurements to increase the overall inference accuracy and reliability of the estimation module. The remaining sections of this paper outlines the materials and method, results, discussion and conclusion. In section two, materials and methods for this study are described and include details on the task scenario as well as the methods for the post processing analysis implemented. The following section are the results, which comprises of two parts. The first part presents a statistical comparison between the mission phases (Phase 1, 2 and 3) for all the MWL measures, including subjective, performance and physiological measures. The second part of the results section provides a correlation analysis of the continuous physiological measures (EEG and eye tracking) and the continuous performance measures. Furthermore, a method for fusing the physiological measures is implemented and analyzed. Lastly, the results are discussed, before a conclusion is drawn in section five. Before full implementation of a CHMS in future operational use, an initial training/calibration phase would need to be performed to calibrate the estimation module by generating and validating a cognitive state model of the human operator. Such a calibration phase will define the baseline and thresholds of the cognitive states, which will serve as the reference cognitive state conditions for comparison with the operationally collected and estimated data. The inference method adopted for the CHMS estimation module can include various AI methods, where supervised ML models are among the most promising approaches [12]. With such a method however, the calibration phase should be conducted using additional objective measures, such as secondary task performance, task complexity (determined analytically prior to the experiment) and/or controller inputs, which will serve as data labels for model training/calibration.
As mentioned above, the various physiological sensors and their biological processes are prone to distinctly different disturbances. Although multiple sensors are needed to improve reliability, some challenges arise with this including different measurement performance (e.g., accuracy, resolution, etc.) and sampling frequencies of each sensor. As such a sensor network optimisation scheme is key when designing a reliable CHMS [3]. The adoption of sensor networks is both a natural and necessary evolution to effectively exchange, synchronise and process measurement data within a customisable operational network architecture. In addition, a sensor network is natively suited to exploit data fusion of the physiological measurements to increase the overall inference accuracy and reliability of the estimation module.
The remaining sections of this paper outlines the materials and method, results, discussion and conclusion. In Section 2, materials and methods for this study are described and include details on the task scenario as well as the methods for the post processing analysis implemented. The following section Sensors 2020, 20, 5467 7 of 21 are the results, which comprises of two parts. The first part presents a statistical comparison between the mission phases (Phase 1, 2 and 3) for all the MWL measures, including subjective, performance and physiological measures. The second part of the results section provides a correlation analysis of the continuous physiological measures (EEG and eye tracking) and the continuous performance measures. Furthermore, a method for fusing the physiological measures is implemented and analyzed. Lastly, the results are discussed, before a conclusion is drawn in Section 5.
Participants
There were five participants that took part in the experiment comprising of four males and one female. The participants were aerospace students at Royal Melbourne Institute of Technology (RMIT) University and were selected based on their prior experience in aviation and aerospace engineering. None of the participants had prior experience with this OTM scenario, and as such two different familiarization sessions were conducted lasting around an hour each. All participants volunteered for the experiment and were not paid. Informed verbal consent was given prior to the experiment. The corresponding ethics approval code for this research is ASEHAPP 72-16.
Experimental Procedure
The experimental procedure consisted of a briefing, sensor fitting and a rest period, followed by the mission. After the mission was completed, there was a second rest period before a final debrief. The whole procedure took approximately one hour. The refresher briefing was conducted to ensure that participants were familiar with the scenario and the interface. Following that, participants were fitted with the EEG device and the EEG electrodes impedances were checked to ensure they were within acceptable levels, this was then followed by a calibration of the desk-mounted eye tracker. Once both sensors were set-up, physiological data recording started, and data was logged for 5-min while the participant rested. After the resting phase the OTM UAS wildfire scenario commenced, which consisted of three back-to-back 10-min phases designed to provide increasing levels of difficulty. At the end of the scenario, physiological data was logged for another 5-min during a post-mission resting phase. Subsequently, participants provided subjective ratings for their workload and situational awareness in each of the three phases.
Mission Concept
For this scenario the test subjects assume the role of a UAS ground operator tasked with coordinating the actions of multiple UAVs in a wildfire surveillance mission. The primary objective of the mission is to find and localize any wildfires within the Area of Responsibility (AOR). The secondary objectives are to firstly maximize the search area coverage, and secondly to ensure that the UAV fuel levels, as well as navigation and communication (comm) performance are within a serviceable range. Further details about the mission objectives are provided in Table 1.
The sensor payload of the UAV comprises of an active sensor (lidar) and a passive sensor (Infrared (IR) camera). UAVs can be equipped with either one of the two sensors or both sensors. The lidar provides an excellent range but a narrow field of view. To operate the lidar, it must be fired towards a ground receiver to measure the CO 2 concentration of the surrounding atmosphere (i.e., the mean column concentration of CO 2 ), and areas with excessive CO 2 concentration are likely to contain wildfires. There are a limited number of ground receivers within the AOR, which constrains the search area of the lidar. On the other hand, the infrared camera possesses a smaller range but has a larger field of view. Unlike the lidar, the camera does not require the use of a ground receiver and can be used anywhere within the AOR.
The AOR is divided into smaller regions called Team Areas, which can then be assigned to UAV Teams. The division of the AOR into smaller regions allows UAVs to bound from area to area, initially conducting the search in the area closest to the base before searching further out. The concept for this is illustrated in Figure 3 with the AOR denoted in white borders while the Team Areas are depicted as convex polygons of different colors. In Phase 1 of the scenario, 3 UAVs are made available to the human operator to search the area closest to the base (Team Area 1). After the Area has been searched, or when the mission transits to Phase 2 (whichever occurs first), the operator will direct the initial UAVs, originally in Team 1, to move to Area 2 in order to allow the new UAVs to take over the coverage of Area 1. After Area 2 has been searched, the human operator repeats the same strategy with Area 3, moving the UAVs originally in Area 2 into Area 3 and the UAVs originally in Area 1 into Area 2. UAVs assigned to search an area should be assigned to the team associated with that area (i.e., Team 1 for Area 1, Team 2 for Area 2, etc.), as the team structure allows operators to exploit some built-in automation support such as search area designation, path planning and platform allocation. For further detail on the concept of operations and task analysis see the following references [50,51]. Depending on how the scenario evolves the MWL profile during this mission can be different between one participant to another. Nonetheless, although a simpler scenario can generate a more repeatable MWL profile, a more realistic scenario was used to evaluate the feasibility of the OTM concept and to allow for known physiological measures to be tested on a realistic application. Repeatability was maximized by carefully controlling independent variables such as the number of UAVs being controlled and the geographic extent of the AOR over each phase of the mission.
Secondary Task Index
A task index was used to provide an additional objective and continuous measure of MWL during the scenario. The main purpose of the task index was to assess the secondary task performance Depending on how the scenario evolves the MWL profile during this mission can be different between one participant to another. Nonetheless, although a simpler scenario can generate a more repeatable MWL profile, a more realistic scenario was used to evaluate the feasibility of the OTM concept and to allow for known physiological measures to be tested on a realistic application. Repeatability was maximized by carefully controlling independent variables such as the number of UAVs being controlled and the geographic extent of the AOR over each phase of the mission.
Secondary Task Index
A task index was used to provide an additional objective and continuous measure of MWL during the scenario. The main purpose of the task index was to assess the secondary task performance of the participant by providing a weighted count of the number of pending secondary tasks (i.e., system maintenance tasks). The number of pending tasks was calculated from the UAV flight logs as detailed in Table 2 below. Each UAV can thus have up to 6 points at any given time indicating a high level of unsatisfactory secondary task performance. Table 2. Task index calculation for each UAV.
Pending Secondary Tasks Penalty
Poor navigation performance (accuracy above 25 m) +1 Adequate navigation performance (accuracy between 10 and 25 m) +0.5 Excellent navigation performance (accuracy below 10 m) +0 Poor communication performance (comm strength below 50%) +1 Adequate communication performance (comm strength between 50% and 70%) +0.5 Excellent communication performance (comm above 70%) +0 Critically low fuel (fuel needed to return to base less than 1.5× of fuel on board) +1 Low fuel (fuel needed to return to base between 1.5× and 2× of fuel on board) +0.5 Adequate fuel (fuel needed to return to base more than 2× of fuel on board) +0 Autopilot mode in hold +1 Autopilot mode off +0 UAV not assigned into a team +1 UAV is assigned into a team +0 UAV does not have any sensors active +1 UAV does have sensors active +0
Eye Tracker Equipment and Data Processing
The eye tracking data was collected using the Gazepoint GP3, which is a remote sensor positioned at the base of the monitor about 65 cm away from the participant. The raw eye tracking data comprises of the x and y coordinates of the gaze point and the blink rate. The system is setup to take the average x and y coordinates from the left and right pupil. If one pupil is not detected the system takes the x and y coordinates of the remaining pupil. If both are not available an invalid data point is recorded which will not be included in the data analysis. To allow for real-time processing of the scenario parameters and the processing of the eye tracking measurements, all eye-tracking data was routed to a central server. Besides eye tracking data, the server also collects and processes the flight logs of each UAV, each including the position, attitude, task type, autopilot mode, automation mode and performance of the different subsystems. During the scenario, the raw eye tracking data was processed by the server to derive other real-time metrics, including: dwell time on UAVs and UAV teams, attention on UAVs and UAV teams, along with UAV and team visual entropies, calculated from separate transition matrices of UAVs and UAV teams. However, visual entropy for UAVs gave the best indication of workload and was thus solely used for further analysis.
The visual entropy (H) is determined from gaze transitions between different Regions of Interest (ROIs), which are typically represented in a matrix. The cells represent the number (or probability) of transitions between two interfaces. The visual entropy measures the randomness of the scanning patterns, and is given by [45]: where n and m are the rows and columns of the transition matrix respectively, p Y ij X i is the probability of fixation of the present state (i.e., fixation at region Y ij given previous fixation at region X i ) and p(X i ) is the probability of fixation of the prior state (i.e., probability of the previous fixation). A high value of H implies high randomness in the scan path while a low value of H implies an orderly scan pattern, therefore higher values of H indicate periods of higher workload where the operator is unable to maintain a regular scan pattern.
EEG Equipment and Data Processing
For performing the EEG recordings during the experiment, the actiCAP Xpress, from Brain Products GmbH was used. The EEG device utilizes low impedance gold-plated electrodes, which are meant to optimize the connectivity, thus reducing the need for electrode gel. However, from observation it was found that electrode gel was needed to obtain a clear signal. Moreover, the cap is combined with the V-Amp amplifier, and the software Brain Vision Recorder, which is used for visualizing and storing the raw EEG data. The layout of the cap follows the international 10-20 system, where 16 data electrodes were collecting data at the locations F4, Fz, F3, FC1, FC2, C3, C4, CP1, CP2, T7, T8, P3, Pz, P4, O1 and O2. The active reference electrode and passive ground electrode are placed on the earlobes of the participant. While being fitted with the EEG the minimum impedance accepted was below 5 kΩ. To achieve this the unsatisfactory electrode was either jiggled or alcohol and/or gel was applied to the area. The resulting EEG index is as described in the equation below, were it is calculated at 5 s intervals: here θ F4+C4 refers to the average theta power for electrode positions F4 and C4, while α O1+O2 average alpha power for positions O1 and O2. This was achieved by initially processing the individual channels with a bandpass filter between 0.5 and 30 Hz. A five second sample window was then applied for each channel to obtain fixed-length signal samples, which are then preprocessed by applying linear detrending. The Power Spectral Density (PSD) of the filtered sample window is then obtained and then the respective bands are integrated to determine the band power. Once all channels have been processed, the band powers of the respective channels are averaged and then divided to derive the EEG index. After the EEG index was calculated for all the 5 s intervals additional smoothing was performed prior to the data analysis. This was done using a lowpass filter and highlighted the predominant trends in the data. For the EEG data processing, additional data rejection criteria was included where data identified as outliers were removed. Here the isoutlier function in MATBLAB was used, which returned true for all elements that were more than 3 standard deviation from the mean. The function was performed following the calculation of the EEG index. Among the data for all the participants there were identified outliers for one participant. Here 5 outliers were detected, which were then replaced with mean values.
Controller Input Processing
During the scenario the subject controlled and navigated the application by clicking on the screen with the left and right mouse buttons. The mouse clicks were logged by the central server and total number of controller inputs (number of left and right clicks) was counted during 2-min intervals. Additional processing was obtained to discriminate between command inputs and panning/zooming inputs, however these results are not presented here.
Data Analysis
For data analysis the multiple one-way Analyses of Variance (ANOVA) and Pearson correlation coefficient was carried out on the processed data. A 5% significance level was used for all the statistical tests.
ANOVA Analysis
Multiple one-way Analyses of Variance were carried out to determine the statistical significance of the dependent measures in the different phases of the test scenario. The dependent measures comprised of a subjective questionnaire, physiological measures and performance measures. Physiological features and task performance measures were post-processed to obtain the normalized mean values for each participant in each phase of the test, comprising of five phases: Pre-rest, Phase 1, Phase 2, Phase 3 and Post-rest. Values were normalized using the data collected from all five phases of a participants' dataset and were centered to have a mean of 0 and scaled to provide a standard deviation of 1. Additionally, the Tukey's test was further implemented to identify what groups are significantly different from one another.
Correlation between Features
To investigate the linear relationship between the features, the Pearson Correlation Coefficient (CC) was calculated from all combinations of the different measurements. Equation (3) outlines the correlation coefficient: here n is the number of data points while x and y are the two respective features being analyzed. For each participant, pairwise correlations between six features were calculated. These six features include the EEG index, visual entropy, task index, control inputs, fused physiological measure and fused objective measure. The fused physiological measure was made up of a weighted sum of the visual entropy and EEG index while the fused objective measure was made up of a weighted sum of the task index and control inputs. Three different sets of weights were explored including 50/50, 70/30 and 30/70. As each participant had an individual correlation coefficient value for each feature-pair, a single value was obtained by determining the mean and standard deviation of that feature-pair across all participants.
ANOVA Analysis
The ANOVA analysis was conducted to determine if there were significant differences in both dependent measures across the different mission phases, giving an insight into the experimental design of the scenario and if the results are in fact suitable for further analysis and implementation. The dependent measures included in the ANOVA analysis comprised of subjective ratings, physiological features and performance measures. The subjective ratings included the mental workload rating and the situational awareness rating for each mission phase. The performance measures included the average task index value and controller input count across each phase, while the physiological measures included the average value of EEG index and visual entropy in each phase. The EEG index measurement was performed during the pre-and post-mission resting stages and was thus analyzed for all five phases as well as just the three mission phases. The results of the ANOVA analysis are summarized in Table 3 below.
Task Index and Controller Input
The ANOVA test performed on the task index showed it to be significant, with F(2,12) = 88.47, p = 6.56 × 10 −6 , see Table 3 and Figure 5a ANOVA showed the effect of controller input to be significant, with F(2,12) = 22.1, p = 9.47 × 10 −5 , see Table 3 and Figure 5b. Post hoc comparison using the TUKEY HSD test showed that Phase 1 (M = − 0.594, SD = 0.110) was significantly different from both Phases 2 and 3, while controller input in Phases 2 and 3 were not significantly different from each other. The lack of significant difference in Phases 2 and 3 implies that the control input count might not be a suitable proxy for MWL at medium to high workload levels since it tended to saturate at these stages, leading to decreased sensitivity. Performing the ANOVA test for the subjective situational awareness rating demonstrated that the SA rating was significant F(2,12) = 25.82, p = 4.49 × 10 −5 , see Table 3 and Figure 4b. Post hoc comparison using the Tukey HSD test showed that all 3 groups were significantly different from one another, Phase 1 (M = 9.6, SD = 0.555), Phase 2 (M = 6.2, SD = 0.555) and Phase 3 (M = 4, SD = 0.555).
These results indicate that the experimental design for the mission scenario was successful in increasing the task load and mission complexity across the three mission phases, as indicated by an increasing MWL and decreasing SA. Although the subjective measures are prone to bias and the measures are infrequent, these results could serve as an additional reference for the physiological measure and are useful for comparison between the different ANOVA analyses.
Task Index and Controller Input
The ANOVA test performed on the task index showed it to be significant, with F(2,12) = 88.47, p = 6.56 × 10 −6 , see Table 3 and also supported the comparison with the subjective MWL measures that similarly increases between the phases.
ANOVA showed the effect of controller input to be significant, with F(2,12) = 22.1, p = 9.47 × 10 −5 , see Table 3 and Figure 5b. Post hoc comparison using the TUKEY HSD test showed that Phase 1 (M = − 0.594, SD = 0.110) was significantly different from both Phases 2 and 3, while controller input in Phases 2 and 3 were not significantly different from each other. The lack of significant difference in Phases 2 and 3 implies that the control input count might not be a suitable proxy for MWL at medium to high workload levels since it tended to saturate at these stages, leading to decreased sensitivity. ANOVA showed the effect of controller input to be significant, with F(2,12) = 22.1, p = 9.47 × 10 −5 , see Table 3 and Figure 5b. Post hoc comparison using the TUKEY HSD test showed that Phase 1 (M = − 0.594, SD = 0.110) was significantly different from both Phases 2 and 3, while controller input in Phases 2 and 3 were not significantly different from each other. The lack of significant difference in Phases 2 and 3 implies that the control input count might not be a suitable proxy for MWL at medium to high workload levels since it tended to saturate at these stages, leading to decreased sensitivity.
Physiological Measures
ANOVA test showed visual entropy to be significant, F(2,12) = 34.54, p = 1.05 × 10 -5 , see Table 3 and Figure 6a. Further post hoc comparison using the Tukey HSD test showed that Phase 1 (M = − 0.903, SD = 0.136) was significantly different from Phase 2 and 3, while the visual entropy in Phase 2 and 3 was not significantly different from each other. Although the means were increasing in line with subjective MWL measure and the task index measure, these were not statistically significant. One reason for the lack of statistical difference is that the visual entropy measure loses sensitivity between the medium and high workload.
For the EEG index the ANOVA analysis was performed on both the mission Phases 1, 2 and 3, as well as the performing the test on all five phases, where Pre-and Post-rest Phases were included. For the mission Phases 1-3 only, the ANOVA showed that the EEG index was significant, F(2,12) = 19.57, p = 0.0002, see Table 3 and Figure 6b. Further post hoc comparison using the Tukey HSD test showed that all three groups were significantly different from one another, Phase 1 (M = − 0.340, SD = 0.108), Phase 2 (M = 0.198, SD = 0.108) and Phase 3 (M = 0.612, SD = 0.108). For this analysis these results were the best for the physiological measures as all three groups were significantly different from one another. Moreover, this is comparable with the analysis on the subjective MWL measure and the task index measure.
For the ANOVA test performed on the full experiment length showed that the EEG index was significant, F(4,20) = 16.44, p = 4.11 × 10 −6 , see Table 3 For these results the expected response would be that the Pre-and Post-resting Phases are similar (or not statistically different), while Phases 1, 2 and 3 are different. Nonetheless, with the exception of the Post-resting Phase, the means of the phases were statistically different. This can be seen when performing the ANOVA and TUKEY test, while excluding the Post-rest Phase. This could be a consequence of the notion that the protocol for post resting was not well enough enforced. This analysis indicates that EEG index could further discriminate between a Pre-resting Phase and the mission Phases 1-3.
was not significantly different from each other. Although the means were increasing in line with subjective MWL measure and the task index measure, these were not statistically significant. One reason for the lack of statistical difference is that the visual entropy measure loses sensitivity between the medium and high workload.
For the EEG index the ANOVA analysis was performed on both the mission Phases 1, 2 and 3, as well as the performing the test on all five phases, where Pre-and Post-rest Phases were included. For the mission Phases 1-3 only, the ANOVA showed that the EEG index was significant, F(2,12) = 19.57, p = 0.0002, see Table 3 and Figure 6b. Further post hoc comparison using the Tukey HSD test showed that all three groups were significantly different from one another, Phase 1 (M = − 0.340, SD = 0.108), Phase 2 (M = 0.198, SD = 0.108) and Phase 3 (M = 0.612, SD = 0.108). For this analysis these results were the best for the physiological measures as all three groups were significantly different from one another. Moreover, this is comparable with the analysis on the subjective MWL measure and the task index measure. For the ANOVA test performed on the full experiment length showed that the EEG index was significant, F(4,20) = 16.44, p = 4.11 × 10 −6 , see Table 3 and Figure 7. Furthermore, the Tukey HSD test showed that the Pre-rest Phase (M = − 1.01, SD = 0.153) was significantly different from the four other groups. Phase 1 (M = − 0.34, SD = 0.153) was significantly different from Phase 3 and the Pre-rest Phase, while Phase 2 (M = 0.19, SD = 0.153) and Post-rest Phase (M = 0.15, SD = 0.153) were only significantly different to the Pre-rest Phase. Lastly, Phase 3 (M = 0.61, SD = 0.153) was significantly different to Phase 1 and the Pre-rest Phase. For these results the expected response would be that the Pre-and Post-resting Phases are similar (or not statistically different), while Phases 1, 2 and 3 are different. Nonetheless, with the exception of the Post-resting Phase, the means of the phases were statistically different. This can be seen when performing the ANOVA and TUKEY test, while excluding the Post-rest Phase. This could be a consequence of the notion that the protocol for post resting was not well enough enforced. This analysis indicates that EEG index could further discriminate between a Pre-resting Phase and the mission Phases 1-3. The ANOVA results show that controller input and visual entropy analysis can both discriminate Phase 1 from Phases 2 and 3 but failed to be statistically significant between Phases 2 and Phase 3. On the other hand, the subjective MWL measure, task index measure and the EEG index measure show greater statistical significance. These three measures show a similar effect of an increasing mean across the three mission phases, strongly corroborating the three different MWL measures (subjective, task-based and physiological) and showing that the experimental results were in-line with expectations.
Correlation Between Features
Further results include the correlation between the time series of the different features. Figure 8 plots the results for one participant and shows the comparison between the task index (blue) and the two physiological measures visual entropy (yellow) and EEG index (red). The x axis is time in The ANOVA results show that controller input and visual entropy analysis can both discriminate Phase 1 from Phases 2 and 3 but failed to be statistically significant between Phases 2 and Phase 3. On the other hand, the subjective MWL measure, task index measure and the EEG index measure show greater statistical significance. These three measures show a similar effect of an increasing mean across the three mission phases, strongly corroborating the three different MWL measures (subjective, task-based and physiological) and showing that the experimental results were in-line with expectations.
Correlation Between Features
Further results include the correlation between the time series of the different features. Figure 8 plots the results for one participant and shows the comparison between the task index (blue) and the two physiological measures visual entropy (yellow) and EEG index (red). The x axis is time in seconds, while the values are normalized between 0 and 1 for visual comparison and statistical analysis. and Phase 3. On the other hand, the subjective MWL measure, task index measure and the EEG index measure show greater statistical significance. These three measures show a similar effect of an increasing mean across the three mission phases, strongly corroborating the three different MWL measures (subjective, task-based and physiological) and showing that the experimental results were in-line with expectations.
Correlation Between Features
Further results include the correlation between the time series of the different features. Figure 8 plots the results for one participant and shows the comparison between the task index (blue) and the two physiological measures visual entropy (yellow) and EEG index (red). The x axis is time in seconds, while the values are normalized between 0 and 1 for visual comparison and statistical analysis. Table 4 summarizes the correlation coefficient values of the most notable features for each participant. These include the correlation between the task index and (1) the EEG index, (2) visual entropy and (3) the fused weighted sum of the two physiological measurements (weighted 50% each). Additionally, the correlation between the two physiological measurements, EEG index and visual entropy, were compared for all participants. As fulfilled by the data rejection criteria a section of the eye tracking data for participant 2 was excluded from the analysis and excluding this invalid data did improve the pairwise correlation. Table 5 presents the pairwise correlation coefficient values in the matrix form. The values were combined across all participants by taking the mean and standard deviation. The results indicate that there was no correlation between the control input and other features. However, the mean for all the participants shows that the correlation between the task index and fused physiological feature (a 50-50 weighted sum of the EEG index and visual entropy) was highest at CC = 0.726 ± 0.14. The second highest correlation was between the task index and the visual entropy with a CC = 0.648 ± 0.19. The mean correlation between EEG index and the task index gave CC = 0.628 ± 0.17, while the mean correlation between the EEG index and visual entropy was CC = 0.561 ± 0.11. Further analysis to explore the effects of differently weighted ratios showed that when weighting the visual entropy measurement 30% and the EEG index 70% the correlation with the task index and the fused sensors gave CC = 0.710 ± 0.16. When weighting the visual entropy measurement 70% and the EEG index 30% the correlation coefficient was CC = 0.710 ± 0.14.
The correlation of the time series for the different features show that no or poor correlation between the control input and other features was found. However, a good correlation was found between the task index and the fused sensor measurements as well as between the task index and the EEG index/Visual entropy. In addition, the correlation between the two physiological measurements was shown to be good. Weighting the physiological features 70/30 or 30/70 did not have much effect on the result as they remained strong.
Discussion
This study provided insight into the relationship between physiological and objective measures in a OTM UAS operation. However, in addition to this a number of useful insights were provided into the role of automation support in a multi-UAS context. The ground operators' main responsibilities included routine monitoring of UAV system health, analysing sensor data and strategically ensuring that resources were appropriately allocated within the AOR when planning UAV sorties or retasking individual UAVs. While the scenario was relatively manageable when participants were controlling three UAVs, they found it more challenging in the later phases when controlling more than six UAVs. Mission complexity was generally observed to scale exponentially with the number of UAVs, primarily due to the exponentially increasing number of interactions between different platforms in addition to the linearly increasing number of system monitoring and sensor analysis tasks. In this context, the automation support provided was aimed to reduce scenario complexity by taking over some of the tasks associated with managing the interactions between platforms. This was achieved by the UAV Team concept where UAVs were grouped into teams, allowing participants to stay 'on-the-loop' by managing the behaviour of UAV teams instead of remaining 'in-the-loop' by individually micromanaging each UAV. This behaviour was evident during the experiment, as participants tended to maintain better situational awareness when managing UAVs in teams. It was also observed that participants preferred to micromanage a small number of UAVs in the initial phase of the scenario but switched to team management in the latter two phases. Participants who did not make the switch to team management provided feedback that they did not trust the automation support as it was not sufficiently transparent or reliable. Another important observation was that even under team management mode, participants were still required to allocate significant attentional resources to micromanaging individual UAVs at specific instances in the mission (e.g., when troubleshooting system health, retasking the UAV or manually controlling the sensor to localize a fire), effectively transitioning from 'on-the-loop' command to 'in-the-loop' control. It was however observed that participants sometimes failed to assume direct control of UAVs when appropriate (e.g., when user input was required to resolve an issue with the system health), either because they were focused on another task, were overwhelmed by the amount of information/pending tasks that they overlooked the particular UAV, or because they assumed that automation support was capable of resolving the issue. As such the development towards adaptive interfaces are expected to support better transitions between 'on-the-loop' and 'in-the-loop' command, as it is able to infer the users' workload, intention and allocation of attentional resources and subsequently vary the amount of on-screen information to ensure smoother transitions.
As for the statistical analysis in this study, the ANOVA and correlation coefficient were both used to highlight two different factors. The ANOVA test was performed in the initial analysis and served to determine the validity of the experimental design as well as to get an idea of the average values of each measure across the different scenario phases. Following the ANOVA, a more detailed comparison of the time series data was carried out by evaluating the pairwise correlation coefficients between the various performance and physiological measures.
The results for the ANOVA analysis showed that all the measures were statistically significant, however a further Tukey test demonstrated that measures with all three scenario phases statistically differentiating from one another included the subjective responses for MWL and SA, as well as the task index and EEG index. As for the control input count, the results indicated that implementing this as an objective measure of MWL may not be a viable option. However, it can be noted that as the scenario is designed to push MWL to the limit it could be observed that the control input count saturated with high load and lost sensitivity between Phases 2 and 3. This means that further work remains to determine whether there are correlations between physiological measures and control input count to a system. As for the visual entropy, although not statistically different for all phases, the data was invalid for one participant during an extended period of the experiment. This occurred when the participant moved out of range of the camera, causing the eye tracker to lose track of the participants' pupil. Further investigating the CC results of that eye tracking measure showed that when excluding the section where the data was lost (at the start of Phase 3) and then correlating with the other measures the correlation improved. This highlights the importance of having at least two physiological sensors implemented in a CHMS system, since physiological observables can be particularly affected by noise, motion artifacts or susceptible to interference due to participant movement. Multiple sensors can additionally increase the consistency of measurements and reliability of the system. Performing the ANOVA test and corresponding Tukey test demonstrated firstly that the subjective workload and situational awareness ratings, which serves as the best approximation to the ground truth, was consistent with the results for the task index and the EEG index during Phases 1, 2 and 3. Although the task index and EEG index values were averaged across each of the three 10-min phases, it provided an initial assessment to determine what measures are suitable for further analysis.
The correlation between the time series measures using the CC demonstrated how the various performance and physiological measures compared in a complex OTM UAS task scenario. While subjective ratings are currently the best approximation to ground truth, these can only be taken after extended periods of time (e.g., at the end of each phase). However, the actual workload and situational awareness of the participant can fluctuate significantly throughout each of the 10-min phases. For example, sudden spikes in the task index were observed at the start of each phase for most participants since this was a period where new UAVs were released. The task index was thereafter observed to decrease or stabilize and only peak when the participant experienced increased load such as when localizing fires or retasking UAVs. These fluctuations in mission difficulty within each phase cannot be captured by subjective questionnaires. When comparing the task index with the EEG index and visual entropy, a relatively high correlation was expected, as they are supposed to fundamentally measure the similar variation in MWL. The difference being that the EEG index and visual entropy are physiological measures, while the task index is a task-based performance measure. Looking at Figure 8, the graphical comparison illustrated that visual entropy correlated with the task index in certain regions where the EEG index does not, and vice versa. Showing that the two physiological measures, although both gradually increasing, respond differently to the task demand of the scenario in short timeframes. The weighted sum of the two physiological measures (a 50-50 weighted sum of the EEG index and visual entropy) demonstrated a higher correlation with the task index (CC = 0.726 ± 0.14) than each individual physiological measure. This further demonstrates the importance of having more physiological sensor measurements and fusing methods when performing measurements and estimations on MWL in a fully operational CHMS system. Different weighted sums were also explored including weighting the visual entropy measurement and the EEG index 30-70% and 70-30% respectively. However, changing the weights did not show much effect. This can potentially be improved with an optimal weighting strategy that is unique for each individual subject.
The concluding results thus show that a moderate level of correlation was found across all participants between the task index and EEG index CC = 0.628 ± 0.19, as well as task index and visual entropy CC = 0.648 ± 0.17. Additionally, a fusing method demonstrated that fusing the physiological measures produced an improved and a high-level correlation CC = 0.726 ± 0.14. These results indicate that the physiological response of MWL for EEG and eye tracking are consistent with previous studies. This includes the EEGs observation of fluctuation in theta power in frontal and central regions and alpha power fluctuation in parential and occipital regions during increased mental task demand [35,37,41]. Similarly, visual entropy has been shown to correlate with higher mental demand [45,46]. Nonetheless, the measures of the physiological response of MWL were conducted on a new type of mission scenario. The mission specific task index was also introduced to provide an additional baseline for comparing the EEG and visual entropy measures. Henceforth, the significance of this study is the verification of established physiological response measures of MWL, including EEG and eye tracking, as well as the relationship between the physiological and objective measures in a complex OTM UAS wildfire detection scenario. The verification of a multi-sensor fusion method additionally demonstrates that the approach can improve the reliability of cognitive state measurements. Moreover, the demonstration of a highly correlated objective measure can provide useful for potential use as labels for the physiological data when implementing AI techniques such as supervised ML models.
Future research includes exploring different data fusion techniques including further testing an optimal weighting strategy that is calibrated for each individual subject. Additional future research includes testing the objective performance measures (i.e., a secondary task performance measure) as labels for AI techniques such as supervised ML models.
Conclusions
Recent developments in avionics hardware and software for Unmanned Aircraft Systems (UASs) are introducing higher levels of intelligence and autonomy, which in turn facilitate the introduction of new advanced mission concepts such as One-to-Many (OTM) UAS operations. However, the effective implementation of OTM operations in current and likely future UAS missions will have to rely on substantial advances in the field of Human-Machine Interfaces and Interactions (HMI2). Particularly as negative effects arise with the increasingly more complex system automation, such as the human operators' loss of situational awareness and the increase/decrease in Mental Workload (MWL). The Cognitive Human Machine System (CHMS) systems presented in this paper implements an innovative Cyber-Physical-Human (CPH) system architecture that incorporates real-time adaptation in response to the mission complexity and the cognitive load (in particular MWL) of the human operator. This includes dynamic adaptation of the Automation Level (AL) and actual command/control interfaces, while maintaining stable MWL and the highest possible level of situational awareness of the human operator. Nonetheless, with physiological measurements the different methods are prone to various internal and external signal disturbances, which means that it is challenging to identify the true signal of interest from the noise. The comparison with other MWL measures, such as subjective questionnaires and objective task performance measures, are important for cross referencing with the physiological measures, in order to verify that they are correctly and accurately measuring MWL. In addition, the monitoring of multiple parameters in a sensor network is required, as well as data fusion methods, to ensure the accuracy and reliability of the MWL estimation. The additional measures are also promising for use as labels in Artificial Intelligence (AI) techniques such as supervised Machine Learning (ML). Although the measurement of the physiological response and inferring cognitive states (with and without system adaptation) was demonstrated in previous studies, there are still significant research gaps, one of which relates to a universally valid method for determining MWL that can be applied to any operational scenario. Henceforth, in this study we tested and analyzed physiological measures of MWL, including EEG and eye tracking, in a complex OTM UAS wildfire detection mission. Additionally, objective measures were explored, including a secondary task performance and controller inputs, in an analytical comparison with the physiological measures. Although subjective measures are the closest to a ground truth, at the moment they only provide a response at infrequent intervals during the mission and cannot capture the detailed MWL variations during the tasks without being disruptive. Lastly, a fusion approach with the physiological measures was performed and correlated with the task index. The results show that the correlation with the physiological measures and the task index were good for both physiological measures, with the strongest result when fusing the two measures. These results demonstrate the ability of measuring MWL in a complex UAS mission and will be used in further developments of the CHMS. | 15,129.8 | 2020-09-23T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Mid-infrared free-electron laser tuned to the amide I band for converting insoluble amyloid-like protein fibrils into the soluble monomeric form
A mid-infrared free-electron laser (FEL) is operated as a pulsed and linearly polarized laser with tunable wavelengths within infrared region. Although the FEL can ablate soft tissues with minimum collateral damage in surgery, the potential of FEL for dissecting protein aggregates is not fully understood. Protein aggregates such as amyloid fibrils are in some cases involved in serious diseases. In our previous study, we showed that amyloid-like lysozyme fibrils could be disaggregated into the native form with FEL irradiation specifically tuned to the amide I band (1,620 cm−1). Here, we show further evidence for the FEL-mediated disaggregation of amyloid-like fibrils using insulin fibrils. Insulin fibrils were prepared in acidic solution and irradiated by the FEL, which was tuned to either 1,620 or 2,000 cm−1 prior to the experiment. The Fourier transform infrared spectroscopy (FT-IR) spectrum after irradiation with the FEL at 1,620 cm−1 indicated that the broad peak (1,630–1,660 cm−1) became almost a single peak (1,652 cm−1), and the β-sheet content was reduced to 25 from 40 % in the fibrils, while that following the irradiation at 2,000 cm−1 remained at 38 %. The Congo Red assay as well as transmission electron microscopy observation confirmed that the number of fibrils was reduced by FEL irradiation at the amide I band. Size-exclusion chromatography analysis indicated that the disaggregated form of fibrils was the monomeric form. These results confirm that FEL irradiation at the amide I band can dissect amyloid-like protein fibrils into the monomeric form in vitro.
Introduction
A mid-infrared free-electron laser (mid-IR FEL) is operated as a pulsed and linearly polarized laser with tunable wavelength, and it can excite specific bonds within the mid-IR region, accounting for its use in ablation of biological tissues as well as thermodynamic analyses of biomolecules [1][2][3][4][5]. In particular, the surgical therapy of pathological tissues can be facilitated by the FEL, since a feature of the FEL is that it causes less thermal collateral damage than other continuous-mode CO 2 and yttrium aluminum garnet lasers [6,7]. Investigators in Vanderbilt University have performed the laser-induced ablation of corneal tissue using Mark-III FEL, observed secondary structural changes and peptide fragmentation of collagen, and investigated the ablation spot size for the mechanism [8][9][10]. Further, at Duke University, the system was applied to the ablation of rat brain and the examination of laser lesions and histological assessment using laser pulses tuned to the -OH, -CH, and amide I and II bands [11]. Recently, alternative laser systems have been developed for the ablation of biological tissues because the FEL is costly and complex [6,12]. In any case, one conclusion from these studies is that FEL irradiation can induce major changes in the higher-order structure of protein matrices. We are attempting to apply the IR free-electron laser at Tokyo University of Science (FEL-TUS) to biomedical techniques and to supply the FEL beam to biomedical users all over the world; as an application example, the amyloid aggregate was targeted [13,14]. Amyloid proteins reported thus far can be roughly divided into two categories (Tables 1 and 2): those that are related to neurodegenerative diseases ( Table 1) and those that are not (Table 2). These tables also list the frequencies of the amide I bands of those amyloid proteins. The former group includes Aβ [15,16], tau protein [17], polyglutamine [18], transthyretin [19], prion protein [20], S100 protein [21], and α-synuclein [22]. The latter group (Table 2) contains lysozyme [23], calcitonin [24], myoglobin [25], insulin [26], and β 2 -microglobulin [27]. Interestingly, the wave numbers of the amide I of such protein aggregates are around 1,610-1,640 cm −1 , while those of globular proteins containing α-helix-rich structures are around 1,650 cm −1 [28]. These red shifts are considered to be caused by the formation of an anti-parallel β-sheet structure during amyloid fibrillation, although the detailed mechanisms of formation and dissociation have not yet been disclosed. Although a relationship between the toxicity and the structural hierarchy has not been known, the amyloid structure is probably intrinsic in all proteins. Previously, we tested the FEL for dissecting the amyloid structure using lysozyme as it was commercially available and found that the β-sheet content decreased during irradiation tuned to the amide I band (1,620 cm −1 ) [14]. This result indicates that the amyloid-like lysozyme fibril is a flexible structure and can be refolded into the native state under appropriate conditions. In contrast, insulin is smaller in size than lysozyme and is barely soluble in neutral pH solution, quite different from the characteristics of lysozyme. In the present study, we tested if the FEL tuned to the amide I band could dissect the insulin fibrils into the native form similar to lysozyme fibrils.
Materials
Phosphotungstic acid and Congo Red were purchased from Sigma-Aldrich (Tokyo, Japan). Acetic acid, human insulin, and sodium chloride (NaCl) were purchased from Wako Pure Chemical Industries (Osaka, Japan). The KBr mini-plate was purchased from Jasco Engineering Co. (Tokyo, Japan).
Mid-infrared free-electron laser facility at the Tokyo University of Science (FEL-TUS)
The FEL-TUS can generate a laser beam using synchrotron radiation as a seed, with a variable wavelength within the midinfrared region of 5-16 μm (625-2,000 cm −1 ) (Fig. 1a). An electron beam generated by a high-frequency RF electron gun (2,856 MHz) is accelerated by a linear accelerator and injected to an undulator (a periodic magnetic field). The electron beam is forced to oscillate in the undulator to generate synchrotron radiation (SR). Light of a specific wavelength satisfying the following equation is amplified by an interaction between the generated SR and the electron beam: In the equation, λ is the FEL wavelength to be amplified, λu is the periodic length of the undulator, γ is proportional to the acceleration energy of the electron beam, and K is proportional to the strength of the periodic magnetic field. The amplified SR is reflected upstream of the electron beam by a mirror equipped downstream of it and is re-reflected at the upstream mirror to interact with the electron beam again, which produces a coherent laser light. FEL-TUS provides two types of laser pulses, macro-pulse and micro-pulse. The macro-pulse has duration of~2 μs and a repetition rate of 5 Hz throughout the operation, consisting of a train of micro-pulses with durations of 2 ps each. The interval between two consecutive micro-pulses is 350 ps. The energy of the laser pulse used for the current experiment was in the range of 6-8 mJ macro-pulse −1 , which could be measured using an energy meter (SOLO2, Gentec-EO Inc., Quebec, Canada).
Preparation of insulin fibrils and irradiation of IR FEL
Insulin powder was dissolved to a concentration of 2.0 mg/mL in H 2 O (1 mL) containing 20 % acetic acid and 0.5 M NaCl and incubated for 20 h at 37°C. The resulting aggregates were precipitated by centrifugation at 14,000 rpm for 15 min at room temperature, washed by the addition of 0.5 mL of distilled water, and then air dried. The insulin fibrils were resuspended in water containing 20 % acetic acid on a glass slide and were irradiated with the output of the FEL tuned to various wavelengths at 37°C. To avoid the vaporization of water, 10 μL of 20 % acetic acid was periodically added freshly to the suspension during irradiation. After the irradiation was completed, the sample on the glass was dried and subjected to various analyses (Fig. 1b).
Fourier transform infrared spectroscopy (FT-IR) FT-IR spectra were recorded on an FT/IR 615 spectrophotometer (Jasco International Co., Ltd., Tokyo, Japan) using a solid KBr mini-plate. The protein sample was mixed with the KBr pellet and a thin plate was prepared, and the measurements were performed using 16 scans at 4-cm −1 resolution. The secondary structures of the insulin samples were estimated using the bundled protein analysis software (IR-SSE; Jasco Co., Ltd.), which was developed for the evaluation of protein conformational changes in biological tissue [29].
Transmission electron microscopy (TEM) Specimens for TEM observation were prepared as follows.
First, 2 μL of each insulin material was deposited onto copper grids (200 mesh; Nisshin EM Co., Ltd, Tokyo, Japan) covered with collodion film hydrophilized by an electric glow discharge. After 30 s of deposition, any excess material was blotted out using a filter paper, followed by two depositionblotting cycles with 20 μL of water and two additional cycles of phosphotungstic acid (25 μL of 1 % w/v). Prior to sample preparation, the staining solution was filtered using a 0.22-μm membrane to remove large crystals. The TEM observation was performed using a Hitachi H-7650 (Tokyo, Japan) at an accelerating voltage of 120 kV.
Congo Red (CR) assay The absorbance peak of CR is known to shift from 490 to 510 nm in the presence of the fibrils [30]. Aliquots of the insulin solution (30 μL) were added to an equivalent volume of the CR solution (0.2 mM in PBS) and incubated for 10 min at room temperature. The resulting absorbance values were obtained from a 400-600-nm scan using a multi-label counter (PerkinElmer, Tokyo, Japan).
Size-exclusion chromatography (SEC)
To detect the insulin monomer, SEC was performed. The gel for SEC (Toyopearl HW-40C from Tosoh Co., Tokyo, Japan) was filled in the column (bed volume 2 mL) and equilibrated with 20 % acetic acid. The molecular weight exclusion limit of the gel was 2.18×10 3 Da according to the certificate of The FEL beam output is transported through the vacuum tube and directed by means of the goldcoated mirror onto the sample. The insulin sample was placed within the circle of the beam spot on the glass slide and irradiated with the FEL. The sample was air dried or redissolved in water and subsequently analyzed by various methods. c Power monitoring during irradiation. The FEL beam was tuned to the amide I band and was directed onto the sample as described earlier. The beam power was measured at 10-min intervals for 60 min. The triangles, squares, and circles represent the first, second, and third experiments, respectively analysis. Protein samples (100 μL) were centrifuged as described above, and the resulting supernatants were loaded onto the column. Elution was performed using the acidic solution, and the protein concentration of each fraction (100 μL) was measured using an ND-1000 Spectrophotometer (NanoDrop Technologies, Inc., Wilmington, DE, USA). Bovine serum albumin (60 kDa) was used as a calibration marker of molecular weight and eluted in fraction nos. 7-8.
Results
A schematic overview of FEL generation is given in Fig. 1a.
Prior to the oscillation of the FEL, the laser was focused right above the sample using a He/Ne beam. The diameter of the laser beam was~0.5 cm. The materials irradiated by the FEL were analyzed by various methods, as shown in Fig. 1b. The samples were dried for FT-IR analyses and redissolved in water for the CR assay, TEM, and SEC. Irradiation by the FEL tuned to the amide I band (1,620 cm −1 ) was monitored using the power meter (Fig. 1c). The measurements were performed three times, and the power values were found to range from 6.0 to 8.0 mJ/macro-pulse, which resulted in 30.6-40.8 mJ/cm 2 on the sample each time. The standard deviation of the power value was about one tenth of the average at each measurement. However, for long-term irradiation (more than 1 h), the power tended to decrease. It can be considered that the decrease is due to a reduction in the acceleration voltage, which can be caused by an increase in the temperature of the apparatus itself during the operation.
Effect of mid-IR FEL irradiation on the disaggregation of insulin fibrils
Insulin fibrils were prepared in an acidic solution containing a high concentration of salt as in a previous study [14]. In Fig. 2a, the FT-IR spectrum of the fibrils displayed a broad peak (1,630-1,660 cm −1 ) at the amide I band (solid line), although the main peak appeared at 1,656 cm −1 in the native state (dashed line). The β-sheet content in the fibrils was estimated to be around 40 %, whereas it was about 10 % in the native state, by secondary structure analysis (Fig. 2b) that had been developed for the observation of molecular changes of necrotic tissues of murine carcinoma [29]. In contrast, αhelix content in the fibrils was 16 % while that in the native state was 24 %. These secondary structural changes are determined to be caused by the formation of intermolecular β-sheet structures and are consistent with previous data on amyloid fibrils (Tables 1 and 2). Next, the fibrils were placed on the glass slide and irradiated with the FEL which was tuned to 1,620 cm −1 . After 1 h of irradiation, the broad peak of the fibrils resolved into almost a single peak at 1,652 cm −1 (dotted line), and the β-sheet content was reduced to 25 % and α-helix content increased to 21 %. In contrast, irradiation tuned to 2,000 cm −1 retained the β-sheet-rich structure of fibrils (βsheet, 38 %; α-helix, 14 %). These results indicate that a conformational change can occur in insulin fibrils during FEL irradiation at the amide I band, and in particular, the effect of the FEL on the secondary structure is dependent on the output frequency. The CR assay was also performed to clarify the effect of the FEL on the dissociation of insulin fibrils into the native form (Fig. 2c). The dye is known to bind to amyloid fibrils, and the absorbance peak shifts from 490-500 to 500-510 nm upon binding [30]. While one peak was observed at 492 nm in the case of native insulin bound to the dye (solid line), the peak was shifted to 510 nm when the fibrils were bound to the dye (dashed line). When the dye was mixed with the fibrils after FEL irradiation at 1,620 cm −1 , the absorbance peak was shifted to near 492 nm, although a slight peak remained at Fig. 2 Structural changes in insulin. a FT-IR spectra before and after FEL irradiation at 1,620 cm −1 for 1 h. The solid line represents the spectrum of the insulin fibrils before irradiation, the dotted line represents that after irradiation, and the dashed line represents the spectrum of native insulin. b Secondary structure analyses. Relative contents were calculated based on the protein analysis software (IR-SSE). Others indicates the disordered region. c CR assay. The solid line represents the spectrum of CR (0.2 mM) with native insulin (ca. 2.0 mg/mL), the dashed line represents that with insulin fibrils, and the dotted line represents that with the fibrils following FEL irradiation 510 nm (dotted line). This result indicates that non-fibrils were more abundant than the fibrils after FEL irradiation.
Disaggregated structure of insulin fibrils
To elucidate the morphology and structure of the disaggregated form of the insulin fibrils, we analyzed the disaggregated insulin fibrils using TEM and SEC (Fig. 3). Insulin fibrils were prepared and disaggregated on the glass slide as described above (original protein concentration, 2.0 mg/mL in 20 % acetic acid). As shown in Fig. 3a, several thin strings were observed. Each string appears as a helical structure rather than a straight line. The lengths of the fibrils were 100-300 nm and their widths were about 10 nm. These thin strings decreased substantially after disaggregation (Fig. 3b). While short-length helical strings remained, long helical fibrils disappeared. These TEM observations support our observation that the fibril structure is converted into the non-aggregated form by the FEL. The disaggregated material was next analyzed by SEC (Fig. 3c). In this chromatography system, a standard sample of insulin monomer was eluted in fraction no. 9 (dotted line). When the supernatant after fibrillation was loaded on the column, no peaks were detected (triangle). On the other hand, when the supernatant of the disaggregated fibrils was loaded, a monomer peak was detected at fraction no. 9 (circle). The extent of recovery was calculated to be about 20 % of the total protein based on the absorbance (1.0 Abs=1.0 mg/mL). Remarkably, no oligomer forms were detected in the column (large proteins such as BSA with molecular weights greater than 10 kDa must be eluted before the insulin peptide). This result indicates that FEL irradiation can dissociate the insulin fibrils into the monomeric form, without producing any high molecular weight oligomers.
Discussion
Mid-infrared free-electron lasers in the biological and medical fields have been used mainly for tissue ablation in surgery [1][2][3][4][5][6][7][8][9][10][11][12]. Although the effect of the FEL on protein structure has been accepted in the course of those studies, detailed conformational changes of protein folding at the sub-nanometer level have not yet been studied. We have demonstrated that the FEL tuned to the amide band can dissect protein aggregates into the monomeric form, and this result indicates that FEL irradiation of the protein affects the protein folding machinery. A common feature of amyloid fibrils is that they are very stable under physiological conditions. In the case of Aβ, the fibrils can be accumulated in the brain tissue of patients with Alzheimer's disease [31]. Although treatment of the amyloid fibrils does not necessarily lead to the direct therapy of the diseases, exploring the structural changes of fibrils into the globular forms is very important to understand the protein folding mechanism. Booth et al. first showed that the amyloid fibrils of lysozyme could be refolded under denaturation conditions [32]. This is in some ways a landmark result because the robust fibrils are flexible and dynamic in solution. In contrast, we found that lysozyme fibrils could be refolded into the native state in salt-free neutral water and that mid-IR FEL irradiation tuned to amide bands could promote the refolding of the fibrils at mild temperatures (37°C) [14]. Using a similar method, we found that insulin fibrils could be refolded into the monomeric form in this study. The refolding mechanism under FEL irradiation is probably different from that resulting from the use of denaturants. It can be estimated that FEL irradiation at the amide band heats the fibrils and the surrounding water, driving the dissociation of the fibrils and refolding them into the native state. We believe that non-covalent bonds between β-sheet structures can be affected by FEL irradiation. Vaporization of water is also suggested to be a driving force for fibril dissociation. However, the refolding efficiency of insulin was lower than that of lysozyme. That is, the β-sheet content of the lysozyme fibrils recovered almost completely to a level similar to that of the native state after FEL irradiation at the amide I band [14], whereas that of insulin fibrils did not fully recover (Fig. 2b). Such a tough structure for the insulin Fig. 3 Disaggregated form of insulin fibrils. a TEM image of insulin fibrils. Bar, 100 nm. b Image of disaggregated insulin fibrils. The insulin fibrils were disaggregated in water containing 20 % acetic acid, dried, and redissolved in water for negative staining. c Size-exclusion chromatography analysis. The insulin sample was fibrillated, and the supernatant after centrifugation was loaded on the gel and eluted (triangle). The fibrils were disaggregated and centrifuged, and the supernatant was loaded (circle). The dotted line indicates the standard insulin monomer (6 kDa) fibrils was also evident from SEC analysis (Fig. 3c). These results confirm that insulin fibrils have a more robust structure in solution than do lysozyme fibrils.
Amyloid fibrils are formed by diverse polypeptides and are deposited in many tissues of various organs during amyloidosis. However, the mechanism by which amyloid fibrils form and the strategy to treat amyloidosis in diseases such as myeloma remain to be established [33]. It can be expected that the above FEL irradiation system should be applied for treatment of those diseases, although the present system requires a mid-scale photon factory. Instead, a more compact irradiation technology such as an endoscope combining an optical fiber, in which oscillation wavelength should be tuned to the mid-infrared amide region, may be necessary for the clinical application. This technological method is now under study.
In conclusion, the above results confirm that the FEL irradiation yields the monomer from amyloid-like protein fibrils. We believe that not only amyloid fibrils but also other protein aggregates in biological system can be altered by FEL irradiation. Protein fibers have high-order structures containing hydrophobic intermolecular clusters and a hydrogen bond network similar to amyloid fibrils. In the future, FEL will be also applied to the disaggregation of various protein fibrils related to several biological phenomena. | 4,769.8 | 2014-04-24T00:00:00.000 | [
"Chemistry",
"Materials Science",
"Medicine"
] |
Mitochondrial DNA Haplogroup JT is Related to Impaired Glycaemic Control and Renal Function in Type 2 Diabetic Patients
The association between mitochondrial DNA (mtDNA) haplogroup and risk of type 2 diabetes (T2D) is undetermined and controversial. This study aims to evaluate the impact of the main mtDNA haplogroups on glycaemic control and renal function in a Spanish population of 303 T2D patients and 153 healthy controls. Anthropometrical and metabolic parameters were assessed and mtDNA haplogroup was determined in each individual. Distribution of the different haplogroups was similar in diabetic and healthy populations and, as expected, T2D patients showed poorer glycaemic control and renal function than controls. T2D patients belonging to the JT haplogroup (polymorphism m.4216T>C) displayed statistically significant higher levels of fasting glucose and HbA1c than those of the other haplogroups, suggesting a poorer glycaemic control. Furthermore, diabetic patients with the JT haplogroup showed a worse kidney function than those with other haplogroups, evident by higher levels of serum creatinine, lower estimated glomerular filtration rate (eGFR), and slightly higher (although not statistically significant) urinary albumin-to-creatinine ratio. Our results suggest that JT haplogroup (in particular, change at position 4216 of the mtDNA) is associated with poorer glycaemic control in T2D, which can trigger the development of diabetic nephropathy.
Introduction
Type 2 diabetes (T2D) has become one of the most common metabolic diseases, with a rapid increase in its prevalence over recent decades, representing an enormous cost to public health organisms. It is obvious that environmental factors-such as diet and physical activity-play a key role in the pathogenesis of T2D, but an emerging body of evidence suggests that genetic factors can play an important role in the development and severity of T2D [1,2]. Therefore, characterization of new parameters that allow us to identify individuals at a high risk of developing T2D or to predict a poor prognostic of the disease are likely to be of great use in clinical practice, for designing strategies for primary prevention and for personalising treatments according to each specific condition.
Mitochondrial dysfunction is well known to be involved in the pathophysiology of T2D, as it affects not only insulin secretion but also insulin resistance [3,4]. In this sense, genetic factors such as mitochondrial DNA (mtDNA) variations may affect mitochondrial function and lead to the development of diabetes [5]. For example, an mtDNA mutation at nucleotide m.3243A>G has been described to cause maternally inherited diabetes and deafness [6,7]. Another relatively common variant, mtDNA m.16189T>C, has been associated with an enhanced risk of type 2 diabetes in Asian [8] and European [9] populations. Mitochondrial DNA haplogroups are defined by common variants of single nucleotide point (SNP) mutations of the mtDNA that result in a division of the population into discrete groups, each of which shares a common maternal ancestor. Although some studies have suggested mitochondrial haplogroups are involved in the genetic susceptibility of T2D [10][11][12], this connection is not altogether clear, as other authors have reported that haplogroups are unlikely to play a role in the risk of developing this disorder [13,14].
Type 2 diabetes is characterised by inadequate metabolic control associated with subsequent microand macro-vascular complications. Fasting plasma glucose levels indicate how efficiently glucose levels are managed in the absence of dietary glucose, while glycated haemoglobin (HbA 1c ) provides information regarding average blood glucose levels over the previous 8-12 weeks, thus representing an objective measurement of glycaemic control [15]. An association between poor glycaemic control and enhanced risk of microvascular complications, such as nephropathy, has been widely reported in diabetic patients [16,17]. Furthermore, some authors have suggested an effect of mtDNA haplogroups on the risk of developing diabetic complications in T2D [11,18,19]. However, whether or not mitochondrial haplogroups are involved in the glycaemic control of type 2 diabetic patients and the subsequent development of microvascular complications, such as nephropathy, has not yet been studied.
In the present study, we assessed a Spanish population of 303 T2D patients and 153 healthy controls with the aim of investigating differences in metabolic parameters and renal dysfunction markers according to the main mitochondrial macro-haplogroups.
Subjects
Our study population was composed of 303 T2D patients and 153 healthy volunteers recruited at the Endocrinology and Nutrition Service of the University Hospital Dr. Peset (Valencia, Spain). T2D was diagnosed according to the criteria of the American Diabetes Association 2017 [20] (fasting plasma glucose ≥126 mg/dL, or 2-h plasma glucose ≥200 mg/dL after a 75 g oral glucose tolerance test, or HbA 1C ≥6.5%, or random plasma glucose ≥200 mg/dL). Subjects who met any of the following criterion were excluded from the study: history of cardiovascular disease (stroke, ischemic heart disease, peripheral vascular disease, and chronic disease related to cardiovascular risk); severe disease including malignances, autoimmune, inflammatory or infectious diseases; and abnormal haematological profile.
Written informed consent was obtained from all the participants before they participated in the study. The study was conducted in accordance with the Helsinki Declaration, and approved by the Ethics Committee of the University Hospital Dr. Peset (Project identification code: 97/16).
Anthropometric and Biochemical Parameters
During the medical appointment, weight (kg), height (m), systolic and diastolic blood pressure (SBP, DBP; mm Hg), and waist and hip circumference (cm) were measured in all the participants. Body mass index (BMI; kg/m 2 ) and waist-to-hip ratio (WHR) were then calculated.
Venous blood samples were collected in fasting conditions from both control and type 2 diabetic subjects and centrifuged at 1500 × g for 10 min at 4 • C to obtain serum, in which levels of glucose, total cholesterol and triglycerides were determined by means of an automated enzymatic method using a Beckman Synchron LX20 Pro analyzer (Beckman Coulter, Brea, CA, USA). High-density lipoprotein cholesterol (HDL-c) levels were measured using a direct method with a Beckman Synchron LX20 Pro analyzer (Beckman Coulter, Brea, CA, USA), and low-density lipoprotein cholesterol (LDL-c) was calculated with Friedewald's formula. Insulin was measured with an Immulite 1000 automated immunoassay system (Siemens Healthcare SL, Madrid, Spain) and the homeostasis model assessment index of insulin resistance (HOMA-IR) was calculated to estimate insulin resistance using fasting insulin and glucose levels: HOMA = [fasting insulin (µU/mL) × fasting glucose (mg/dL)]/405. HOMA index was only calculated for patients not undergoing insulin therapy. Percentage of HbA 1c was measured by means of an automatic glycohemoglobin analyzer (Arkray Inc., Kyoto, Japan) and high-sensitive C-reactive protein (hs-CRP) levels were assessed with the Dade Behring Nephelometer II Analyzer System using an immunonephelometric assay (Dade Behring, Deerfield, IL, USA).
Creatinine in serum and urine was determined by Jaffe's reaction. Measurements of urinary albumin concentrations were performed by turbidimetry with an Architect c-16000 autoanalyzer (Abbott, Lake Bluff, IL, USA). Estimated glomerular filtration rate (eGFR) was calculated by the CKD-EPI equation from serum creatinine [21].
Haplotyping
Total DNA was extracted from whole blood with the REALPURE "SSS" Kit (Durviz SL, Valencia, Spain) and stored at −20 • C until analysis.
Mitochondrial haplogroups HV, JT and U were defined by the mtDNA polymorphisms m.7028C>T, m.12308A>G, m.4216T>C and m.14766T>C [22]. These haplogroups encompass around 90% of the Spanish population [23]. Samples revealing other haplogroups with low frequencies among the population (those not classified as HV, JT or U) were grouped altogether and referred to as "Others". Custom designed Taqman®SNP genotyping assays (Applied Biosystems, Foster City, CA, USA) were used to analyse mtDNA genetic variants and samples were run in a Step One Plus Real Time PCR System (Applied Biosystems, Foster City, CA, USA). The analysis consisted of a pre-read and post-read step of the plate of 30 s at 60 • C, before and after the PCR cycle. The cycle conditions were 10 min at 95 • C, followed by 40 cycles of 15 s at 95 • C and 1 min at 60 • C. Information on the haplogroups, dyes, probes and primers in each assay is widely explained in Nogales-Gadea et al. [24]. For each genotype analysis, positive and negative controls from different previously characterised mtDNA aliquots were used to ensure an adequate internal control.
Statistical Analysis
Results were processed using SPSS Software version 17.0 (SPSS Statistics Inc., Chicago, IL, USA) for statistical analysis. Data in tables are presented as means ± standard deviation for normally distributed data, medians (25th and 75th quartile) for non-normally distributed data, or percentage for qualitative variables. Figures show mean and standard error of the mean. Potential differences between haplogroups were analysed by ANOVA for normally distributed variables and the Kruskal-Wallis test for non-normally distributed variables. When differences among groups were detected, Student-Newman-Keuls or Dunn's multiple-comparison post hoc test were applied, as appropriate. Frequencies in T2D patients and control subjects were compared using the chi-square test. A Student's t-test was employed to evaluate differences between controls and type 2 diabetic patients. The effect of possible covariates (such as age, sex, BMI or duration of diabetes) was analyzed with a univariate general linear model. For all the tests, a two-tailed p < 0.05 was considered significant.
Clinical Characteristics of the Study Population
Our observational study included 303 type 2 diabetic patients and 153 healthy controls. Haplogroup distribution, and anthropometrical and inflammatory characteristics, as well as lipid profile of the studied participants are shown in Table 1. Normally distributed data are shown as mean ± SD and non-normally distributed data as median (25th-75th quartiles). p-value * when comparing type 2 diabetic patients (Total) vs. healthy controls (Total). Abbreviations: BMI, body-mass index; DBP, diastolic blood pressure; HDL-c, high-density lipoprotein cholesterol; hs-CRP, high-sensitive C-reactive protein; LDL-c, low-density lipoprotein cholesterol; SBP, systolic blood pressure; SD, standard deviation; TC, total cholesterol; WHR, waist-to-hip ratio.
Our cohort of healthy controls showed a haplogroup distribution similar to that reported in a larger Spanish population by Dahmany et al. [23]. No differences were found in the distribution of the different macro-haplogroups between control subjects and diabetic patients (p = 0.68). Although our cohort of T2D patients was characterised by higher age and percentage of men with respect to the control population (p < 0.001), when sub-classified by haplogroup, no differences were found in these parameters among haplogroups in the diabetic population (p = 0.92 for age and p = 0.22 for male percentage) and control subjects (p = 0.68 for age and p = 0.36 for male percentage). As expected, T2D patients in total had higher body mass index (BMI), waist-to-hip ratio (WHR), systolic blood pressure (SBP), diastolic blood pressure (DBP), high-sensitive C-reactive protein (hs-CRP) (p < 0.001) than control subjects. Lipid profile in the diabetic patients showed typical characteristics of atherogenic dyslipidemia, with elevated levels of triglycerides (p < 0.001 vs. control) and low levels of HDL-c (p < 0.001 when compared to control subjects). The lower levels of total cholesterol (p = 0.001) and LDL-c (p = 0.002) found in diabetic patients vs. controls were probably due to the fact that most of the patients were being treated with antihyperlipidemic agents ( Table 2). No statistically significant differences in the studied parameters were detected according to haplogroup in the type 2 diabetic population or the control group (see Table 1 for p-values).
Pharmacologic treatment of the type 2 diabetic patients included in this study is shown in Table 2. No significant differences were observed between the percentages of patients treated with hypolipidemic, antidiabetic, and antihypertensive agents (see Table 2 for p-values).
Glucose Metabolism
First analysis performed was a comparison between T2D patients and controls as a whole, without subdividing by haplogroup (Table A1). As expected, type 2 diabetic patients showed higher levels of fasting glucose, HbA 1c , fasting insulin and HOMA-IR index than control subjects (p < 0.001). Differences between controls and subjects with T2D remained statistically significant after adjustment for age, sex, and BMI (p < 0.001 for glucose, HbA 1c and HOMA; p = 0.01 for insulin). Graphs showing parameters related with glycaemic control and insulin resistance are plotted in Figure 1. Differences in glucose, HbA1c, insulin, and HOMA between control subjects and T2D patients remained significant after subdivided by haplogroup (p < 0.001 for glucose, HbA1c, and HOMA when comparing control vs T2D belonging to haplogroups HV, JT, U, and Others. For insulin levels: p < 0.001 when comparing control vs T2D in haplogroup HV; p < 0.01 in haplogroup JT; and p < 0.05 in haplogroups U and Others. Differences in p-values found in the levels of insulin are attributable to differences in the sample size between haplogroups).
Interestingly, diabetic patients with the JT haplogroup showed significantly higher levels of fasting glucose (p = 0.001) and HbA 1c (p = 0.007) compared to patients belonging to the other haplogroups analysed (grey bars in Figure 1A,B). These differences remained statistically significant despite adjustments for duration of diabetes (p = 0.006 for fasting glucose and p = 0.002 for HbA 1c ). Nevertheless, no differences were found in the levels of fasting insulin (p = 0.50) and HOMA-IR index (p = 0.38) when T2D patients with different haplogroups were compared (grey bars in Figure 1C,D). Control subjects did not reveal differences depending on haplogroup for any of the parameters related with glycaemic control and insulin resistance (white bars in Figure 1; glucose: p = 0.70; HbA 1c : p = 0.81; insulin: p = 0.60; HOMA-IR: p = 0.83). Letters indicate significant differences among type 2 diabetic patients with different haplogroups (p < 0.05) when compared by means of one-way ANOVA followed by Student-Newman-Keuls post-hoc test (i.e., bars tagged with the same letter do not differ significantly from each other, while bars with no letter in common are significantly different from each other (p < 0.05)). Abbreviations: HbA1c, glycated haemoglobin; HOMA-IR, Homeostasis model assessment index of insulin resistance; T2D, Type 2 diabetes.
Renal Function
Type 2 diabetic patients, when analyzed as a whole, exhibited lower kidney function than control subjects, expressed by higher serum creatinine concentrations (p < 0.001) and lower eGFR (p < 0.001) (Appendix Table A1). Differences in eGFR between controls and patients remained when adjusted by age, sex, and BMI (p = 0.04), whereas differences in creatinine levels were not longer statistically significant after adjusting for age, sex, and BMI (p = 0.12). After subdivided by haplogroup, differences between T2D patients and controls remained statistically significant only in haplogroups HV (p < 0.001) and JT p < 0.05 for serum creatinine, and in haplogroups HV (p < 0.01), JT (p < 0.05), and Others (p < 0.05) for eGFR. Differences in p-values found between the different haplogroups are probably due to differences in the sample size.
In the case of T2D patients, those with the JT haplogroup showed a worse renal function than patients belonging to HV, U, and Others haplogroups, manifested as significantly higher levels of serum creatinine (grey bars in Figure 2A, p < 0.001) and lower eGFR (grey bars in Figure 2B, p = 0.01). Differences between the JT group and all the other macro-haplogroups in creatinine levels and eGFR did not change in the diabetic population after adjusting by duration of diabetes (p < 0.001 for creatinine levels and p = 0.003 for eGFR). Control subjects did not reveal statistically significant differences in kidney function according to haplogroups (white bars in Figure 2; p = 0.09 for creatinine and p = 0.27 for eGRF). White bars correspond to controls, while grey bars represent type 2 diabetic patients. * p < 0.05; ** p < 0.01; *** p < 0.001 in controls vs. type 2 diabetic subjects. Letters indicate significant differences among type 2 diabetic patients with different haplogroups (p < 0.05) when compared by means of one-way ANOVA followed by Student-Newman-Keuls post-hoc test (i.e., bars tagged with the same letter do not differ significantly from each other, while bars with no letter in common are significantly different from each other, (p < 0.05)). Abbreviations: eGFR, estimated glomerular filtration rate; T2D, Type 2 diabetes.
In light of the above mentioned results, we also analyzed concentrations of urinary albumin and creatinine in patients in whom said parameters were measured on the same day as blood was extracted; namely, in a total of 106 type 2 diabetic patients (52 from the HV group, 16 from the JT group, 26 from the U group, and 12 from the "Others" group). Urinary albumin-to-creatinine ratio (mg/g) was slightly higher in the JT group (19.46 ± 17.54 mg/g) than in those belonging to the other haplogroups (HV: 12.55 ± 7.92 mg/g; U: 12.54 ± 8.73 mg/g; others: 12.49 ± 7.33 mg/g), although it did not reach statistical significance in the one-way ANOVA test (p = 0.099).
Discussion
In the present study, we have performed a case-control study to explore the possible effects of the main mitochondrial haplogroups on metabolic control and renal function in a Spanish population of 303 type 2 diabetic patients and 153 healthy controls. We have observed that T2D patients belonging to the JT macrohaplogroup showed enhanced levels of fasting plasma glucose, HbA1c, creatinine, and decreased eGFR when compared to patients from the other haplogroups (HV, U, and Others), thus suggesting poorer metabolic control and renal function in T2D patients with the JT haplogroup.
Mitochondria are responsible for the cell's energy supply through oxidative phosphorylation (OXPHOS), and some of the proteins involved in this process are encoded in the mtDNA. Given the importance of OXPHOS in insulin secretion [25,26], different genetic variants are potential candidates for playing a role in the susceptibility to or protection against metabolic defects [13]. Mitochondrial haplogroups are clusters of phylogenetically related mtDNA haplotypes that might have been selected during evolution to permit humans to adapt to famine or cold climates [27]. It has been suggested that these mtDNA variants contribute to energy metabolism and, hence, may be associated with metabolic diseases [28]. Crispim et al. [10] reported that the European-specific JT mitochondrial haplogroup was associated with insulin resistance and type 2 diabetes in Caucasian-Brazilian patients, as patients belonging to the JT cluster exhibited higher levels of HOMA-IR. In addition, the J1 haplogroup is thought to be involved in susceptibility to type 2 diabetes among Caucasian (Jewish) patients depending on family health history [29]. According to several studies performed in Asian populations, individuals carrying haplogroup N9a are less susceptible to type 2 diabetes and metabolic syndrome [30,31]. However, in spite of this evidence, the association between mitochondrial haplogroups and type 2 diabetes is not clear, with many studies providing conflicting results or failing to find significant associations [13,14,18,32]. Our results do not show a direct association of the development of T2D with the main macro-haplogroups, as no differences were found in the frequencies of each haplogroup between our diabetic and control populations. Interestingly, we found that patients belonging to the JT cluster presented poorer glycaemic control and higher levels of fasting glucose and HbA1c, than other patients, thus suggesting that said haplogroup is involved in the metabolism of glucose in patients with T2D. Our findings are in agreement with those reported by Crispim et al. [10] which described higher levels of HOMA-IR in patients with JT haplogroup, although no statistically significant differences were found in our cohort of type 2 diabetic patients, probably because the size of our sample was smaller than the sample size in the cited work.
Type 2 diabetes and inadequate glycaemic control are frequently associated with macro-and micro-vascular complications. Whether or not mitochondrial haplogroups play a role in modulating the development of T2D-related complications is a question that has been widely studied. Achilli et al. [18] found an association of various mitochondrial haplogroups and increased risk of diabetic complications in an Italian population: haplogroup H3 increased the probability of developing neuropathy; haplogroup H was linked to retinopathy; and subjects harbouring V and U3 mtDNA showed enhanced incidence of renal failure and nephropathy. In this context, it is worth pointing out that diabetic nephropathy has been associated with specific mitochondrial haplogroups in several studies; for instance, Feder et al. [33] reported a link with the J1 haplogroup in an Ashkenazi Jewish population, while Niu et al. [11] reported a link with the N9a haplogroup in a Chinese population. Our data are in accordance with an involvement of mitochondrial haplogroup in the development of nephropathy, as our type 2 diabetic patients belonging to the JT haplogroup showed higher levels of serum creatinine and lower levels of eGFR compared to patients belonging to the other haplogroups analyzed. Interestingly, though not statistically significant, T2D patients harbouring the JT haplogroup also presented a higher urinary albumin-to-creatinine ratio. Taken together, these results suggest T2D patients with the JT haplogroup are likely to have impaired kidney function.
The variant m.4216T>C, a key SNP for defining the JT macro-haplogroup [34], leads to a non-synonymous amino acid change in the mtDNA MT-ND1 gene encoding NADH:Ubiquinone oxidoreductase core subunit 1 (p.MT-ND1), one of the components of the mitochondrial respiratory complex I. Electrons coming from glucose metabolism through glycolysis and the Krebs cycle are principally stored in NADH for ATP production and oxygen reduction. It has been proposed that hyperglycaemia can increase the production of the complex I substrate NADH [35]. Overproduction of NADH leads to an electron pressure on the mitochondrial electron transport chain that drives to an increase in electron leakage and the subsequent high production of reactive oxygen species (ROS) [36,37]. As the major enzyme implicated in NADH recycling, mitochondrial complex I impairment can lead to further increased levels of NADH [38], with the following enhancement of ROS. Altogether these mechanisms will induce oxidative stress, which has been widely reported as a central player in the development of insulin resistance, pancreatic β-cells dysfunction, and finally, type 2 diabetes [39][40][41]. A recent study has demonstrated that pharmacological inhibition of complex I of mitochondrial electron transport chain improves glucose homeostasis and ameliorates hyperglycaemia [42]. Our findings bring us to hypothesize that variant m.4216T>C results in aberrant activity of complex I that, under diabetic conditions, leads to poorer glycaemic control. For this reason, we suggest that inhibitors of complex I-such as metformin and thiazolidinediones [43]-may be adequate drugs for the treatment of T2D patients belonging to the JT haplogroup.
Both hyperglycaemia and excessive oxidative stress are well known to be involved in the development of diabetic vascular complications [44,45], including microvascular complications such as nephropathy [46]. The kidney is especially vulnerable to the damage produced by hyperglycaemia-induced oxidative stress; in fact said damage has been suggested as an important mechanism involved in the pathogenesis of tubular and glomerular abnormalities [46].
The present study has some limitations in terms of statistical power. We did not perform a previous sample size estimation due to the lack of studies addressing the role of the haplogroups here studied (HV, JT, U, and others) on metabolic control and renal function. However, we consider that our study has enough power for reaching statistically significant differences among different haplogroups in the diabetic population. Nevertheless, researches in larger populations will serve to confirm these results.
Together, this evidence leads us to hypothesize that the JT haplogroup (in particular, change at position 4216 of the mtDNA) might result in a poorer glycaemic control in type 2 diabetic patients, thus contributing to the development of diabetic nephropathy. Future studies with a larger sample size would help to confirm our results and, if corroborated, haplogroup screening in recently diagnosed T2D patients might be suggested as a way of predicting disease progression and choosing the most adequate clinical treatment for avoiding macro and microvascular-associated complications. | 5,423.6 | 2018-08-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Unrepeatered 256 Gb / s PM-16 QAM transmission over up to 304 km with simple system configurations
We study unrepeatered transmission of 40x256 Gb/s systems with polarization-multiplexed 16-quadrature amplitude modulation (PM16QAM) channels using simple coherent optical system configurations. Three systems are investigated with either a homogeneous fiber span, or simple two-segment hybrid fiber designs. Each system relies primarily on ultra-low loss, very large effective area fiber, while making use of only first-order backward pumped Raman amplification and no remote optically pumped amplifier (ROPA). For the longest span studied, we demonstrate unrepeatered 256 Gb/s transmission over 304 km with the additional aid of nonlinear compensation using digital backpropagation. We find an average performance improvement in terms of the Q-factor of 0.45 dB by using digital backpropagation compared to the case of using chromatic dispersion compensation alone for an unrepeatered span system. ©2014 Optical Society of America OCIS codes: (060.2330) Fiber optics communications; (060.2360) Fiber optics links and subsystems. References and links 1. J. D. Downie, J. Hurley, J. Cartledge, S. Ten, S. Bickham, S. Mishra, X. Zhu, and A. Kobyakov, “40 x 112 Gb/s transmission over an unrepeatered 365 km effective area-managed span comprised of ultra-low loss optical fibre,” in Proceedings of European Conf. Opt. Commun. (2010), paper We.7.C.5. 2. D. Mongardien, P. Bousselet, O. Bertran-Pardo, P. Tran, and H. Bissessur, “2.6Tb/s (26 x 100Gb/s) unrepeatered transmission over 401km using PDM-QPSK with a coherent receiver,” in Proceedings of European Conf. Opt. Commun. (2009), paper 6.4.3. 3. H. Bissessur, P. Bousselet, D. Mongardien, G. Boissy, and J. Lestrade, “4 x 100Gb/s unrepeatered transmission over 462km using coherent PDM-QPSK format and real-time processing,” in Proceedings of European Conf. Opt. Commun. (2011), paper Tu.3.B.3. 4. D. Chang, W. Pelouch, and J. McLaughlin, “8 x 120 Gb/s unrepeatered transmission over 444 km (76.6 dB) using distributed Raman amplification and ROPA without discrete amplification,” in Proceedings of European Conf. Opt. Commun. (2011), paper Tu.3.B.2. 5. S. Oda, T. Tanimura, Y. Cao, T. Hoshida, Y. Akiyama, H. Nakashima, C. Ohshima, K. Sone, Y. Aoki, M. Yan, Z. Tao, J. C. Rasmussen, Y. Yamamoto, and T. Sasaki, “80x224 Gb/s unrepeated transmission over 240 km of large-Aeff pure silica core fibre without remote optical pre-amplifier,” in Proceedings of European Conf. Opt. Commun. (2011), paper Th.13.C.7. 6. D. Mongardien, C. Bastide, B. Lavigne, S. Etienne, and H. Bissessur, “401 km unrepeatered transmission of dual-carrier 400 Gb/s PDM-16QAM mixed with 100 Gb/s channels,” in Proceedings of European Conf. Opt. Commun. (2013), paper Tu.1.D.2. 7. A. H. Gnauck, P. J. Winzer, S. Chandrasekhar, X. Liu, B. Zhu, and D. W. Peckham, “Spectrally efficient longhaul WDM transmission using 224-Gb/s polarization-multiplexed 16-QAM,” J. Lightwave Technol. 29(4), 373– 377 (2011). 8. F. Chang, K. Onohara, and T. Mizuochi, “Forward error correction for 100 G transport networks,” IEEE Commun. Mag. 48(3), S48–S55 (2010). 9. I. Fatadin, D. Ives, and S. J. Savory, “Blind equalization and carrier phase recovery in a 16-QAM optical coherent system,” J. Lightwave Technol. 27(15), 3042–3049 (2009). 10. T. Pfau, S. Hoffmann, and R. Noe, “Hardware-efficient coherent digital receiver concept with feedforward carrier recovery for M-QAM constellations,” J. Lightwave Technol. 27(8), 989–999 (2009). #207821 $15.00 USD Received 6 Mar 2014; revised 3 Apr 2014; accepted 5 Apr 2014; published 21 Apr 2014 (C) 2014 OSA 5 May 2014 | Vol. 22, No. 9 | DOI:10.1364/OE.22.010256 | OPTICS EXPRESS 10256 11. Y. Gao, A. P. T. Lau, C. Lu, Y. Dai, and X. Xu, “Blind cycle-slip detection and correction for coherent communication systems,” in Proceedings of European Conf. Opt. Commun. (2013), paper P.3.16. 12. J. D. Downie, J. Hurley, D. Pikula, S. Ten, and C. Towery, “Study of EDFA and Raman system transmission reach with 256 Gb/s PM-16QAM signals over three optical fibers with 100 km spans,” Opt. Express 21(14), 17372–17378 (2013). 13. E. Ip and J. M. Kahn, “Compensation of dispersion and nonlinear impairments using digital backpropagation,” J. Lightwave Technol. 26(20), 3416–3425 (2008). 14. E. Ip, “Nonlinear compensation using backpropagation for polarization-multiplexed transmission,” J. Lightwave Technol. 28(6), 939–951 (2010). 15. C. Behrens, R. I. Killey, S. J. Savory, M. Chen, and P. Bayvel, “Nonlinear transmission performance of higherorder modulation formats,” IEEE Photon. Technol. Lett. 23(6), 377–379 (2011). 16. G. P. Agrawal, Nonlinear Fiber Optics, 2nd ed. (Academic, 1995). 17. J. Rodrigues Fernandes de Oliveira, U. C. de Moura, G. E. Rodrigues de Paiva, A. Passos de Freitas, L. H. Hecker de Carvalho, V. E. Parahyba, J. C. Rodrigues Fernandes de Oliveira, and M. Araujo Romero, “Hybrid EDFA/Raman amplification topology for repeaterless 4.48 Tb/s (40 x 112 Gb/s DP-QPSK) transmission over 302 Km of G.652 standard single mode fiber,” J. Lightwave Technol. 31(16), 2799–2808 (2013). 18. A. Puc, D. Chang, W. Pelouch, P. Perrier, D. Krishnappa, and S. Burtsev, “Novel design of very long, high capacity unrepeatered Raman links,” in Proceedings of European Conf. Opt. Commun. (2009), paper 6.4.2.
Introduction
Long point-to-point unrepeatered spans of several hundred kilometers are found in submarine and terrestrial networks connecting islands, mainland points in a festoon configuration, or cities with desert or otherwise forbidding landscape between them.In these systems, the primary objective is to achieve a single-span system between the points of interest with no active equipment between terminals.There have been a number of recent research experiments with this type of system using a data rate of 100 Gb/s and polarizationmultiplexed quadrature-phase-shift keying (PM-QPSK) signals.In one experiment, the feasibility of unrepeatered transmission over a span of 365 km was demonstrated with backward pumped Raman amplification and a hybrid fiber span configuration [1].Even longer span lengths have been shown by employing more complex amplification schemes including ROPAs, forward and backward Raman pumping, and higher order Raman pumps [2][3][4].To date, there has been less activity in unrepeatered span transmission at the 200 Gb/s net data rate, but 80x224 Gb/s was demonstrated over 240 km without a ROPA using a large effective area (A eff ) fiber with a silica core [5].More recently, 401 km transmission was demonstrated for one dual-carrier 400 Gb/s channel (2x200 Gb/s), but this system also used a ROPA and higher order forward and backward Raman pumping using high pump powers [6].
In this paper, we investigate 40x256 Gb/s transmission of PM-16QAM signals over unrepeatered spans with minimal complexity comprised solely or primarily of an ultra-low loss, very large effective area fiber.The systems investigated use first-order backward pumped Raman amplification combined with a discrete erbium-doped fiber amplifier (EDFA) at the receiver and no ROPA.We begin with a homogeneous span and demonstrate 286 km transmission for all 40 channels with at least 0.5 dB margin over the forward error correction (FEC) threshold.We next show transmission over a 292 km span with a simple hybrid fiber configuration that allows higher Raman gain and better performance.Finally, we extend the hybrid span length to 304 km and demonstrate successful transmission with the aid of nonlinear compensation using digital backpropagation.To the authors' knowledge, this is the longest unrepeatered span transmission demonstrated for 200 Gb/s data rate, PM-16QAM signals using a simple first-order backward pumped Raman amplification scheme.
Experimental set-up
The general experimental set-up is shown in Fig. 1 the two quadrature arms, each of which were created by combining pairs of shifted (>600 symbols offset) binary 2 15 -1 PRBS signals at 32 Gbaud with 6 dB amplitude difference [7].The resulting bit rate was 256 Gb/s for the PM-16QAM signals, with a 28% total overhead above the nominal 200 Gb/s data rate.The assumed soft-decision FEC (SD-FEC) has a raw BER threshold of 2x10 −2 corresponding to a Q-factor threshold of 6.25 dB [8].After polarization multiplexing with 286 symbols delay between polarizations, the channels passed through a continuously varying polarization scrambler and then a piece of fiber with total chromatic dispersion of 800 ps/nm to de-correlate adjacent channels by more than 10 symbols prior to amplification and launch.The number of channels was limited to about 40 by the gain bandwidth of our high output power fiber amplifier at the span input.In the first experiment, the fiber span was comprised entirely of Corning ® Vascade ® EX3000 fiber.This fiber had average effective area of just under 152 μm 2 and the average fiber attenuation was 0.154 dB/km.The fiber dispersion was about 21 ps/nm/km at 1550 nm.For the second and third experiments, simple hybrid fiber span configurations were employed with some of the Vascade EX3000 fiber replaced at the end of the span by Vascade ® EX2000 fiber, having an effective area of just over 110 μm 2 and attenuation of about 0.158 dB/km.These configurations were investigated to increase the Raman gain in the span and thus increase the achievable span length.
Channels were selected for detection in a polarization-and phase-diverse digital coherent receiver with a free-running local oscillator with 100 kHz linewidth.The four signals from the balanced photodetectors were digitized by analog-to-digital converters operating at 50 Gsamples/s using a real-time sampling oscilloscope with 20 GHz electrical bandwidth.BER values were calculated via direct error counting of at least 2.5x10 6 bits with offline digital signal processing steps including quadrature imbalance compensation, up-sampling to 64 Gsamples/s, chromatic dispersion compensation using a frequency-domain equalizer, digital square-and-filter clock recovery, polarization demultiplexing and equalization using a 21-tap adaptive butterfly structure with filter coefficients determined by a radius-directed constant modulus algorithm (CMA) [9] following pre-convergence using a standard CMA, carrier frequency offset using a spectral domain algorithm, and blind search carrier phase recovery [10].We also employed a blind cycle-slip detection and correction technique that is helpful in conditions with high BER values [11].
The characteristics and back-to-back performance of the PM-16QAM transmitter and coherent receiver were described more fully in a previous publication [12].In ref. 12, we also demonstrated that modulation of all channels with a single modulator has negligible effect on performance since adjacent channels are de-correlated (by more than 10 symbols here) before launch and are then quickly de-correlated further during transmission.Furthermore, the optical spectrum of the 16QAM signal generated was sufficiently narrow such that the transmitter configuration incurred insignificant penalty from linear crosstalk.
Experimental transmission results
As mentioned above, the fiber span in the first experimental configuration was 286 km long and was comprised homogeneously of Vascade EX3000 fiber.The total span loss of this system was about 44.5 dB including 19 splices.The maximum Raman ON/OFF gain achievable with three Raman pump wavelengths at 1427 nm, 1443 nm, and 1461 nm was just over 14 dB.The pumps had polarization diversity and the maximum total pump power available was a little over 1 W.For the second configuration studied, we adopted a simple two-part hybrid fiber span construction in which 50 km of Vascade EX2000 was spliced at the end of 242 km of Vascade EX3000 fiber for a total span length of 292 km.This system had a total span loss of about 45.5 dB including 14 splices.Splice loss between the two fiber types was ≤0.2 dB.With this configuration the maximum Raman gain increased to a little over 21 dB due to the higher Raman gain coefficient of the Vascade EX2000 fiber with smaller effective area on the receiver side of the span.In both spans, the maximum Raman gain was limited by the available pump power.The 50 km length of the second fiber segment was calculated to produce nearly all of the additional Raman gain possible within a fraction of a dB.In general, the Raman gain ratio of two fibers with the same material should scale as the ratio of pump wavelength effective length L eff and the inverse ratio of A eff for the two fibers, respectively.The relative effective areas of the two fiber types accounts for most of the difference in maximum Raman gain observed in the two span configurations.
With maximum Raman gain set in each system, we first measured the BER of a central channel in the middle of the channel plan as a function of channel launch power to find the optimal power.The 40 channels were launched into the span from the high-power amplifier output with a nominally flat spectrum.The optimal channel power for both systems was about 8.5 dBm.With the optimal channel power thus set, we also evaluated the system performance in terms of Q-factor value and the effective equivalent noise figure (NF) of the hybrid Raman/EDFA amplifier combination in the receiver as a function of Raman ON/OFF gain for the central channel.The Q-factor values were calculated from direct BER measurement and the effective equivalent NF was calculated from OSNR measurements.These results are shown in Fig. 2. As expected, the results show that both systems produced best performance operating with the maximum Raman gain available.The lowest effective NF at maximum Raman gain for the homogeneous span was about 0 dB, while it was about 1.4 dB for the hybrid fiber span system.Combined with the additional 1 dB extra loss from the extra 6 km of fiber, the longer span had a Q-factor advantage of about 0.2 dB compared to the shorter span.All 40 channels were measured for both systems with maximum Raman gain.The OSNR and Q-factor values are shown in Fig. 3.The average OSNR and Q-factor values for the 286 km homogeneous span were 21.8 dB and 7.1 dB, respectively, while they were 22.1 dB and 7.3 dB for the 292 hybrid fiber span, respectively.The minimum Q-factor margin over the FEC threshold for the 292 km span system was 0.7 dB.The larger Raman gain afforded by the simple hybrid fiber span produced overall better performance for the longer 292 km span system due to the lower effective NF of the hybrid amplifier which more than offset the higher span loss.Finally, we extended the hybrid fiber span length to 304 km by adding another 12 km of Vascade EX2000 fiber at the span end.This increased the total span loss to about 47.4 dB.The maximum measured Raman ON/OFF gain was less than 0.4 dB higher than for the 292 km span, thus confirming that the second configuration did in fact achieve nearly all of the potential Raman gain possible.Modeling also showed that virtually all of the Raman gain occurred in the 62 km segment of Vascade EX2000 fiber at the span end.
To achieve successful transmission of all 40 channels over 304 km with Q values at least 0.5 dB above the FEC threshold, we employed a digital backpropagation (DBP) technique to mitigate intra-channel nonlinear impairments [13,14].The algorithm was based on solving the Manakov equation in the absence of polarization mode dispersion (PMD) and polarization dependent loss (PDL) [15], using the split-step Fourier method with a symmetrized operator scheme [16].Each WDM channel was processed separately to compensate for chromatic dispersion and self-phase modulation accrued in the transmission fiber.The Q-factor values for a central channel as a function of channel power over this span, with and without the nonlinear compensation, are shown in Fig. 4(a).The optimal channel power with the DBP was close to 10 dBm for this system.Figure 4(b) shows the performance improvement in terms of the Q-factor as a function of the number of spatial steps in the split-step Fourier method for a middle channel.The point at zero steps corresponds to linear compensation alone.We observe that almost all potential improvement can be obtained by solving the Manakov equation with 10 or fewer equal segments, although a single step appears to provide essentially no improvement.
The Q-factor values for all 40 channels launched at the optimal power are shown in Fig. 5 along with the Q-factor improvement (ΔQ) provided by the DBP.The average ΔQ was 0.45 dB.With DBP, the average Q-factor value was 7.0 dB, and the minimum margin above the FEC threshold for all channels was 0.5 dB.The improvement ΔQ from the DBP appears to be greater for shorter wavelengths.This may be due to increasing dispersion and decreasing nonlinear coefficient γ as a function of wavelength.We note that the benefit of using DBP in an unrepeatered coherent optical communication system was briefly studied previously [17].In that particular experiment, 40 WDM channels carrying 112 Gb/s PM-QPSK were transmitted over 302 km of SSMF using hybrid EDFA/Raman amplification but the DBP was not found to improve the average system performance.
Summary and conclusions
We have demonstrated 40x256 Gb/s PM-16QAM transmission over unrepeatered spans of length 286 km (homogeneous span), 292 km (hybrid fiber span), and 304 km (hybrid span and DBP).The simple two-segment hybrid span configurations enabled a reduction in the overall effective noise figure of the Raman/EDFA amplifier combination by about 1.5 dB compared to the homogeneous fiber span.Some further improvement may have been possible with a hybrid span including more fiber segments [1,18], but the focus of these experiments was on system and span simplicity.These reach lengths attained were enabled by advanced optical fiber and used only simple first-order backward-pumped Raman and EDFA amplification.The longest link studied was also enabled by the use of DBP nonlinear compensation in the receiver which produced an average Q improvement of about 0.45 dB.
Fig. 2 .
Fig. 2. (a) 20log(Q) as function of Raman ON/OFF gain for 286 km and 292 km systems.(b) Effective hybrid Raman/EDFA amplifier noise figure as function of Raman ON/OFF gain.
Fig. 4 .Fig. 5 .
Fig. 4. 304 km span transmission: (a) Q-factor vs. channel power for a center channel.(b) Q improvement ΔQ as a function of number of steps in DBP algorithm for a central channel. | 4,009.6 | 2014-05-05T00:00:00.000 | [
"Physics"
] |
Quantification of UV Light-Induced Spectral Response Degradation of CMOS-Based Photodetectors
High-energy radiation is known to potentially impact the optical performance of silicon-based sensors adversely. Nevertheless, a proper characterization and quantification of possible spectral response degradation effects due to UV stress is technically challenging. On one hand, typical illumination methods via UV lamps provide a poorly defined energy spectrum. On the other hand, a standardized measurement methodology is also missing. This work provides an approach where well-defined energy spectrum UV stress conditions are guaranteed via a customized optical set up, including a laser driven light source, a monochromator, and a non-solarizing optical fiber. The test methodology proposed here allows performing a controlled UV stress between 200 nm and 400 nm with well-defined energy conditions and offers a quantitative overview of the impact on the optical performance in CMOS-based photodiodes, along a wavelength range from 200 to 1100 nm and 1 nm step. This is of great importance for the characterization and development of new sensors with a high and stable UV spectral response, as well as for implementation of practical applications such as UV light sensing and UV-based sterilization.
Introduction
A wide number of the CMOS-based photodetectors in use today have no (or very poor) sensitivity in the UV spectral range, mainly because the backend layers are absorbing the UV light.The junction design is also an important factor, but more likely for degradation effects.Nevertheless, UV light sensing has become in recent years a topic of increasing interest, due to the surge of a plethora of new technological applications, such as sterilization, UV spectroscopy, biological analysis, space imaging, UV-based cure processes, and more [1][2][3][4][5][6][7][8].Although, there are still a few major challenges in the field of UV light sensing.One of these challenges is regarding the typical low sensitivity to UV light due to the short penetration depth in Si.This means that most of the photo-generated carriers within the upper atomic layers in Si cannot be detected because of the strong recombination through the interface states.Several approaches dealing, for example, with the photodiode dopant profile [9,10] or the engineering of photodetectors optimizing design and manufacturing processes have made it possible to reach not only high UV spectral response [11,12] but also to address the poor stability or spectral response degradation due to exposure to UV light conditions.Achieving such optical robustness under high-energy UV-light illumination conditions is certainly the second major challenge in UV light sensing.Such degradation mechanisms may be explained due to trap generation [13] or changes in the fixed charges and the interface states at the Si/SiO 2 interface above the photodetector, originating from the very high photon energy (6.2-4.1 eV for wavelengths between 200 and 300 nm).There exists also a plethora of approaches for the development of UV photodetectors, which include the usage of compound semiconductors with a wide bandgap such as SiC or ZnO [14,15], usage of Silicon-in-Insulator (SOI) structures with a shallow surface detection layer [8], the proper tunning of doping levels to reach high concentration surface layers near the Si surface [16], Sensors 2024, 24, 1535 2 of 10 back side illumination structure approaches, or other thin-film or nanostructure-based approaches [17].Our more recent research on the development of new UV photodiodes, using X-FAB's Semiconductors Foundries XS018 technology, shows a significant spectral response improvement especially for wavelengths around 260 nm, which is relevant for applications in sterilization.It also shows a remarkable robustness (<5% degradation) under UV light stress conditions.
The UV instability of Si-based photodetectors is certainly a very complex phenomenon, which may depend on several parameters such as irradiance, duration of UV light exposure, radiant exposure, wavelength of the UV radiation, type of photodetector, etc. [18].Therefore, the robustness of photodetectors must be investigated systematically, with a reliable measurement methodology which covers a wide range of experimental parameters.It is important to precisely quantify the spectral response degradation due to UV light exposure, at any stress wavelength of interest.In addition, it is also very important that the impact on the optical performance of photodetectors is determined not only at the wavelength of exposure but also along the entire spectral range where such photodetectors are expected to be used.It then becomes necessary to count with a reliable, fast, and standardized measurement methodology which allows us to perform such systematic characterizations.Due to the lack of a common standard, the present work intends to propose a systematic measurement methodology for the investigation and quantification of spectral response degradation due to UV light exposure in CMOS-based photodetectors.It is also to determine the overall impact on the optical performance of such photodetectors along the entire spectral range of operation, while maintaining always well-defined energy conditions for the UV exposure.
Materials and Methods
All the photodetectors characterized and presented in this work were fabricated with X-FAB's Semiconductors Foundries XS018 technology [19].Data are shown for photodiodes belonging to three different main modules: module A, which offers a good spectral sensitivity over a broad wavelength range; module B, which provides specially outstanding spectral response in the human eye response range, with its maximum point of sensitivity close to the green region; and module C, which is specially dedicated to performing with a high and stable spectral response in the UV range.Table 1 briefly summarizes the different XS018-based photodetectors for which experimental UV stress and spectral response degradation investigations are shown in this work.Module C, PWell_2/DNWell/p-Sub Enhanced for UV range 1 This new photodetector is soon to be released as dosuv.
Figure 1 shows a conceptual diagram of most of the devices listed in Table 1.All photodetectors are fabricated following X-FAB's XS018 process flow technology.All devices can be fabricated scalable in size, as well as in arrays of photodiodes.Nevertheless, for the investigations performed here, all measurements are performed on single devices.For simplification, all front-end layers which comprise different metallization levels (M2, M3, etc.) and the passivation layer are not shown in the diagrams.Some of these metallization levels may be used as a light shield, for example, to avoid photons reaching other regions instead of the photodiode's pn-junction active region.All devices are built over a bulk p-type substrate, where the electrical connection is realized via a p+ implant and a metal Sensors 2024, 24, 1535 3 of 10 contact at the M1 level.In the case of the doa device, the pn-junction proper depth is realized via an NWell plus a DNWell (deep NWell), which can be contacted via an n+ implant and a metal contact at the M1 level.The dob device conforms its pn-junction also via a DNWell.Although, in this case, the electrical connection is realized via a p+ implant, once the DNWell is pinched to a PWell_1.On the other hand, the doe and doeher devices realize the pn-junction only via an NWell, where electrical contact is possible thanks to an n+ implant and a metal contact at the M1 level.Different from the doe device, the doeher photodetector possesses a special layer at the back end with special optical properties which allows precise optimization of the optical performance, especially in the Visible and NIR spectral ranges.As described in Table 1, the UVC photodetector is realized by a special structure comprised by two pn-junctions, the PWell_2 to DNWell upper junction and the DNWell to p-Sub lower junction.
Sensors 2024, 24, 1535 3 of 10 contact at the M1 level.In the case of the doa device, the pn-junction proper depth is realized via an NWell plus a DNWell (deep NWell), which can be contacted via an n+ implant and a metal contact at the M1 level.The dob device conforms its pn-junction also via a DNWell.Although, in this case, the electrical connection is realized via a p+ implant, once the DNWell is pinched to a PWell_1.On the other hand, the doe and doeher devices realize the pn-junction only via an NWell, where electrical contact is possible thanks to an n+ implant and a metal contact at the M1 level.Different from the doe device, the doeher photodetector possesses a special layer at the back end with special optical properties which allows precise optimization of the optical performance, especially in the Visible and NIR spectral ranges.As described in Table 1, the UVC photodetector is realized by a special structure comprised by two pn-junctions, the PWell_2 to DNWell upper junction and the DNWell to p-Sub lower junction.In accordance with the naming shown in Table 1, top-left corresponds to doa, bottomleft corresponds to dob, top-right corresponds to doe, and bottom-right corresponds to doeher.
Test Setup
The photocurrent measurements were performed in situ, during illumination UV conditions, using a Keithley 4200A DC Parameter Analizer (Tektronix, Beaverton, OR, USA).All measurements have been performed at 27 °C (slightly elevated room temperature, to ensure precise temperature control).Wafers were placed on the chuck of a Summit 200-FA-AP probe station (Form Factor GmbH, Thiendorf, Germany).All photodetectors were operated with standard specification conditions, with VCathode = 0.9 V, VGuard = 3.3 V and VAnode = 0.0 V (substrate/bulk/chuck).
UV Stress Methodology
Conventional artificial UV light sources such as UV lamps were found to be inconvenient for UV degradation wafer-level investigations of photodetectors.On the one hand, the global illumination may affect other light-sensitive test structures and the close electrical contact via probe-wedge may induce undesired light reflections, when the illumination is performed in situ.On the other hand, such light sources provide a poorly defined energy spectrum, making it challenging to define the specific wavelength of illumination and consequently quantifying at which wavelength (and intensity) the UV stress is applied.Despite the points mentioned here, in the case of UV lamps, it is always possible to devise a way to normalize the illumination area and dose onto the photodetector under test.In fact, in most research investigations about UV degradation, often an intensity value per area is given, but without a complete spectrum.So, it remains unknown at which wavelength (and intensity) the UV stress is provided.Due to the typical broadband nature of a UV lamp as well as variations on the spectral output, a UV lamp does not provide stress at all wavelengths of the UV spectrum.Other light sources such as UV LEDs could also be used, but such LEDs provide non-uniform light, which brings other technical challenges to improve the illumination uniformity.Nevertheless, in this case, the illumination is also global, which may also induce undesired light reflections.1, top-left corresponds to doa, bottom-left corresponds to dob, top-right corresponds to doe, and bottom-right corresponds to doeher.
Test Setup
The photocurrent measurements were performed in situ, during illumination UV conditions, using a Keithley 4200A DC Parameter Analizer (Tektronix, Beaverton, OR, USA).All measurements have been performed at 27 • C (slightly elevated room temperature, to ensure precise temperature control).Wafers were placed on the chuck of a Summit 200-FA-AP probe station (Form Factor GmbH, Thiendorf, Germany).All photodetectors were operated with standard specification conditions, with V Cathode = 0.9 V, V Guard = 3.3 V and V Anode = 0.0 V (substrate/bulk/chuck).
UV Stress Methodology
Conventional artificial UV light sources such as UV lamps were found to be inconvenient for UV degradation wafer-level investigations of photodetectors.On the one hand, the global illumination may affect other light-sensitive test structures and the close electrical contact via probe-wedge may induce undesired light reflections, when the illumination is performed in situ.On the other hand, such light sources provide a poorly defined energy spectrum, making it challenging to define the specific wavelength of illumination and consequently quantifying at which wavelength (and intensity) the UV stress is applied.Despite the points mentioned here, in the case of UV lamps, it is always possible to devise a way to normalize the illumination area and dose onto the photodetector under test.In fact, in most research investigations about UV degradation, often an intensity value per area is given, but without a complete spectrum.So, it remains unknown at which wavelength (and intensity) the UV stress is provided.Due to the typical broadband nature of a UV lamp as well as variations on the spectral output, a UV lamp does not provide stress at all wavelengths of the UV spectrum.Other light sources such as UV LEDs could also be used, but such LEDs provide non-uniform light, which brings other technical challenges to improve the illumination uniformity.Nevertheless, in this case, the illumination is also global, which may also induce undesired light reflections.
To perform the optical characterization of the photodetectors and to quantify the impact on their optical performance due to UV stress, the spectral response is measured before and after a specially devised UV stress step.Here, a laser-driven light source (Model EQ-99X LDLS, Energetiq (Wilmington, MA, USA), a monochromator (Hyperchromator, Mountain Photonics GmbH, Landsberg am Lech, Germany), and a non-solarizing optical fiber (Multimode Solarization Resistant Optical Fiber, 0.22 NA, Ø105 µm core, Thorlabs, Newton, NJ, USA) are used in a customized manner.A reference detector is also used before or after every wavelength sweep, as indicated in Figure 2.This allows for a precise estimation of the light power intensity brought via the optical fiber onto the photodetector.In addition, a second control detector continuously monitors the output at the light source to counter for possible fluctuations or variations due to aging and other factors.For every conventional wavelength sweep, a dark current correction is always performed.The monochromator receives the white light output of the laser-driven light source and splits the light into every wavelength between 200 nm and 1100 nm (with a smaller step possible of 1 nm), thanks to a special optical set up which comprises the usage of proper filters, mirrors, and gratings.This allows bringing, via the non-solarizing fiber, any desired wavelength only onto the photodetector of interest, avoiding the illumination (or stress) of other devices as well as undesired light reflections.
of other devices as well as undesired light reflections.
The spectral response measurements before (step A) and after (step C) the UV stress step (step B) correspond to standard monitoring of the photocurrent level as the wavelength is swept between 1100 nm and 300 nm, with 1 nm steps (2 nm FWHM).Such measurement before the UV stress (step A) includes a wavelength sweep on a reference calibrated diode (which allows the determination of the light power intensity) and a subsequent wavelength sweep on the photodetector of interest (wafer-level).
For the UV stress step (step B), it is ensured that the UV stress methodology is realized under well-defined energy conditions.The wavelength is fixed at different values between 400 nm and 200 nm, with a step as small as 1 nm (2 nm FWHM).The stress time at each wavelength is defined in such a way that the exact illumination conditions (fluence rate) are maintained.It is important to mention that the UV stress is applied locally and is delimited only by the optical fiber diameter (~105 nm), which means that not the entire photodetector has been stressed during the illumination.Also, to perform a reliable UV stress and subsequent quantification of the photodetector's degradation, the spectral response measurement after the UV stress (step C) must be performed in a reversed order with respect to the initial measurement (step A).This means that first, the wavelength sweep is performed on the photodiode of interest (wafer level), and subsequently, the wavelength sweep on the reference diode is performed.In this way, it is ensured that the electrical contact always remains the same during the entire process and that the optical fiber also does not change its position over the photodetector.These in situ pre/post optical photocurrent measurements, as well as the UV stress step, are summarized in detail in Figure 2. The spectral response measurements before (step A) and after (step C) the UV stress step (step B) correspond to standard monitoring of the photocurrent level as the wavelength is swept between 1100 nm and 300 nm, with 1 nm steps (2 nm FWHM).Such measurement before the UV stress (step A) includes a wavelength sweep on a reference calibrated diode (which allows the determination of the light power intensity) and a subsequent wavelength sweep on the photodetector of interest (wafer-level).
For the UV stress step (step B), it is ensured that the UV stress methodology is realized under well-defined energy conditions.The wavelength is fixed at different values between 400 nm and 200 nm, with a step as small as 1 nm (2 nm FWHM).The stress time at each wavelength is defined in such a way that the exact illumination conditions (fluence rate) are maintained.It is important to mention that the UV stress is applied locally and is delimited only by the optical fiber diameter (~105 nm), which means that not the entire photodetector has been stressed during the illumination.Also, to perform a reliable UV stress and subsequent quantification of the photodetector's degradation, the spectral response measurement after the UV stress (step C) must be performed in a reversed order with respect to the initial measurement (step A).This means that first, the wavelength sweep is performed on the photodiode of interest (wafer level), and subsequently, the wavelength sweep on the reference diode is performed.In this way, it is ensured that the electrical contact always remains the same during the entire process and that the optical fiber also does not change its position over the photodetector.These in situ pre/post optical photocurrent measurements, as well as the UV stress step, are summarized in detail in Figure 2.
UV Stress Methodology Applied on CMOS-Based Photodetectors
As mentioned previously, the UV stress is carried out at pre-defined fixed stressing wavelengths, between 400 nm and 200 nm (see Figure 3 for some wavelength examples between 200 nm and 350 nm).At such wavelength values, the photocurrent is monitored over a specific stress time (t stress = t final − t initial ).A dark current correction is also performed by subtracting the average dark current which is measured for about 10 s (t dark ) before the shutter is opened (without light exposure).The degradation of the spectral response (shown in % in Figure 3b) is calculated from the variation in photocurrent measured at the final measurement time (t final ) and the initial measurement time (t initial ).For all the photodiodes tested in this work, it is observed that shorter wavelengths (therefore more photon energetic) have a higher degradation potential.In the device case shown in Figure 3, such degradation reaches some level of saturation at around 90% close to 200 nm.Also, for all the photodiodes tested in this work, for wavelengths > 300 nm, there is no evident spectral response degradation noticed due to UV light stress.
UV Stress Methodology Applied on CMOS-Based Photodetectors
As mentioned previously, the UV stress is carried out at pre-defined fixed stressing wavelengths, between 400 nm and 200 nm (see Figure 3 for some wavelength examples between 200 nm and 350 nm).At such wavelength values, the photocurrent is monitored over a specific stress time (tstress = tfinal − tinitial).A dark current correction is also performed by subtracting the average dark current which is measured for about 10 s (tdark) before the shutter is opened (without light exposure).The degradation of the spectral response (shown in % in Figure 3b) is calculated from the variation in photocurrent measured at the final measurement time (tfinal) and the initial measurement time (tinitial).For all the photodiodes tested in this work, it is observed that shorter wavelengths (therefore more photon energetic) have a higher degradation potential.In the device case shown in Figure 3, such degradation reaches some level of saturation at around 90% close to 200 nm.Also, for all the photodiodes tested in this work, for wavelengths > 300 nm, there is no evident spectral response degradation noticed due to UV light stress.The light power per area reaching the photodetector varies with the stress wavelength used, as shown in Figure 4. Also, as shown in Figure 3, different stress levels (and therefore UV degradation) can be induced at different stress wavelengths.Therefore, to ensure a fair comparison and evaluation in terms of UV degradation of different photodetectors, as well as the stress induced at every stressing wavelength, it is necessary to properly control the dosage of UV light reaching the photodetector at every stress wavelength.The approach in this work is to expose the photodetectors to a wide range of stress wavelengths while always maintaining the same light illumination conditions.In this way, no UV wavelength is favored, and each stress wavelength will induce the same stress over the photodetector.
Therefore, we employ a constant light fluence over the light-exposed surface of the photodetector.This is known as the fluence rate or radiant exposure and is defined as the product of the power of the UV electromagnetic radiation incident on a surface per unit surface area and the duration of the light exposure or simply the product of the effective light irradiance and the UV stress time [20].For simplicity, and in accordance with some experimental measurement requirements, such as reasonable UV stress and measurement The light power per area reaching the photodetector varies with the stress wavelength used, as shown in Figure 4. Also, as shown in Figure 3, different stress levels (and therefore UV degradation) can be induced at different stress wavelengths.Therefore, to ensure a fair comparison and evaluation in terms of UV degradation of different photodetectors, as well as the stress induced at every stressing wavelength, it is necessary to properly control the dosage of UV light reaching the photodetector at every stress wavelength.The approach in this work is to expose the photodetectors to a wide range of stress wavelengths while always maintaining the same light illumination conditions.In this way, no UV wavelength is favored, and each stress wavelength will induce the same stress over the photodetector.
Therefore, we employ a constant light fluence over the light-exposed surface of the photodetector.This is known as the fluence rate or radiant exposure and is defined as the product of the power of the UV electromagnetic radiation incident on a surface per unit surface area and the duration of the light exposure or simply the product of the effective light irradiance and the UV stress time [20].For simplicity, and in accordance with some experimental measurement requirements, such as reasonable UV stress and measurement times, in this work, we defined a fluence rate of 3550.24W•min/m 2 .Consequently, the ex-posure times at every stress wavelength must be adjusted accordingly.Table 2 summarizes the stress wavelengths used during the UV stress procedure (step B in Figure 2), as well as their corresponding exposure times.This is also displayed in Figure 4. times, in this work, we defined a fluence rate of 3550.24W•min/m 2 .Consequently, the exposure times at every stress wavelength must be adjusted accordingly.Table 2 summarizes the stress wavelengths used during the UV stress procedure (step B in Figure 2), as well as their corresponding exposure times.This is also displayed in Figure 4.
Quantification of Optical Degradation Due to UV Light Exposure
Once an adequate fluence rate is defined, it is then required to re-normalize the estimations of UV degradation shown in Figure 3 with precise UV exposure times to guarantee the same UV light exposure conditions at each stress wavelength.The results of such normalization are shown in Figure 5, considering the case of the doa device (previously shown in Figure 3), as well as other examples of X-FAB's photodetectors (doe, doeher, and UVC) designed for operation in a variety of spectral ranges (see Table 1 for further details).In general, these results confirm that non-UV photodetectors should not be exposed to stress wavelengths below 300 nm.Degradation effects become more evident around 260 nm, growing exponentially as the photo energy increases and reaching almost a total loss of optical sensitivity at 200 nm.Due to some structural characteristics, some non-UV
Quantification of Optical Degradation Due to UV Light Exposure
Once an adequate fluence rate is defined, it is then required to re-normalize the estimations of UV degradation shown in Figure 3 with precise UV exposure times to guarantee the same UV light exposure conditions at each stress wavelength.The results of such normalization are shown in Figure 5, considering the case of the doa device (previously shown in Figure 3), as well as other examples of X-FAB's photodetectors (doe, doeher, and UVC) designed for operation in a variety of spectral ranges (see Table 1 for further details).In general, these results confirm that non-UV photodetectors should not be exposed to stress wavelengths below 300 nm.Degradation effects become more evident around 260 nm, growing exponentially as the photo energy increases and reaching almost a total loss of optical sensitivity at 200 nm.Due to some structural characteristics, some non-UV photodetectors (see doeher type in Figure 5) appear to be slightly more resistant to UV light exposure.Nevertheless, these devices also suffer almost a total loss of optical sensitivity as the stress wavelength approaches 200 nm.A remarkable highlight is the robustness of the UV photodetectors under UV stress conditions, where a spectral response degradation below 5% is observed for the more energetic wavelength applied in this work.Due to technical limitations, and to ensure good reliability of the experimental results presented here, 200 nm is the lower limit investigated in this work.For wavelengths below 200 nm, not only the stability of the light source system may be compromised, but also strong solarization effects are observed in the optical fiber.This is caused by the formation of absorbing centers due to the intense UV light flux.Therefore, it is no longer possible to properly bring the light into the photodetectors without avoiding undesired significant light loss.
Sensors 2024, 24, 1535 7 of 10 photodetectors (see doeher type in Figure 5) appear to be slightly more resistant to UV light exposure.Nevertheless, these devices also suffer almost a total loss of optical sensitivity as the stress wavelength approaches 200 nm.A remarkable highlight is the robustness of the UV photodetectors under UV stress conditions, where a spectral response degradation below 5% is observed for the more energetic wavelength applied in this work.Due to technical limitations, and to ensure good reliability of the experimental results presented here, 200 nm is the lower limit investigated in this work.For wavelengths below 200 nm, not only the stability of the light source system may be compromised, but also strong solarization effects are observed in the optical fiber.This is caused by the formation of absorbing centers due to the intense UV light flux.Therefore, it is no longer possible to properly bring the light into the photodetectors without avoiding undesired significant light loss.
Overall Impact of Optical Performance for an Extended Spectral Range
Once a robust methodology has been laid down to quantify the degradation of sensitivity suffered by CMOS-based photodetectors due to UV stress, it is of great interest to determine the impact that such degradation has on the overall optical performance of such devices.Particularly, when such an impact may possibly be irreversible (to the best of the author's knowledge, there is no clear evidence that a UV degradation mechanism can be reversed).The undesired or unproper exposure to UV light may adversely compromise the safe and successful outcome of the specific technological application in which a siliconbased photodetector may be implemented.Under UV light exposure, the optical performance of a photodetector is impacted, not only in the UV wavelength range but certainly in the entire spectral range.This would turn the device unresponsive, even in specific spectral ranges where such a device is expected to offer its maximum light sensing capabilities.
Figure 6 shows the example of three photodetectors which were submitted to the UV stress methodology proposed in Section 2.
Overall Impact of Optical Performance for an Extended Spectral Range
Once a robust methodology has been laid down to quantify the degradation of sensitivity suffered by CMOS-based photodetectors due to UV stress, it is of great interest to determine the impact that such degradation has on the overall optical performance of such devices.Particularly, when such an impact may possibly be irreversible (to the best of the author's knowledge, there is no clear evidence that a UV degradation mechanism can be reversed).The undesired or unproper exposure to UV light may adversely compromise the safe and successful outcome of the specific technological application in which a silicon-based photodetector may be implemented.Under UV light exposure, the optical performance of a photodetector is impacted, not only in the UV wavelength range but certainly in the entire spectral range.This would turn the device unresponsive, even in specific spectral ranges where such a device is expected to offer its maximum light sensing capabilities.Figure 6 shows the example of three photodetectors which were submitted to the UV stress methodology proposed in Section 2.3 and following the test sequence shown in Figure 2. Solid data represent conventional wavelength sweep measurements before UV stress (step A, Figure 2); meanwhile, dotted data represent conventional wavelength sweep measurements after UV stress (step C, Figure 2).Dashed back data indicate the ideal spectral response of a photodetector, which would be the ideal case when its quantum efficiency is equal to one.Each photodetector in Figure 6 was selected to showcase three main particular cases.The first case corresponds to a non-UV photodiode (doe), which is Sensors 2024, 24, 1535 8 of 10 severely affected under UV stress illumination conditions (especially for wavelengths below 260 nm), as shown also in Figure 5.After UV stress, the optical performance of the device is severely impacted along the entire spectral range measured (1100 nm to 300 nm).The strongest impact is clearly in the UV range, where a degradation of about 88% is observed.Note that the UV range for this device corresponds to wavelengths only from 400 nm to 300 nm, once such non-UV photodiodes are not specified to be operated below 300 nm.Nevertheless, the responsivity of the device is also affected beyond the UV range, about 23% in the Visible range and about 5% in the NIR range.The second case corresponds also to a non-UV photodiode, which is especially designed to be responsive for the Red and NIR spectral range (dob).This device is designed to be non-responsive in the UV spectral range.Therefore, no impact on its optical performance is expected to appear due to UV stress light illumination, as clearly shown in Figure 6.The third case considered here is a newly developed UV photodetector (UVC), which shows high UV response down to 200 nm.This is the lower wavelength operation limit specified for these kinds of photodetectors.Such devices also show a strong robustness under UV stress conditions, with a maximum responsivity degradation of 5-6% in the UV spectral range.Such degradation is almost unnoticeable in the Visible and NIR spectral ranges (below 3% in both cases).For a fair comparison, it should be noted that the result shown in Figure 6b does not imply that a photodetector like the dob is more robust to UV light conditions compared to the UVC photodetector.The low degradation for dob (in %) is because such a device is specially designed to be unresponsive for wavelengths below ~450 nm.Therefore, the photocurrent levels and variations before and after UV stress illumination conditions are extremely small in this case.
Sensors 2024, 24, 1535 8 of 10 ideal spectral response of a photodetector, which would be the ideal case when its quantum efficiency is equal to one.Each photodetector in Figure 6 was selected to showcase three main particular cases.The first case corresponds to a non-UV photodiode (doe), which is severely affected under UV stress illumination conditions (especially for wavelengths below 260 nm), as shown also in Figure 5.After UV stress, the optical performance of the device is severely impacted along the entire spectral range measured (1100 nm to 300 nm).The strongest impact is clearly in the UV range, where a degradation of about 88% is observed.Note that the UV range for this device corresponds to wavelengths only from 400 nm to 300 nm, once such non-UV photodiodes are not specified to be operated below 300 nm.Nevertheless, the responsivity of the device is also affected beyond the UV range, about 23% in the Visible range and about 5% in the NIR range.The second case corresponds also to a non-UV photodiode, which is especially designed to be responsive for the Red and NIR spectral range (dob).This device is designed to be non-responsive in the UV spectral range.Therefore, no impact on its optical performance is expected to appear due to UV stress light illumination, as clearly shown in Figure 6.The third case considered here is a newly developed UV photodetector (UVC), which shows high UV response down to 200 nm.This is the lower wavelength operation limit specified for these kinds of photodetectors.Such devices also show a strong robustness under UV stress conditions, with a maximum responsivity degradation of 5-6% in the UV spectral range.Such degradation is almost unnoticeable in the Visible and NIR spectral ranges (below 3% in both cases).For a fair comparison, it should be noted that the result shown in Figure 6b does not imply that a photodetector like the dob is more robust to UV light conditions compared to the UVC photodetector.The low degradation for dob (in %) is because such a device is specially designed to be unresponsive for wavelengths below ~450 nm.Therefore, the photocurrent levels and variations before and after UV stress illumination conditions are extremely small in this case.
(a) (b) As a further comparison of the improved optical performance shown by the UVC photodetector, a comparison is made with other previous X-FAB's technologies such as the XH018 technology-based UV detector.Figure 7 shows the clear improvement in As a further comparison of the improved optical performance shown by the UVC photodetector, a comparison is made with other previous X-FAB's technologies such as the XH018 technology-based UV detector.Figure 7 shows the clear improvement in optical performance in the UV range for the UVC detector (red solid data), in comparison to the XH018 UV detector (blue solid data).Such improvement starts to be more evident below 300 nm, and especially of interest is the performance around 260 nm, which opens up important possibilities for applications of UV sterilization.As a comparison, the case of a commer-Sensors 2024, 24, 1535 9 of 10 cially available discrete back side illuminated UV photodetector (gray dotted data) is also shown.Nevertheless, it must be pointed out that such discrete devices are based on a back side illumination technology, which is very different from the technology approach for the case of the UVC photodetector, based on X-FAB's CMOS XS018 technology.Also, the data for this case (at 25 • C) are obtained from available public documentation and such a device has not been directly measured with the measurement methodology approach proposed in this work.Therefore, this serves solely as a first general performance comparison.
optical performance in the UV range for the UVC detector (red solid data), in comparison to the XH018 UV detector (blue solid data).Such improvement starts to be more evident below 300 nm, and especially of interest is the performance around 260 nm, which opens up important possibilities for applications of UV sterilization.As a comparison, the case of a commercially available discrete back side illuminated UV photodetector (gray dotted data) is also shown.Nevertheless, it must be pointed out that such discrete devices are based on a back side illumination technology, which is very different from the technology approach for the case of the UVC photodetector, based on X-FAB's CMOS XS018 technology.Also, the data for this case (at 25 °C) are obtained from available public documentation and such a device has not been directly measured with the measurement methodology approach proposed in this work.Therefore, this serves solely as a first general performance comparison.
Conclusions
As expected, different photodetectors are impacted differently under UV stress illumination conditions, depending on the specific technical application for which they are designed.This also defines aspects such as photodiode structure, design or layout characteristics and even adequate operation conditions.Therefore, it becomes very important to develop a robust measurement methodology which allows a reliable characterization of CMOS-based photodetectors in terms of UV stress and degradation.The work presented here is the result of several optimization loops in terms of electrical and optical measurements methodologies which are easily applied to any kind of front-side illuminated photodetector in a fully automated manner.In the case of the UV photodetectors shown here, designs and process flavors have been optimized to achieve high spectral responsivity and robustness under UV light illumination conditions.Compared with X-FAB's technologies (XH018) and even other discrete backside illuminated devices, we have observed a significant spectral response improvement, especially for wavelengths around 260 nm, which is of great interest for UV-based sterilization applications.Our planned research on UV light sensing applications continues and will also include the influence of operation temperature on spectral response degradation performance due to UV stress conditions, as well as the impact on other important photodetector parameters such as dark current (leakage current) and capacitance.
Conclusions
As expected, different photodetectors are impacted differently under UV stress illumination conditions, depending on the specific technical application for which they are designed.This also defines aspects such as photodiode structure, design or layout characteristics and even adequate operation conditions.Therefore, it becomes very important to develop a robust measurement methodology which allows a reliable characterization of CMOS-based photodetectors in terms of UV stress and degradation.The work presented here is the result of several optimization loops in terms of electrical and optical measurements methodologies which are easily applied to any kind of front-side illuminated photodetector in a fully automated manner.In the case of the UV photodetectors shown here, designs and process flavors have been optimized to achieve high spectral responsivity and robustness under UV light illumination conditions.Compared with X-FAB's technologies (XH018) and even other discrete backside illuminated devices, we have observed a significant spectral response improvement, especially for wavelengths around 260 nm, which is of great interest for UV-based sterilization applications.Our planned research on UV light sensing applications continues and will also include the influence of operation temperature on spectral response degradation performance due to UV stress conditions, as well as the impact on other important photodetector parameters such as dark current (leakage current) and capacitance.
Patents
Four patents related to UV light sensing are pending to protect the IP provided by X-FAB regarding the development and realization of silicon-based photodetectors with an exceptionally high UV spectral response.
Figure 1 .
Figure 1.Conceptual cross-section diagram of the different CMOS-based photodiodes investigated in this work.In accordance with the naming shown in Table 1, top-left corresponds to doa, bottomleft corresponds to dob, top-right corresponds to doe, and bottom-right corresponds to doeher.
Figure 1 .
Figure 1.Conceptual cross-section diagram of the different CMOS-based photodiodes investigated in this work.In accordance with the naming shown in Table 1, top-left corresponds to doa, bottom-left corresponds to dob, top-right corresponds to doe, and bottom-right corresponds to doeher.
Figure 2 .
Figure 2. General scheme for photocurrent measurements before (A) and after (C) the UV stress methodology (B).Differently colored lines indicate different stress wavelengths.
Figure 2 .
Figure 2. General scheme for photocurrent measurements before (A) and after (C) the UV stress methodology (B).Differently colored lines indicate different stress wavelengths.
Figure 3 .
Figure 3. UV stress methodology for CMOS photodetectors.Data are shown for the case of module A, doa photodiode: (a) over-time photocurrent monitoring under UV stress, shown for some predefined wavelength values; (b) estimation of the absolute spectral response degradation suffered at each stressing wavelength.Dotted line serves merely as a guide-to-the-eye to follow the tendency of the UV degradation behavior.
Figure 3 .
Figure 3. UV stress methodology for CMOS photodetectors.Data are shown for the case of module A, doa photodiode: (a) over-time photocurrent monitoring under UV stress, shown for some pre-defined wavelength values; (b) estimation of the absolute spectral response degradation suffered at each stressing wavelength.Dotted line serves merely as a guide-to-the-eye to follow the tendency of the UV degradation behavior.
Figure 4 .
Figure 4. Light power per area dependency on wavelength and determination of UV stress exposure times for a constant fluence rate of 3550.24W•min/m 2 .The colored arrows indicate the corresponding proper axis of light power per area (black) and time (red).
Figure 4 .
Figure 4. Light power per area dependency on wavelength and determination of UV stress exposure times for a constant fluence rate of 3550.24W•min/m 2 .The colored arrows indicate the corresponding proper axis of light power per area (black) and time (red).
Figure 5 .
Figure 5. Quantification of the degradation induced due to UV light exposure, considering stress wavelength between 400 nm and 200 nm.Data are shown for some representative examples including photodetectors with a wide range of spectral applications.
3 and following the test sequence shown in Figure 2. Solid data represent conventional wavelength sweep measurements before UV stress (step A, Figure 2); meanwhile, dotted data represent conventional wavelength sweep measurements after UV stress (step C, Figure 2).Dashed back data indicate the
Figure 5 .
Figure 5. Quantification of the degradation induced due to UV light exposure, considering stress wavelength between 400 nm and 200 nm.Data are shown for some representative examples including photodetectors with a wide range of spectral applications.
Figure 6 .
Figure 6.Overall impact on the optical performance of CMOS-based photodetectors due to UV stress light exposure: (a) Photodetectors responsivity before (solid data) and after (dotted data) UV stress conditions.Dashed lines correspond to the ideal responsivity (Qeff = 1).(b) Estimation of the spectral response degradation suffered along different spectral ranges of interest.(*) For the case of the UVC photodetector the labels for spectral ranges should be read as full spectrum (200-1100 nm) and UV range (200-400 nm), once the experimental data collection starts at 200 nm instead of 300 nm.
Figure 6 .
Figure 6.Overall impact on the optical performance of CMOS-based photodetectors due to UV stress light exposure: (a) Photodetectors responsivity before (solid data) and after (dotted data) UV stress conditions.Dashed lines correspond to the ideal responsivity (Q eff = 1).(b) Estimation of the spectral response degradation suffered along different spectral ranges of interest.(*) For the case of the UVC photodetector the labels for spectral ranges should be read as full spectrum (200-1100 nm) and UV range (200-400 nm), once the experimental data collection starts at 200 nm instead of 300 nm.
Figure 7 .
Figure 7. Optical performance in the UV spectral range for the XS018 technology-based UVC photodetector, in comparison to other X-FAB's technologies (XH018) as well as other discrete back side illuminated commercially available photodetectors.
Figure 7 .
Figure 7. Optical performance in the UV spectral range for the XS018 technology-based UVC photodetector, in comparison to other X-FAB's technologies (XH018) as well as other discrete back side illuminated commercially available photodetectors.
Table 1 .
X-FAB's XS018 technology-based photodetectors investigated in this work.
Table 2 .
UV light exposure stress times applied at each stress wavelength.
1 This is precisely controlled by an automated optical shutter mechanism.
Table 2 .
UV light exposure stress times applied at each stress wavelength.
1This is precisely controlled by an automated optical shutter mechanism. | 9,906.4 | 2024-02-27T00:00:00.000 | [
"Engineering",
"Physics"
] |
Identification of six Cytospora species on Chinese chestnut in China
Abstract Chinese chestnut (Castanea mollissima) is an important crop tree species in China. In the present study, Cytospora specimens were collected from Chinese chestnut trees and identified using molecular data of combined ITS, LSU, ACT and RPB2 loci, as well as morphological features. As a result, two new Cytospora species and four new host records were confirmed, viz. C. kuanchengensissp. nov., C. xinglongensissp. nov., C. ceratospermopsis, C. leucostoma, C. myrtagena and C. schulzeri.
Cytospora (Cytosporaceae, Diaporthales) is a widely distributed genus worldwide, occurring on a broad range of hosts (Sarma and Hyde 2001, Yang et al. 2015, Lawrence et al. 2017, Norphanphoun et al. 2017, 2018, Wijayawardene et al. 2018, Jayawardena et al. 2019, Phookamsak et al. 2019, Fan et al. 2020. Some species can cause severe canker diseases on woody trees, such as Cytospora chrysosperma, which is a commom pathogen on the commercial tree genera, Populus and Salix (Fan et al. 2014b, Zhang et al. 2014, Kepley et al. 2015, Wang et al. 2015. Host affiliation was considerd as the main evidence for separating species in Cytospora before DNA sequences were used; however, morphology combined with phylogeny has revealed many cryptic species. For example, 28 Cytospora species were discovered from Eucalyptus from South Africa (Adams et al. 2005) and six from apple trees in Iran (Mehrabi et al. 2011), three from Chinese scholar tree (Fan et al. 2014a), four from walnut tree (Fan et al. 2015a), six from anti-desertification plants in China (Fan et al. 2015b) and two from grapevine in North America (Lawrence et al. 2017). Several recent studies discovered new species of Cytospora using multiphasic analyses (Lawrence et al. 2018, Norphanphoun et al. 2017, 2018, Senanayake et al. 2017, 2018, Pan et al. 2018, Zhang et al. 2019. During our investigations of chestnut disease in China from 2016 to 2019, diseased branches with typical Cytospora fruiting bodies were discovered and collected ( Fig. 1). In the present study, Cytospora species from Castanea mollissima were identified using a combined method of morphology and phylogeny.
Sample collections and isolations
Chinese chestnut has a wide distribution in China. In the present study, we surveyed Hebei, Shaanxi and Shandong Provinces from 2016 to 2019. Dead and dying branches with typical Cytospora fruiting bodies were collected and packed in paper bags. Isolates were obtained by removing the ascospores or conidial masses from the fruiting bodies on to clean PDA plates and incubating at 25 °C until spores germinated. Single germinated spores were transferred on to the new PDA plates and incubated at 25 °C in the dark. Specimens were deposited in the Museum of the Beijing Forestry University (BJFC) and axenic cultures are maintained in the China Forestry Culture Collection Centre (CFCC).
Morphological analysis
Observation and description of Cytospora species from Castanea mollissima was based on fruiting bodies formed on tree barks. Ascomata and conidiomata from tree barks were sec- tioned by hand using a double-edged blade and strctures were observed under a dissecting microscope. At least 10 conidiostromata/ascostromata, 10 asci and 50 conidia/ascospores were measured to calculate the mean size and standard deviation. Measurements are reported as maximum and minimum in parentheses and the range representing the mean plus and minus the standard deviation of the number of measurements is given in parentheses (Voglmayr et al. 2017). Microscopy photographs were captured with a Nikon Eclipse 80i compound microscope equipped with a Nikon digital sight DS-Ri2 high definition colour camera, using differential interference contrast illumination. Introduction of the new species, based on molecular data, follow the recommendations of Jeewon and Hyde (2016).
DNA extraction, PCR amplification and sequencing
Genomic DNA was extracted from young mycelium growing on PDA plates following Doyle and Doyle (1990). PCR amplifications were performed in a DNA Engine Pelti-er Thermal Cycler (PTC-200; Bio-Rad Laboratories, Hercules, CA, USA). The primer pair ITS1/ITS4 (White et al. 1990) was used to amplify the ITS region. The primer pair LR0R/LR5 (Vilgalys and Hester 1990) was used to amplify the LSU region. The primer pair ACT512F/ACT783R (Carbone and Kohn 1999) was used to amplify ACT gene. The primer pair dRPB2-5f/dRPB2-7r (Voglmayr et al. 2016) was used to amplify the RPB2 gene. The polymerase chain reaction (PCR) assay was conducted as described in Fan et al. (2020). PCR amplification products were assayed via electrophoresis in 2% agarose gels. DNA sequencing was performed using an ABI PRISM 3730XL DNA Analyzer with a BigDye Terminater Kit v.3.1 (Invitrogen, USA) at the Shanghai Invitrogen Biological Technology Company Limited (Beijing, China).
Phylogenetic analyses
The preliminary identities of the isolates sequenced were obtained by conducting a standard nucleotide BLAST search using ITS, LSU, ACT and RPB2. Then all Cytospora isolates were selected to conduct phylogenetic analyses, based on sequence datasets from Fan et al. (2020). Diaporthe vaccinia (CBS 160.32) in Diaporthaceae was selected as the outgroup taxon. All sequences were aligned using MAFFT v. 6 (Katoh and Toh 2010) and edited manually using MEGA v. 6 (Tamura et al. 2013). Phylogenetic analyses were performed using PAUP v. 4.0b10 for Maximum Parsimony (MP) analysis (Swofford 2003) and PhyML v. 3.0 for Maximum Likelihood (ML) analysis (Guindon et al. 2010).
MP analysis was run using a heuristic search option of 1000 search replicates with random-additions of sequences with a tree bisection and reconnection algorithm. Maxtrees were set to 5000, branches of zero length were collapsed and all equally parsimonious trees were saved. Other calculated parsimony scores were tree length (TL), consistency index (CI), retention index (RI) and rescaled consistency (RC). ML analysis was performed using a GTR site substitution model including a gamma-distributed rate heterogeneity and a proportion of invariant sites (Guindon et al. 2010). The branch support was evaluated using a bootstrapping method of 1000 replicates (Hillis and Bull 1993). Phylograms were shown using FigTree v. 1.4.3 (Rambaut 2016). Novel sequences, generated in the current study, were deposited in GenBank (Table 1) and the aligned matrices used for phylogenetic analyses in TreeBASE (accession number: S25160).
Culture characters. On PDA at 25 °C in darkness. Cultures are initially white, becoming olivaceous buff in centre after 7 d and finally olivaceous at 30 d. The colony is flat, thin with a felt and tight texture in centre. Pycnidia distributed irregularly on medium surface.
Culture characters. On PDA at 25 °C in darkness. Cultures are white. The colony is flat, thin with a uniform texture, lacking aerial mycelium. Pycnidia distributed uniformly on medium surface.
Discussion
In the present study, an important fruit tree species, Castanea mollissima was investigated and Cytospora canker was found as a commom disease in plantations in Hebei Province. Identification was conducted based on 13 isolates from fruiting bodies using both morphological and molecular methods. As a result, six Cytospora species were confirmed. Cytospora kuanchengensis and C. xinglongensis are introduced as new species, C. ceratospermopsis, C. leucostoma, C. myrtagena and C. schulzeri are firstly reported on Castanea mollissima.
These six chestnut Cytospora species can be easily distinguished using DNA sequences of single ITS sequence or combined sequences of ITS, LSU, ACT and RPB2 ( Fig. 2; Suppl. material 1: Fig. S1). In addition, colonies on PDA and MEA of these six species are also different (Fig. 9). Cytospora xinglongensis never produce fruiting bodies on PDA or MEA, while the other five species form conidiomata in one month (Fig. 9). Morphologically, Cytospora xinglongensis has obviously longer conidia than others. However, the conidial dimension can hardly distinguish C. ceratospermopsis, C. kuanchengensis, C. leucostoma, C. myrtagena and C. schulzeri.
Dar and Rai reported Cytospora diseases on Castanea sativa in India, causing perennial cankers on stems and branches (Dar and Rai 2014). The Cytospora isolates were identified mainly based on ITS sequence data, which were introduced as a new species named Cytospora castaneae (wrongly wrriten as Cytospora castanae in the original paper) (Dar and Rai 2014). However, further study is required to confirm the species position within the genus, including detailed morphogical features and sequences of high quality. Cytospora canker is a common disease on chestnut trees, but there are few formal reports. In China, this disease is known amongst phytopathologists, but no-one conducted accurate identifications. Hence, this paper is the first formal report of Cytospora chestnut canker in China. From our investigations of chestnut diseases in China, Cytospora species are closely associated with canker diseases in chestnut plantations. In most cases, they infect twigs or small branches, causing necrotic lesions (Fig. 1A), finially forming fruiting bodies on dead tissues (Fig. 1D). However, Cytospora myrtagena was discovered on stems of a 15-year-old chestnut tree, causing typical Cytospora canker symptoms. More works should be conducted on the newly emerging pathogens from several aspects.
As the species concept of Cytospora has been improved a lot by using molecular data (Yang et al. 2015, Lawrence et al. 2017, Norphanphoun et al. 2017, 2018, Jaya-wardena et al. 2019, Fan et al. 2020), many Cytospora canker diseases and new species have been discovered and reported in recent years. Further studies are, however, now required to confirm their pathogenicity. | 2,098.6 | 2020-01-13T00:00:00.000 | [
"Biology"
] |