text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
Holistic Privacy-Preserving Identity Management System for the Internet of Things
Security and privacy concerns are becoming an important barrier for large scale adoption and deployment of the Internet ofThings. To address this issue, the identity management system defined herein provides a novel holistic and privacy-preserving solution aiming to cope with heterogeneous scenarios that requires both traditional online access control and authentication, along with claim-based approach for M2M (machine to machine) interactions required in IoT. It combines a cryptographic approach for claim-based authentication using the Idemix anonymous credential system, together with classic IdM mechanisms by relying on the FIWARE IdM (Keyrock). This symbiosis endows the IdM system with advanced features such as privacy-preserving, minimal disclosure, zero-knowledge proofs, unlikability, confidentiality, pseudonymity, strong authentication, user consent, and offline M2M transactions. The IdM system has been specially tailored for the Internet of Things bearing in mind the management of both users’ and smart objects’ identity. Moreover, the IdM system has been successfully implemented, deployed, and tested in the scope of SocIoTal European research project.
Introduction
Nowadays, a plethora of embedded and mobile devices can be accessed ubiquitously in different scenarios, such as transport systems, critical infrastructures, or smart cities.In order to deal with these applications, the Internet of Things (IoT) [1] is based on the notion of global connectivity to generate, process, and exchange large amounts of sensitive and critical data, which makes them appealing for attackers.In IoT, billions of interconnected "things" distributed across remote areas serve as a baseline for providing innovative services, which can be accessed not only through the Cloud, but also in a Machine to Machine (M2M) fashion [2].M2M is considered as a key aspect for a broad adoption of the IoT, since M2M enables a direct communication among such smart objects [3] in an autonomous way.In such a distributed and dynamic environment, devices and services are exposed to additional threats that can compromise their data and, ultimately, the personal and private identity of the involved end users.Consequently, there is a strong need for not only adapting identity management (IdM) mechanisms to deal with user's identities, as it has been studied so far, but also allowing the management of smart objects' identities.In this sense, smart objects should be autonomous and independent entities with their own attributes and identity management mechanisms, which will allow them to preserve its owner's privacy during their operation.
Traditional privacy-preserving identity management solutions allow end users to manage their personal data for accessing certain services, by providing user consent mechanisms.Indeed, minimizing the disclosure of Personally Identifiable Information (PII) [4] is a basic requirement to realize the Privacy by Design (PbD) notions [5].However, in IoT, a huge amount of smart objects are enabled to interact with each other, so an explicit user consent for each interaction is not feasible, due to scalability reasons.Furthermore, such smart objects could lack user interface, and consequently, human interaction should be maintained at the minimum.Additionally, while technologies such as the Security Assertion Markup Language (SAML) or OpenID [6] allow a selective disclosure of PII, these approaches are based on the presence of a Trusted Third Party (TTP) that other components.Furthermore, in order to demonstrate its applicability in different scenarios, the proposed solution has been designed, developed, and deployed in the scope of the European project SocIoTal (http://sociotal.eu/).To the best of our knowledge, the proposed IdM system is the first approach that aims to provide a holistic privacy-preserving identity management system for IoT, by providing a unified solution to heterogeneous scenarios in which privacy must be ensured.
The rest of the paper is structured as follows.Section 2 aims to provide a description of IdM related works for the IoT and the most predominant IdM systems that are used currently.Section 3 presents some of the main challenges related to the extension of identity for smart objects, as well as an overview of the proposed approach.Section 4 describes the SocIoTal security and privacy framework in which the IdM system's functionality is framed, and a detailed description of the proposed system's actors and interactions is given in Section 5. Section 6 provides a set of experimental results by considering different functionality of the IdM system, and Section 7 compares qualitatively such system with other more established IdM approaches.Finally, Section 8 concludes the papers and provides an outlook of our future work in this area.
Background and Related Work
2.1.Related Work.The realization of a real Internet of Things requires the adoption of identity management models that can be adapted to identify any smart object connected to the Internet.Furthermore, such model must take into account IoT's inherent security and privacy requirements, providing a high degree of scalability (to deal with crowded scenarios) and flexibility (by using advanced schemes beyond the use of simple identifiers or IP addresses).These issues have attracted a growing interest from academia [14][15][16] during last years.In this sense, while there are already widely deployed identity management approaches in Web or Cloud environments, their applicability in IoT scenarios has not been demonstrated.Indeed, traditional identity management systems, such as SAML [17] or OpenID [6] based approaches, require a typical IdP (acting as a TTP) entity, which is responsible for managing and generating on-demand access tokens to enable a secure and trusted communication between interacting entities.However, this TTP hampers the adoption of security and privacy-preserving mechanisms for M2M scenarios that are required in IoT, since the IdP is required to be online to complete the transaction between the parties.
Moreover, traditional IdM systems do not provide means to their users to deal with the data minimization principle, which is a core aspect of the recent General Data Protection Regulation (GDPR) (Article 5. Principles relating to processing of personal data) (http://ec.europa.eu/justice/data-protection/reform/files/regulation oj en.pdf).This basic privacy principle is directly related to the partial identity notion.Some EU projects, such as SWIFT [18], already delved into this concept.In particular, they proposed an instantiation by defining an infrastructure based on the use of SAML and XACML technologies.Other approaches [19] use X.509 certificates as credentials to model the real identities and SAML security tokens to encode partial identities to be used afterward to prove the possession of certain attributes.Thus, users have control over the attributes which are disclosed to each service.The usage of a certificate avoids the involvement of the IdP in each communication, and therefore, different transactions cannot be linked by this entity.However, the usage of certificates does not preserve anonymity since entities are unequivocally identified in the certificate that is entirely disclosed to the other party.
In contrast to these approaches, Anonymous Credential Systems (ACS) [20], such as Idemix [7] or U-Prove [21], allow users to present cryptographic proofs, instead of the whole credential, proving the possession of certain attributes or claims.These systems enable a selective disclosure of identity attributes to achieve a privacy-preserving identity management approach.Indeed, a user or entity can prove a specific set of properties associated with a subset of identity attributes, without disclosing the content of such attributes itself.Even though some important initiatives, such as PrimeLife (http://primelife.ercim.eu/)or ABC4Trust [22] have analyzed the use of ACS-based and privacy-preserving identity management systems, the IoT paradigm demands adapted solutions to ensure a seamless development of an IP-based approach [23].Unlike in current Internet scenarios, common IoT use cases are based on a huge amount of heterogeneous smart objects interacting, and consequently, communications protocols and implementation need to be adapted.Current solutions must evolve to enable an IdM approach for IoT without the requirement of an online TTP, while security and privacy (including data minimization and unlikability principles) are met.
In this direction, [24] proposes a set of trust-enhancing security functional components for the IoT resolution infrastructure, in order to provide basic security and privacy aspects in IoT communications.Furthermore, [25] describes a PKI authentication scheme and discusses different practical solutions for realizing users' anonymity, delving into the complexity related to the design of privacy-preserving solutions based on such schemes.Moreover, [26] presents a distributed target-driven anonymous authentication protocol for IoT applications.This proposal is based on a multishow credentials system, which is used by users to authenticate anonymously.In addition, in [27] provides an authorization framework based on SAML assertions that are obtained from decisions made by an engine based on XACML.These assertions are used to get access to an IoT device through the use of symmetric key cryptography.However, such a work lacks evaluation details.
A common aspect of these proposals is that they are based on the use of traditional symmetric and public key cryptography to identify users and smart objects.In contrast, our approach is based on the concept of virtual/partial identity to identify any entity that is intended to participate in the IoT ecosystem.In this way, users and smart objects are enabled to use a subset of their whole identity for accessing to IoT services.The instantiation of such concept has been materialized through the use of Idemix as the most ACS representative example and deployed on the European platform FIWARE, as will be described in Section 5. [6] is an open standard that enables users with a single identity for being used across disparate Internet services, without the need to hold and remember multiple identifiers and login information.Users can be authenticated by certain cooperating sites (i.e., Relying Parties) using a third-party service, reducing the need of Service Providers (SPs) to employ their own systems.It uses HTTP cookies to support SSO across different SPs.The OpenID identifier is in the form of a URI that is issued by OpenID providers (IdP).Major companies act as OpenID providers for their customers and provide them with an OpenID identifier, but OpenID accounts are not limited to commercial IdPs; independent IdPs are also available.OpenID requires end users to provide the same OpenID identifier in every SP, so they can be tracked.It means that users have to create and manage different OpenID accounts to preserve their privacy.
Background on Traditional IdM Systems. OpenID
SAML [17] is an OASIS standard for conveying security information across entities.It is foremost employed to transfer user authentication or authorization information within assertions from one communication partner to another.It also defines four XML-based mechanisms: assertions, protocols, bindings, and profiles.SAML is currently mostly used for SSO and attribute-based authorization.In these cases, authentication assertions are used to transfer authentication information from the authenticating entity (e.g., an IdP) to the requesting entity (usually a SP) to avoid a new authentication of the user at the SP.For attributebased authorization, attribute assertions are used to send identity attributes information from an attribute authority, so authorization decisions can be made by using such attributes.However, in both cases, the SP redirects the user to be authenticated by the IdP, and the user has no control over which PII from the IdP is disclosed to the SP.
OAuth [28] is an authorization protocol that allows third parties applications or clients to access resources owned by a user (hosted in trusted applications) without the need to give or know the user's credentials.That is, third-party applications can access resources owned by the user, but these applications do not know the authentication credentials.In this way, through the OAuth 2.0 authorization framework, an application can obtain limited access to a resource, either on behalf of a resource owner by orchestrating an approval interaction between the resource owner and the HTTP service or by allowing the third-party application to obtain access on its own behalf.OAuth 2.0 introduces an authorization layer and separates the role of the client from that of the resource owner.In this way, the client requests access to resources which are managed by the resource owner and hosted by the resource server and issued a different set of credentials compared to those of the resource owner.
Nowadays, these technologies are used as the basis to build common and widely deployed IdM systems, such as Shibboleth or Keyrock.
Shibboleth [29] is an open source project that provides a federated identity solution aimed at allowing sites to make authorization decisions to protected online resources in a privacy-preserving way.As other approaches, Shibboleth defines the IdP and SP actors, so users are redirected to the IdP of the organization where the user belongs to, in order to get an authentication assertion, which is in turn used to gain access to a resource in the SP.It provides a HTTP-based SSO approach, in which each organization can use a different authentication mechanism.Shibboleth is a well-known IdM system that is commonly used by academic organizations to deal with identity federation aspects.However, currently it is not under consideration as a candidate IdM approach to be deployed in IoT scenarios.
Keyrock IdM (https://catalogue.fiware.org/enablers/identity-management-keyrock) is an open source implementation of the IdM system defined in FIWARE, which is a European middleware platform that promotes the development and global deployment of applications for the Future Internet.FIWARE delivers reference architecture, as well as the specification and implementation of different open interfaces called Generic Enablers (GEs).Among the FIWARE GEs, the identity management GE relies on standard protocols, such as SAML and OAuth, to provide authentication and authorization features, which allows managing users' access to networks, services, and applications.The IdM GE is also responsible for the user profile management, as well as SSO and identity federation across different service domains.Keyrock relies on the OpenStack IdM implementation called Keystone (https://docs.openstack.org/developer/keystone/),extending Keystone by providing an implementation of the SCIM standard.SCIM is intended to reduce the cost and complexity of user management operations through a common user schema, extension model, and REST API with a rich, but simple set of operations.
These features from Keyrock have been leveraged by our proposed IdM system.In particular, the Keyrock implementation has been used as a repository of users and smart objects, so security credentials can be associated with the SCIM identity attributes defined in it.In fact, Keyrock has been used as a standalone IdM system and at the same time as a complementary approach for the definition of our privacypreserving Idemix-based IdM system.This integrative solution is intended to provide an open and holistic IdM system for the IoT by using a reference platform at European level, which fosters its application and adoption in different IoT environments.[20] allow an IdP to issue a credential to a user, to be used afterwards, to authenticate against a SP.The credential may contain user's identity attributes (e.g., address and age) or be related to a smart object (e.g., hardware properties or manufacturer).Within the credential, the user or smart object can later prove to a third party that he possesses a credential containing a given attribute without revealing any other information stored in the credential.This proving process is usually carried out through the use of a cryptographic proof, which can be derived from the credential.
Background on Anonymous Credential Systems. Anonymous Credentials Systems (ACS)
U-Prove [21] is an anonymous credential system developed by Microsoft for claims-based identity management.It provides strong security, scalability, and privacy and allows for claims to be efficiently tied to the use of smart cards.A U-Prove token is a digitally signed container of attribute information of any type.Each token is issued to a Prover generated by an Issuer during the issuance protocol.When using a U-Prove token, the Prover applies the token's private key to a message to create a presentation proof for the Verifier.Such proof represents a proof of possession of the private key as well as a digital signature of the Prover on the message.
Idemix [7] is an ACS developed by IBM that allows users to minimize the personal data they have to reveal in electronic communications.Idemix defines the roles of Issuer, Recipient, Prover, and Verifier.Credential issuance is carried out between an Issuer and a Recipient (user), following a protocol which ends up with the latter having a credential.This credential consists of a set of attribute values, as well as cryptographic information that allows the credential's owner to create a proof of possession.In this way, when creating a proof, the user (acting as a Prover) can prove the possession of a certain credential to a SP (acting as a Verifier).Several zero-knowledge proofs are performed to convince the SP about the possession of such credential by making use of the CL signature scheme [30].Furthermore, Idemix technology also allows proving the possession of several credentials at the same time and stating different relationships among the attributes contained in such credentials.
Although the main purpose of U-Prove is similar to Idemix, and selective disclosure can be done similarly, there are different cryptographic differences between both approaches.Unlike U-Prove, users in Idemix need only one secret key (known as the master key) corresponding to all credentials.Whereas U-Prove credentials are revealed every time they are used, an Idemix credential is never disclosed to a verifier.This means that unlinkability in U-Prove can be achieved only using different credentials, whereas Idemix supports multishow unlikability against the SP.Regarding performance comparison, U-Prove is more efficient than Idemix when it comes to selective disclose of attributes.The multishow unlinkability property of the Idemix technology leads to a reduction of his performance when it is compared to U-Prove.The issuer signature is partly randomized and the remaining part is proved by using a zero-knowledge proof to achieve unlinkability.This process is not carried out in U-Prove that only requires computation to hide the undisclosed attributes.
Idemix technology has been widely used in different European projects, such as ABC4Trust or PrimeLife.Due to its privacy properties and extensive acceptance, our proposed IdM system is based on Idemix to enable privacy-preserving access to IoT services.Specifically, as an instantiation of these services, Idemix has been employed to obtain ondemand access credentials, specifically by using DCapBAC and CP-ABE approaches.The design, implementation, and deployment have been carried out under the umbrella of the SocIoTal project and will be explained in Section 5.
IoT Identities
Unlike current Web or Cloud environments, identity management must cope with unique challenges in the IoT paradigm.On the one hand, any smart object that is connected to the Internet must be identifiable to be able for interacting with other entities.This identification scheme must go beyond the use of simple networking identifiers or IP address, but also through the use of specific features or attributes that identify the object, such as its manufacturer, owner, or hardware properties.Moreover, smart objects should be able to identify themselves through temporal identities to be used according to the context, such as its location.On the other hand, smart objects could have to act on behalf of their owner.In this sense, delegation schemes are important to authorize devices to perform actions on behalf of the user making use of a certain partial identity according to the context.
The identity management approach should be distributed in order to authenticate smart objects among each other but, at the same time, centralized enough to be able to establish a hierarchical approach where identities credentials can be issued and authenticated securely enabling a global digital trust environment.At the same time, users and objects can constitute groups or communities with some interest in common.In these scenarios, they could also act with different partial identities according to the context, as well as to use different enforced authorization rules for each of the group they belong to.These issues are exacerbated when privacy concerns are also to be addressed, in order to enable a privacy-preserving but accountable IdM system for the IoT.
3.1.Object Naming, Addressing, and Discovery.An essential feature for the IoT is the inclusion of an infrastructure to make smart objects addressable, named, and finally discovered.Unlike in small-scale application silos making up an Intranet of Things [31], applications and services cannot be configured with respect to a fixed set of services.This is mainly due to the inherent dynamism of the IoT ecosystem resulting from the mobility of smart object, as well as the changing availability of services due to constraints of the underlying resources and devices.Therefore, a real need exists for a suitable infrastructure to be in place that allows addressing, naming, and discovery of IoT services: (i) IoT addressing: an IoT address refers to an identifier of a smart object and/or its virtual representation.This feature entails the assignment and management of addresses/identifiers for smart objects.(ii) IoT naming: it refers to mechanisms and techniques for assigning names to objects and supporting their resolution/mapping to IoT addresses.IoT naming provides the means to identify smart objects through a name resolution mechanism according to a naming system.Additionally, names can be organized according to taxonomies or classifications in a hierarchical fashion and, consequently, to be used for groups of smart objects.(iii) IoT discovery: it refers to the process of locating and retrieving IoT resources in the scope of a large and complex space of smart objects.
The three previous concepts are closely related, given that the adherence to certain choices and solutions (e.g., standards, mechanisms, algorithms, and tools) for one area (e.g., choice of addresses/identifiers) can directly affect the respective choices and solutions in the other areas (e.g., naming system used).As a consequence, the consideration of solutions for one area cannot be seen as isolated from the others.
Although several proposals have been introduced for all these areas, there are still significant challenges to be addressed.On the one hand, regarding identification, a globally unique identification solution, is needed, supporting the requirement for unique namespaces and the description of the wide diversity of smart objects.
On the other hand, for discovery issues, it is necessary to address some requirements regarding the visibility of smart objects' properties across different granularities, as well as the addition of semantic and contextual information for the discovery process.This information would enable to cope with mobility issues during the process by providing a more expressive and rich discovery process of IoT resources.
In this context, the Handle System [32] is getting attraction for IoT naming, resolution, and discovery.The unique identifier (UID) given to a smart object can be represented by a Handle ID (HID).In Handle, the identifier consists of a prefix and a suffix as depicted in Figure 1.The prefix is a unique integer managed globally by the Handle Global Registry, whereas the suffix represents a string associated with the local domain where the smart object is deployed.The suffix can be based on an URI, URN, Electronic Products Code (EPC), Digital Object Identifier (DOI), MAC address, IMEI, HashValue, and so on.Indeed, the suffix can be a combination of them, for instance, a URI that contains a DOI or a MAC address as part of the URI.Having HIDs as identifiers allows IPv6 address resolution.Indeed, Handle provides a mechanism for linking multiple network addresses with HID for the same smart object.Thus, during the bootstrapping or setup phase, each smart object can be assigned a HID algorithmically.For instance, in a smart building the local prefix of the HID could be assigned according to hierarchical URIs for each floor, room, and smart object.Our IdM allows representing the UID (for users and smart objects) as any string, so that the HID is a suitable option that can be used as identifier in our IdM system.
Privacy-Preserving Attribute-Based Credentials for IoT.
Traditional identity management systems usually assign a unique number to identify the identities.Although it makes easier the management, this unique number also leads to a lack of privacy for user that can be traced easily.The service provider (SP) could save all the tokens and user credentials that are presented to it and then link them altogether.The user's public keys as well as the issuer signature are left behind at each service provider that is accessed, leaving a digital trail that can be then used to link its behavior and make up user's profiles such as dossiers for each individual with his or movements, habits, or preferences.
As already mentioned, ACS try to solve these issues.A credential can be defined as a container of attributes that is employed as a certificate to convince others about the facts attested by it.This credential is issued by a TTP (i.e., an issuer) to the Recipient entity (either a user or smart object).When using an ACS, the Recipient can employ their credentials to derive partial identities as proofs of possessing a particular subset of the encoded attributes in the credential.These partial identities are, in fact, cryptographic proofs that can be presented to a SP or directly to another smart object (acting as a verifier) maintaining the credential itself undisclosed.Therefore, data minimization is provided for the Recipient to access a service, while the SP cannot impersonate it, since the credential is not disclosed.A credential is defined by means of a specification structure that specifies the list of attribute types to be encoded in the credential.In IoT, this structure should be defined preferable in JSON rather than in XML.There is a separation of the credential structure that is the public part and the credential data which is private to the Recipient.
Figure 2 shows a graphical representation of these concepts.Our IdM system manages the user's attributes as it is done in any traditional IdM.In addition, the system can manage directly smart objects' identities and their attributes.At the same time, the IdM system acts as a Credential Issuer, in charge of generating the Idemix credentials for both the users and their associated smart objects, according to the attributes hold in the IdM.Each user has associated one or more attributes among those defined in the SCIM standard (e.g., ID, name, address, domain, and email), whereas a smart object has its particular set of attributes, such as ID, vendor, manufacturer, date, and model.The association between the smart object and its owner is done by means of the attribute owner maintained in the smart object.It should be noted that the ID attribute is represented by the Handle ID (HID), which was previously described.In this way, when a user or smart object tries to get access to a particular IoT service, he makes use of its partial identity to demonstrate the required attributes, as will be explained in Section 5.
Security and Privacy Framework Overview
The proposed IdM system has been designed under an IoT security and privacy framework based on the Architectural Reference Model (ARM) [13], which was proposed by the IoT-A project.In this sense, such framework represents a simplified view of our security and privacy architecture for the lifecycle of smart objects (ARMY) [12] that is intended to provide a comprehensive view of security and privacy implications in IoT.ARMY represents an instantiation of the Functional View of the Reference Architecture (RA), and an extension through the addition of two new functional components: the Context Manager and the Group Manager.Both elements have been included in order to complement the functionality that is already provided by other components within the security functional group.The framework has been designed, developed, and deployed in the scope of the SocIoTal project.Figure 3 shows the main components of the security framework, which are explained below.
The Authentication component is intended to ensure only legitimate users and smart objects are able to access a specific IoT service.In the context of the SocIoTal project, this component's functionality has been instantiated through the use of certificate-based public key cryptography to be used in scenarios and use cases in which privacy is not addressed.In particular, it has been based on the use of Elliptic Curve Cryptography (ECC) and integrated with Transport Layer Security (TLS) [33], as well as DTLS [34] in case of communications based on the Constrained Application Protocol (CoAP) [35].
The Authorization component is responsible for making and enforcing authorization decisions by integrating different access control technologies and approaches.On the one hand, it follows a policy-based scheme to define access control policies that are evaluated to come up with a decision.On the other hand, it follows a token-based approach generating authorization tokens according to the decision.In particular, such functionality has been instantiated through the use of the XACML standard to define authorization policies and the DCapBAC approach [9], which is proposed as a flexible and lightweight to be used in different heterogeneous IoT scenarios.
The Group Manager component aims to address security and privacy concerns in common crowded IoT scenarios, where data needs to be shared or outsourced to groups of users and smart objects.This component's functionality has been realized through the use of the CP-ABE scheme [11] together with signatures schemes to guarantee data integrity.In this way, data is encrypted under a policy of attributes, while secret keys are associated with sets of identity attributes.Only those users or smart objects that satisfy the encryption policy will be able to access such data.Consequently, a data producer can exert full control over how the information is disseminated to other entities, while a consumer's identity can be intuitively reflected by a certain secret key.
The Key Exchange and Management (KEM) component is focused on assisting interacting entities with key management tasks (e.g., to establish a security context) to enable secure communications among them.Additional functionality includes generating and delivering cryptographic keys, including symmetric and public keys, as well as CP-ABE keys that are used by the Group Manager component The Context Manager component is one of the key components in the framework.It aims to realize an adaptive security and privacy approach for the IoT.In particular, it is intended to maintain contextual data so other components can adapt their behavior accordingly.For example, a user or smart object can be identified with a different partial identity according to context, or authorization decisions can change dynamically depending on the context detected by a smart object.
The Trust and Reputation component aims to define suitable trust and reputation models that are able to be integrated in the IoT ecosystem, in order to complement the cryptographic approach that is used by other security components.The set of trust scores obtained from such models are then used by other security components to provide an additional fine-grained level of security and privacy.Trust scores are also used to manage and share data within groups of users or smart objects.This component's functionality has been specifically instantiated by using a multidimensional trust model based on fuzzy logic that was also integrated with the DCapBAC approach for authorization purposes [36].
The Identity Management (IdM) component is responsible for managing the identity of users and smart objects.In particular, it aims to complement the functionality that is provided through the Authentication component, by enabling a scalable and privacy-preserving identity management mechanism based on the partial identity concept.Its functionality has been realized by using the Idemix technology to endow users and smart objects with mechanisms to ensure anonymity, which is done mainly issuing pseudonyms and proof of credentials for data minimization.The IdM component is the cornerstone of the SocIoTal security and privacy framework and the core element of the proposed IdM system.It represents the integration of Idemix technology with a reference European IoT platform, FIWARE, to manage users and smart objects identity to access IoT services in a privacy-preserving way.Furthermore, it has been integrated with DCapBAC and CP-ABE approach, so entities are enabled to get authorization tokens and group keys while their privacy is still preserved.A more detailed description of the proposed IdM system is provided in the next section.
Holistic Identity Management System
In traditional IdM scenarios, organizations offer services that are managed and protected by SPs.In such scenarios, users' authentication is done by means of IdPs that assert identities to end users.Then, users present such assertions (e.g., based on SAML or OAuth) to obtain access to a resource at the SP, meaning that a trust relationships between SP and IdP is required.Unlike this classical web approach, in the IoT, the user role can be also played by a smart object, and the SP can be represented directly by another IoT device holding the resource being accessed.
The following subsections define the proposed holistic IdM system, including a detailed description of required deployment entities and interactions to accomplish such functionality.
IdM Actors.
Figure 4 shows the main deployment entities involved in the different processes and interactions that are defined in our IdM system.These deployment entities instantiate and implement the IdM functional component of the SocIoTal security and privacy framework.As shown, in addition to more typical user management operations, the proposed IdM system aims to provide functionality related to authentication, authorization, and attribute management, as well as credential and cryptographic keys provisioning.The following subsections define each of these deployment elements including their features and interactions.
Subject.
The subject represents a user or smart object that requires to access an IoT service in a privacy-preserving way, by ensuring the data minimization principle.The subject can get Idemix credentials from different issuers, so it is able to decide which particular information from which credential should be presented to a certain target service (acting as a verifier).In traditional web scenarios, the subject represents a user, but in IoT it represents a broader concept that refers to any kind of smart object.A subject plays the Idemix Recipient role, since it asks for credentials from an issuer.When the subject obtains the credential, it can derive cryptographic proofs to prove certain attributes (or attributes statements) against an IoT service, playing the role of prover.In this case, the IoT service acts as verifier of the attributes and statements that are included in such a proof.
IdM Service.
As already mentioned, our IdM system integrates FIWARE Keyrock IdM (supporting classic IdM features, such as SSO) and provides additional functionality by including new privacy-preserving features, which are built on top of Idemix technology.Towards this end, unlike in Keyrock, our IdM system supports additional attributes to manage smart objects' identity that are not covered in the SCIM model.While the IdM Service plays the traditional role of IdP and Attribute provider, it also acts as an Idemix Issuer.The issuer is in charge of issuing Idemix credentials to subjects, after checking the correctness and veracity of the information provided during a previous authentication process.In this sense, the IdM support different ways of authentication.
The Idemix credential structure is composed of the following attributes, which are taken from the SCIM standard.In Idemix, an attribute is defined by a tuple that consists of a name, value, and type = (, V, ), where name can be Id, userName, domainId, name, active, email, nickName, country, locality, postalCode, streetAddress, department, organization, and phone and type can be integer, string, date, and enumerations, among others.
It should be noted that current Keyrock implementation does not support the management of all these attributes, so our IdM system extends Keyrock implementation.In addition, since SCIM does not include attributes to manage smart objects, our approach extends the SCIM data model to deal with smart object attributes, such as owner (i.e., the ID of the device's owner), vendor, manufacturer, manufacturing date, and model.
Authorization
Service.In our IdM system, authorization decisions are not directly made by the SP or target entity that is accessed.Instead, this task is delegated to a separate Authorization Service that can manage the access control to different entities.The Authorization Service is in charge of generating DCapBAC tokens, that is, authorization assertions containing the access rights that a subject entity has over a resource, which is hosted by a target entity.These tokens are obtained by the subject to access directly to the target, which only needs to validate the token, since the decision was previously made when it was generated.
The Authorization Service provides a Policy Decision Point (PDP) to evaluate access requests against authorization policies that are previously defined in the Web User Environment.It is based on the XACML standard, so access control policies are handled and evaluated upon an access request by the PDP to make the corresponding authorization decision.Such decisions are based on the identity attributes of the requesting entity.These attributes can be proved directly by this entity during the token request through the use of Idemix, or they could be obtained from the IdM Service in case of Keyrock authentication.
Target.
The target represents the IoT service that is accessed by a subject, and it acts as an Idemix Verifier.It protects access to an IoT service data by requiring the subject to satisfy certain identity attributes in their credentials.To this aim, the verifier presents a policy to the subject indicating which data from the credential needs to be disclosed, in order to obtain access to the IoT service.Indeed, it includes which credentials and attributes from which issuers are required or even which conditions have to be fulfilled by the attributes.Based on this policy, the subject generates a proof from its Idemix credential containing the required attributes as well as the cryptographic evidence to be sent to the verifier.
In IoT, unlike in the current Internet, the required process to discover which particular attributes are needed could require autonomous IoT operation, without user intervention.Devices should be given flexible and scalable mechanisms to discover these attributes, and the CoRe Link Format [37] can be used for this purpose.In this way, the subject can perform a multicast CoAP request to/.well-known/core and the Resource Directory can answer with information about the devices resources.The information format is compliant with the Link Format [38], but extending it in order to describe the attributes that are going to be needed to access to such a resource.
Web User Environment.
Keyrock is split into two components, the web-based front-end (Horizon) and the RESTful back-end (Keystone), which is the open source OpenStack IdM.Our IdM system does not relies on Horizon, as SocIoTal project already defines its own Web User Environment.This component is intended to provide graphical interfaces to manage user attributes.In addition, the Web User Environment acts as Policy Administration Point (PAP), allowing users to define and manage XACML authorization policies, which are used by the Authorization Service to make authorization decisions.
Key Manager Service.
The Key Manager Service is responsible for generating cryptographic keys associated with legitimate and authenticated users or smart objects.In particular, this has been used as a service for the generation and delivery of CP-ABE keys, in order to obtain confidentiality when the information needs to be outsourced to groups of entities.In CP-ABE, data is encrypted under a combination or policy of identity attributes, while a CP-ABE key is associated with a set of attributes.Therefore, similar to the Authorization Service, when a subject tries to obtain a CP-ABE key, the Key Manager Service can obtain the subject's attributes from the IdM Service or directly from the cryptographic proof that is contained in the request, in case of using Idemix for a privacy-preserving authentication.In this way, users and smart objects can obtain a CP-ABE key associated with a partial identity, which is demonstrated by Idemix technology.
Revocation Authority.
Finally, although it is not shown in Figure 4 for the sake of clarity, the IdM system is also equipped with a revocation service that is responsible for revoking issued credential, preventing their further usage to generate a presentation token in case the attributes are no longer valid.This revocation entity is also needed to deal with credential termination at the end of the identity lifecycle.In conventional credential systems, revocation status of a credential is verified just by simple lookup of a revealed credential specific identifier in a list, for example, Certificate Revocation List (CRL) [39].In Idemix the revocation is done based on accumulators which uses the membership proof for white-listing.The revocation authority accumulates all valid revocations into a single value and publishes that value.Then, users can prove that their credential is still valid, with a zero-knowledge proof that demonstrates that the credential's revocation is contained in the published whitelist accumulator.For further information about revocation, the reader is referred to Idemix library specification [40].
IdM Interactions.
After describing the main actors of our IdM system, this subsection aims to provide a detailed explanation of the required interactions among such entities, in order to realize the proposed functionality.In this way, Figure 4 provides an overview of the main processes (by using different colors), which are explained below.
Basic Authentication and Authorization Process.
As already mentioned, our IdM system supports different authentication mechanisms, including the simplest (but still used) password-based approach together with the use of authentication bearer tokens.In this case, any subject in possession of a bearer token can employ it to obtain access for the IdM Service or a target resource, without the need to prove the possession of a cryptographic key.In order to avoid the abuse or misuse of tokens, the tokens can be sent securely using the DTLS protocol.To perform this basic authentication mechanism, the IdM Service contacts the Keyrock IdM, which in turn relies on Keytone.
Furthermore, the IdM system allows a subject to authenticate against the IdM Service by using certificate-based public key cryptography.This feature is provided by the IdM Service that verifies the certificate against the one that is already stored for the entity registered in Keyrock.In case the subject is not registered previously in the IdM Service, it can be done through either the API or the Web User Environment.
Once authenticated, the IdM Service provides the subject with a Keystone authentication bearer token that can be used to perform actions against the IdM Service (such as user management) or access to other target services by attaching the bearer token as a header in the HTTP request.In such a case, the target must contact the IdM Service to verify that the token provided by the subject is valid 5.2.2.Credential Issuance Process.In addition to the traditional password-based authentication mechanism, our IdM system enables a privacy-preserving authentication approach, which is built on top of Idemix technology.Idemix is based on two main protocols: credential issuance and proving.The former is used by the subject to obtain a credential, while the latter is employed to prove the possession of a certain credential when accessing a target's resource in a privacypreserving way.According to the required interactions that are shown in Figure 4, below we provide a description of them: (1) The subject requests a credential to the IdM Service, which acts as an Idemix Issuer.For this purpose, it presents its certificated to identify again the issuer.
(2) The IdM Service contacts the Keyrock IdM in order to validate the attributes for which the subject applies for a credential.It should be noted that Keyrock is responsible for validating the authenticity of subject's attributes prior the credential issuance.
(3) To obtain a credential, the subject needs a credential structure definition.This credential structure, which defines the attribute structure of the credential, is provided by the IdM Service, based on the attributes defined in Section 5.1.2and managed by the Keyrock IdM.Once both parties have finished the initialization and they share the same credential definition, the issuer starts the Idemix proving protocol and computes a random value called nonce (to prove freshness) that is sent to the subject.Then, the subject computes a cryptographic message (also known as crypto token) according to the credential structure.The issuance message with the token is sent to the issuer, which creates a cryptographic part of the credential signing the attributes with his secret key.It also creates a proof of correctness.Furthermore, the issuer can save the pseudonym and the context for accountability purposes.
(4) Finally, the issuer replies to the subject by sending a cryptographic message with the proof of correctness and the attributes signature.The subject verifies this cryptographic material, and based on this message, it generates and stores the credential.
Once the subject has the Idemix credential, it can use such a credential to derive partial identities for a privacypreserving authentication mechanism when accessing other IoT services.
M2M Claim-Based Authentication.
The M2M authentication relies on the Idemix proving protocol, which is the main operation provided by the IdM system.Figure 4 shows the involved interactions that are required between a subject, which wants to prove the possession of certain attributes of its credential, when accessing a target's resource.Such interactions are explained as follows: (1) The subject makes a request to an IoT service, which is hosted in a target entity (acting as a verifier).Then, this entity requires the subject to present a certain cryptographic proof to demonstrate the possession of a certain credential or specific identity attributes.
(2) The verifier computes a random value called nonce that is sent to the subject.Based on context, the Identity Selector component of the subject makes use of the Credential Manager module to select the most appropriate partial identity or pseudonym to be used against the verifier among the ones it already has available in its database.Optionally, if supported by the underlying cryptographic engine, in case the subject does know the proof specification that is required by the IoT service, the verifier can send a presentation policy stating which data must be disclosed by the subject to gain access to the requested IoT service.In other words, the presentation policy defines which credentials and attributes are required and/or which conditions have to be fulfilled by the attributes.The subject (acting as prover) defines the proof specification from the selected credential(s) to be used against the verifier.This proof includes the nonce and attributes, as well as statements about these attributes.Then, the prover builds a cryptographic object as proof and sends it along with the specification to the verifier.
(3) The verifier validates the incoming proof specification using the cryptographic proof.It computes the verifying protocols to check that the attributes statements and pseudonyms are valid.The verifier, depending on the validation result, can send an affirmative or negative response to the subject.
Furthermore, Idemix allows an entity to generate pseudonyms that demonstrate the possession of its master secret during the proving protocol explained above, enhancing unlikability.The proof generated by the subject proves to the target the knowledge of the secret associated with a pseudonym, as well as the fact that such a pseudonym is based on the subject's master secret key.That is, ← 1 ℎ where and ℎ are public common group parameters, is the master secret key, and is a randomization exponent.Then, the subject (as a prover) computes a challenge using a hash function, by considering the context and the computed proofs, that is, the CL proof and the pseudonym proof, as an input.Specifically, the prover computes random -values of the form fl , it get a challenge fl (⋅ ⋅ ⋅ ‖ ), and finally it computes a response -values of the form fl − to prove knowledge of .Then, the target (as a verifier) computes the -values that are hashed and compared against .It should be noticed that although it is not required in our proposal, Idemix also allows proving more complex kind of proofs, which deal with inequalities and logic operators (AND, OR, and NOT) over the attributes contained in a credential.
Nonetheless, there are some cases in which the verifier requires from the prover the presentation of nonanonymizable attributes, such as national ID number, which undermines user's privacy against the verifier.In this particular case, the usage of pseudonyms is useless.In any case, it is up to the subject to consent this disclosure with a partial identity (Idemix proof) that reveals such attributes, ensuring the minimal disclosure principle.
Using the IdM System for Privacy-Preserving Credential
Provisioning.The main goal of the proposed IdM system is to enable a privacy-preserving IdM approach for the IoT, in which users and smart objects can access to IoT services while their privacy is preserved.As an instantiation of an IoT service, the proposed IdM system has been used to allow an entity to obtain security credentials in a privacypreserving way, by disclosing only a subset of its identity attributes.On the one hand, DCapBAC AuthZ tokens, which are generated from XACML decisions, can be requested by using Idemix to prove some identity attributes that are used for the authorization decision process.On the other hand, CP-ABE keys can be generated according to such proof and consequently associated with specific partial identities.
DCapBAC Integration for Access Control. Our IdM has been integrated with our capability-based access control
DCapBac [9] that has been specially tailored for M2M interactions in IoT.In a basic scenario, the capability token can be directly validated by the target device that authenticates the subject with public key cryptography, without the intervention of the IdM Service.During the token generation process, the Authorization Service includes the subject's public key (associated with its X.509 certificate) in the token.Then, during the access request, the subject is forced to provide the token and its certificate to the target, which authenticates the subject and checks that its certificate's public key matches the one included in the token.This feature strengthens security, avoiding impersonation, which is one of the main disadvantages of bearer tokens that are currently employed by OAuth-based approaches.
Nonetheless, since the capability token contains the subject's public key, its privacy is not preserved.To cope with this issue, our IdM system allows users and smart objects to ask the Authorization Service for capability tokens in a privacy-preserving way, by providing its Idemix credentials instead of using their X.509 certificates.The Idemix proving process enables the subject to demonstrate that its identity is linked to the pseudonym included in the capability token.Figure 5 shows the three main stages required to carry out the anonymous authorization process based on the usage of the IdM system and DCapBAC.
In the first stage, the subject performs the Idemix Credential Issuance (messages (1)-( 6)) protocol against the Idemix Issuer that was explained in Section 5.2.2.To this aim, the IdM Service authenticates the subject in the Keyrock IdM using the subject X.509 certificate.Once authenticated, the IdM verifies that the attributes included in the credential match the attributes held in Keyrock IdM for such an entity.
If successful, the Idemix Issuer generates the cryptographic credential based on a particular structure that follows the attributes available in Keyrock.Notice that the issuance protocol requires a common understanding of certain group of parameters, as well as the credential specification structure, which are publicly available in the issuer.
Then, in the AuthZ Token Request, the subject performs the Idemix proving protocol against the Authorization Service, thereby demonstrating the necessary identity attributes required to get capability tokens in a privacy-preserving way.For this purpose, the verifier submodule of the Authorization Service verifies the cryptographic proof generated by the subject.The proof aims to prove the possession of a specific combination of identity attributes, along with the cryptographic proof of the pseudonym (steps ( 8)-( 12)).Then, the Token Manager initiates the authorization stage to check whether the subject's attributes fulfill the XACML policy for a given action and resource.Thus, it performs a XACML request to the Policy Decision Point (PDP), which evaluates the policies and makes the authorization decision.If the PDP reports a Permit decision, the Token Manager generates the capability token, which includes the rights to access the target along with the pseudonym for the requesting subject.Although it is not shown in the diagram for the sake of clarity, at this stage, the authorization decision can be leveraged by our Trust-aware Access Control for IoT (TacIoT) [36] solution, which takes into account a multidimensional approach to quantify trust values about the subject reliability prior making the final access control decision.
Afterward, during the access request stage (message 18), the subject uses the capability token previously obtained to access to a particular resource hosted by the target.Thus, it generates an initial request with a session key SKsession, which, in turn, is encrypted with the public key PK of the target; that is, Then, the target gets the pseudonym included in the token and challenges the subject to prove, by means of Idemix, that it is the entity linked to the token.Namely, the subject starts the Idemix proving protocol to prove that it has a valid credential issued by the issuer, ensuring the data minimization principle.As in the Idemix issuance process, the proving protocol requires a common agreement between the parties, about certain system parameters.The cryptographic proof generated by the subject is derived from its credential and contains a pseudonym proof and the CL signature.Such a proof (and its specification) is sent (message (22)) to the target for verification.In case the pseudonym is valid and linked to the subject, it is authenticated, and therefore, the access is granted.This last interaction is, in turn, (2)
CP-ABE Integration for Confidential Data Outsourcing.
Following a similar approach, our IdM system has been integrated with the CP-ABE scheme, so users and smart objects are enabled to apply for CP-ABE keys in a privacy-preserving way.In this way, the proposed IdM system allows entities to demonstrate their attributes against the Key Manager Service in order to obtain necessary cryptographic material associated with the attributes.Under CP-ABE scheme [11], a piece of data can be encrypted under a policy of attributes such that only those target entities satisfying the policy (and therefore holding the associated identity attributes and keys) are able to decrypt the shared information.In our approach, CP-ABE keys are managed by a Trusted Third Party (TTP), called key manager (KEM).A basic CP-ABE scenario includes three main entities: a data producer, one or more data consumers, and the Key Manager Service that is responsible for generating the appropriate keys for all participants.The role of these components is determined by their participation in the algorithms of a CP-ABE scheme [11], which defines four main protocols: of the system, as well as a master secret key that is used to generate CP-ABE keys.(ii) Key Generation (Key Manager Service): after an entity proves that it has a certain set of attributes by using the Idemix protocol, the algorithm takes as input the master key and these attributes.The result is a CP-ABE key for decryption.(iii) Encrypt (producer): it takes the message, public parameters and an encryption policy representing a combination of attributes that must be satisfied to decrypt the message.The result of this algorithm is a ciphertext containing such policy.(iv) Decrypt (consumer): the decryption algorithm takes the public parameters, the ciphertext, and a CP-ABE key as an input.If the attribute set (associated with the CP-ABE key) satisfies the policy, the consumer will be able to decrypt the ciphertext.
Figure 6 depicts a publish/subscribe scenario for IoT based on CP-ABE.As, in addition to smart objects acting as producers and consumers, as well as the Key Manager Service, this communication model is based on the use of a broker in charge of receiving data from producers to be published to consumers.The broker functionality may be provided by another smart object or deployed on a central data platform.
Firstly, during an offline stage, the involved producers and consumers entities obtain the cryptographic material required for a secure information exchange based on CP-ABE.To this end, they authenticate themselves against the Key Manager Services (acting as an Idemix Verifier) and demonstrate their attributes by using the Idemix proving protocol (messages (2)-( 6)).Then, the Key Manager Service distributes the CP-ABE public parameters (common for all participants), as well as the corresponding CP-ABE private key, which is associated with the proved attributes (messages ( 7)-( 8)).After the initialization, subscriber entities perform the subscription to a certain topic (message ( 9)).
Secondly, during an online stage, a producer decides to publish certain information, by sending a message that consisting of a ciphertext that is the result of the execution of encrypt algorithm (message (10)-( 11)).Thus, only those entities (i.e., a group) with CP-ABE keys satisfying the CP-ABE policy used to encrypt data will be able to decrypt the information (message (13)).Notice that the broker entity cannot decrypt the information unless it is endowed also with the proper keys.In addition, the broker is a transparent 3rd party that cannot trace user's behavior as users do not need to be authenticated against the broker, avoiding linkability.The authentication-authorization is done directly by the fact of being able to decrypt the data.And the key manager is only consulted once to get the keys.
Implementation and Performance
After having described the actors and interactions related to the proposed IdM system, this section aims to provide a set of experimental results associated with the main processes of our approach.
6.1.Implementation.Our IdM system is composed of seven main components, which have been integrated and evaluated as part of our testbed.Below, we provide a description of each component and its relationship with entities that are defined in the previous section.
(i) IdM-Android-Client: it is a new implemented Android app that allows obtaining Idemix credentials from the issuer server.It interacts with the verifier, which can validate the partial identity (Idemix proof) derived from the Idemix credential.A user-friendly interface is given to the user to select the attributes that she wants to add to each partial identity.The implementation uses the already existing Idemix Java library (Idemix version 2.3.0)ported from Java to Android by [41].It is worth mentioning that the testbed uses Android SDK 1.7 and the RSA secret key length is established to 1024 bits (i.e., 80-bits security level).Some screen shots of our app are depicted in Figure 7.It also allows obtaining a capability token from the Authorization Service using the partial identity.(ii) Issuer Server: it is a web application implemented with Java servlets and XML-RPC that allows generating Idemix credentials for clients.The credentials are generated according to the user (or smart object) profile in Keyrock.Communications are done by HTTPS.The client must be authenticated against the issuer using a valid certificate or he can be authenticated by the Keyrock server.
(iii) Verifier-Server: it is a web application, also implemented with Java servlets and XML-RPC, which is able to validate partial identities (implementing verifier functionality) presented by the client application.
(iv) Authorization Service: it is a web application that allows users to obtain capability tokens using their partial identities.In other words, it allows authenticating and demonstrating their attributes by means of Idemix proofs of having a valid credential issued by the issuer.Additionally, this entity acts as a HTTP client requesting authorization decisions to the PDP service.It also plays the role of SocIoTal-Verifier-Server.Furthermore, the service also incorporates a web-based PAP service, where users are enabled to define access control policies for those smart objects.
(v) IdM Keyrock Client API: the SocIoTal IdM Keyrock Client Java library provides a basic API for identity management by implementing a client to interact with the Keyrock server.To carry out such communication, the SCIM 2.0 and Identity API v3 interfaces provided by this IdM system are used.
(vi) Keyrock server: this is a modified deployment of the FIWARE Keyrock IdM system.In particular, the Keyrock server has been extended to also cope with smart objects' identities, in addition to users.It has been also extended to implement attributes of the SCIM standard [8] that was not included in Keyrock.
(vii) Web User Environment: it is a front-end dashboard that integrates various server side SocIoTal components (as client APIs) and provides a userfriendly GUI for end users.It interacts with Keyrock (viii) Key Manager Service: it is a HTTPS server implemented in Java servlets accepting requests for CP-ABE keys generation.These keys, as already mentioned, are associated with the attributes stored in the Keyrock server.It also plays the role of SocIoTal-Verifier-Server.A client library has been also developed to interact with the Key Manager Service.
The aforementioned software modules are open source, and they are available in the SocIoTal repository in GitHub (https://github.com/sociotal),where a wiki with a complete documentation of the software is also provided.
Performance Evaluation.
The testbed consists on evaluating the performance of the IdM system when a subject tries to get DCapBAC tokens and CP-ABE keys (Section 5.3) by using different authentication methods.Specifically, the subject makes use of the Android application to get such credentials through classical password-based authentication (Section 5.2.1), as well as the privacy-preserving Idemixbased approach (Section 5.2.2).
Figure 8 shows two graphs that sum up the performance obtained in the testbed.The charts show the time required by the subject to obtain DCapBac tokens and CP-ABE keys.On the one hand, Figure 8(a) shows the time required when the privacy-preserving authentication method is employed (i.e., using the Idemix-enabled Android application).On the other hand, Figure 8(b) shows the time required to obtain the credentials, but relying on the traditional authentication method (i.e., by means of basic online authentication using our API that interacts with the Keyrock server).
The -axis in both charts represents the number of attributes that are disclosed to the target, that is, the Authorization Service that generates the DCapBAC token or the Key Manager Service that generates the CP-ABE keys.In the left chart, the Idemix authentication operations are depicted by the vertical bars with -axis in the left, while the times for the generation of DCapBAC token and CP-ABE keys are depicted in line charts with -axis on the right.Notice that in this case the time scale is different, since the credentials generation takes more time.
The first series identifies the time required by the Android app to build the Idemix CL proof buildCL, which is the heaviest task in the proving protocol.Secondly, the verifyProof operation shows the time required by the server side (either Authorization Service or Key Manager Service) to verify the Idemix cryptographic proof, including the time required to verify CL signature.
Notice that, in the Idemix authentication case, all the proofs contain the 10 attributes to comply with the credential structure, and the different lies in the amount of those 10 attributes disclosed.The time required to build and validate such a proof depends on the number of attributes revealed.As more attributes are revealed, that is, not encrypted in the proof, Idemix library requires less time to handle it.
As it can be seen in Figure 8(b), the traditional authentication is faster than Idemix authentication in Figure 8(a).Despite the fact that traditional authentication requires an additional request to Keyrock server to obtain the user attributes, the whole time required to perform the authentication plus obtaining the required attributes plus validating the AuthZ token is less than the time required in the proving protocol carried out through Idemix.Notice that the traditional authentication (in Figure 8(b)) is fully delegated to the server (where are taken the times), while in Idemix case the cryptography operations of builCL and buildProof are performed in the device.In both charts, the operations "Generate CP-ABE Key" and "Generate DCapBAC Token" are performed in the server.
The times required to obtain the authorization decision and generate the capability token are usually steady across the different tests.It should be noted that the results do not include request or network delays.Besides, the time required to generate the DCapBAC token includes two main tasks: (1) the time required by the PDP to make the decision-around 65% of the time-and (2) the time required to build up the token.
In addition, the testbed has evaluated the memory consumption required by each of the processes described previously.The memory results are shown in Figure 9. Again, two different -axes are used in the figures as the credentials generation operations depicted in line charts required considerably more memory to accomplish their task.As it can be seen, the memory consumption trends behave in most of the cases according to computation times evaluated in Figure 8.In addition, the obtained performance results follow a linear evolution trend, which indicates the feasibility of the proposal.
Security Analysis and IdMs Comparison
This section provides a security analysis as well as a comparison of our IdM system against the most relevant IdM technologies in the current state of the art.The comparison analyzes different features that an ideal IdM solution should meet and compares them with our proposal.Most of these features have been taken from previous IdM analysis such as [42,43].Table 1 summarizes this comparative, which analyzes SAML, OpenID, OAuth, U-Prove, Idemix, and Keyrock IdM against the IdM system presented in this paper.In the table, the symbol + means that the feature is partially satisfied by the analyzed IdM technology, the symbol ++ means that the feature is fully supported, and the symbol − indicates that such a functionality is not supported.
An IdM system must guarantee that the information is shared and accessible only by appropriate entities, securing communications to ensure that confidentiality and integrity principles are met.This feature is fulfilled by all the analyzed IdM solutions.Nonetheless, in SAML, OpenID, and OAuth the user is redirected to his home domain for authentication, so that there is a higher risk of impersonation in case the user is redirected to a malicious IdP.Besides, these solutions do not directly support user control of identity data, meaning that the IdP is responsible for dealing with this aspect.Moreover, Idemix and U-Prove allow end users to manage their attributes by themselves without IdP intervention.
Single Sign-On (SSO) property allows entities to have a unique account when accessing across different service providers, by using a single authentication assertion for the session.Although Anonymous Credential Systems (ACS), such as Idemix and U-Prove, do not make use of these Mobile Information Systems assertions, this feature can be easily achieved thanks to the usage of presentation protocol processes.Nonetheless, they are less efficient since they require a higher computation time to evaluate the cryptographic proofs for each service request.Usability property is also only partially fulfilled by ACS, which usually requires advanced skills for its usage and maintenance.
The IdM system should provide transparency so that privacy-relevant data processing can be understood and reconstructed anytime.In this sense mechanisms for logging and auditing are needed in order to trace event occurring in the system, ensuring that any entity cannot deny having accomplished any action.In this regard, in SAML, OpenID, OAuth, and Keyrock, the IdP server can easily log the transactions and end users access.However, in a pure ACS like Idemix, the auditing feature is more difficult to fulfill because of their unlinkability properties.In our IdM system, auditing can be performed when the access is done using traditional authentication through Keyrock, but the feature cannot be easily achieved when the entity interacts directly with the target entity by means of Idemix.
Nowadays, authentication based on shared secrets (e.g., username-password) is being replaced by stronger security mechanisms, such as authentication based on digital certificates and biometric or attribute-based credentials.In SAML, OpenID, OAuth, and Keyrock, the authentication is left to the IdP, whereas ACS already provide strong authentication per se, by linking attributes to cryptographic keys.SAML, OpenID, and OAuth can be used to enable identity federation so that different IdP can establish trust between each other.Indeed, Shibboleth exploits SAML to provide a complete federated identity management solution.Keystone, Keyrock, and in turn our IdM support this feature as they can be configured to install and work with Shibboleth.However, federation capabilities in ACS are still not properly addressed yet.
Intervenability is defined as "the property that intervention is possible concerning all ongoing or planned privacyrelevant data processing."It is implemented by mechanisms such as the end user consent feature, which is achieved by most of the analyzed solutions (except SAML).It can be done either by redirecting and asking users to the IdP or by allowing users to specify the particular set of attributes employed in a partial identity, as it is done in ACS.Indeed, end users in ACS are given full control of their identity data, while in traditional IdMs, the IdP is in charge of managing the identity information.
Minimal disclosure of information refers to the ability to selectively reveal as less information as possible in the credentials presented to a target service.U-Prove, Idemix, and our IdM system support this feature since a user or smart objects is enabled to select which particular attributes want to disclose.In traditional IdM systems, such as OpenID, the minimal disclosure feature can be achieved due to the user consent functionality.However, in SAML, this functionality cannot be easily achieved since, once authenticated, the SP is able to request any attribute allowed by the IdP.
However, ACS are currently difficult to manage, as final users are supposed to have advanced technical knowledge in order to manage their identities and interact with users.In this sense, our IdM system provides user-friendly apps and tools over Idemix (as can be shown in Figure 7), which aid users to manage their partial identities against different IoT services.
Users might lose their right to carry a particular credential, or their attributes may change over time, so that the IdM system should be able to revoke user's existing attributes.At the same time, SPs should be able to check, at anytime, whether a subject's attribute is still valid or not.This feature is easily achieved in traditional IdM systems, since the IdP (or Attribute Provider) maintaining the attributes can stop releasing or validating a revoked attribute or credential.However, in ACS, the user might try to use invalid credential after revocation, so that a third revocation entity is needed.For instance, in Idemix the revocation entity publishes a white-listing that is employed by a revocation scheme based on accumulators.In addition, Idemix also supports shortterm claims meaning that they have to be renovated so often.
The nonreputation property guarantees that an end user or entity cannot deny having made an action.As the assertions can be digitally signed and the anonymous credentials are also tight to the users this security principle is met by all the IdM solutions analyzed.
One of the advantages provided by ACS and our IdM system and not supported by traditional IdM approaches is that they allow performing offline M2M authentication.It can be done since the verifier can validate the cryptographic proof without the need of an online TTP in charge of verifying the correctness and validity of the assertions.This feature is of utmost importance in M2M IoT scenarios, where devices are not necessarily connected to the Internet.
In SAML, the issued assertions allow using pseudonyms to preserve the end user's privacy.In other words, target accessed entities do not know the end user's real identity.Nonetheless, the IdP could trace the user's access, since the IdP has to generate a different assertion each time they need to access to the target.In contrast, Idemix and, in turn, our IdM system support unlinkable pseudonyms.Unlinkability is defined as "the property that privacy-relevant data cannot be linked across domains that are constituted by a common purpose and context."In Idemix, pseudonyms are scopedexclusive, so that it can generate a cryptographically unique pseudonym for given "scope string," limiting, for instance, the number of pseudonyms for a particular domain.
Zero-knowledge proof means that the user can create a cryptographic proof to convince the verifier that he has or "knows" a valid credential obtained from the issuer, while the user can repeat such a proof as many time as he wants without linking the individuals proofs and without revealing the secret information.ACS allow making claims about an attribute without revealing the attribute and prove inequalities; for instance, Idemix allows proving that the value of an attribute is within a certain range or certain attributes have the same value without revealing such a value.
The user can prove the possession of different credentials obtained from different issuers in one single proof, meaning that different attributes can be aggregated in one partial identity.This attribute aggregation feature is not supported
Figure 3 :
Figure 3: Security IoT framework based on ARM.
Figure 6 :
Figure 6: IdM in an Attribute-Based Encryption scenario.
Figure 7 :
Figure 7: Screenshots of the IdM Android app.
Table 1 :
Identity management systems comparison. | 16,444.4 | 2017-01-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Smart Agriculture Cloud Using AI Based Techniques
: This research proposes a generic smart cloud-based system in order to accommodate multiple scenarios where agriculture farms using Internet of Things (IoTs) need to be monitored remotely. The real-time and stored data are analyzed by specialists and farmers. The cloud acts as a central digital data store where information is collected from diverse sources in huge volumes and variety, such as audio, video, image, text, and digital maps. Artificial Intelligence (AI) based machine learning models such as Support Vector Machine (SVM), which is one of many classification types, are used to accurately classify the data. The classified data are assigned to the virtual machines where these data are processed and finally available to the end-users via underlying datacenters. This processed form of digital information is then used by the farmers to improve their farming skills and to update them as pre-disaster recovery for smart agri-food. Furthermore, it will provide general and specific information about international markets relating to their crops. This proposed system discovers the feasibility of the developed digital agri-farm using IoT-based cloud and provides solutions to problems. Overall, the approach works well and achieved performance efficiency in terms of execution time by 14%, throughput time by 5%, overhead time by 9%, and energy efficiency by 13.2% in the presence of competing smart farming baselines.
Introduction
The introduction should briefly place the study in a broad context and highlight why it is important. Food requirements are heavily dependent on agriculture and its quality of production. There is a large number of countries in the world where agriculture makes a large contribution to their economies. The populous countries such as Pakistan, India, China, etc., have a major portion of their land devoted to agriculture which not only fulfills their own needs but also provide handsome foreign return [1]. Currently, the integration of agriculture with internet technologies has contributed value addition to traditional farming. Similarly, the advent of the Internet of Things (IoT) in the agriculture sector has changed traditional farming to smart farming. Using cloud computing service models for storing and accessing agriculture data from multiple sources has greatly impacted the performance in the offline and real-time environment [2,3]. This performance in the cloud is measured by multiple factors such as response time, execution time, overhead time, migration time, optimization time, and energy consumption by computing resources [4]. Similarly, smart farming, with the help of IoTs, is using powerful machine learning models to access and disseminate the data in an efficient manner from and to the end-users such as farmers, advisors, and researchers. There is a strong need to process real-time information accurately so that an appropriate decision can be made in a timely manner by the end-users. This is only possible with the inclusion of strong machine learning models, such as the Support Vector Machine (SVM), in the cloud environment for efficient information processing. Some machine learning algorithms such as Cat Swarm Optimization (CSO) are more suitable for small population sizes with a minimum number of iterations and, hence, do not provide good solutions in a situation where the processing involves a large number of complex tasks. This drawback eventually leads CSO to fall into local optimum, which takes more iterations in finding solution spaces and, hence, renders CSO computationally complex. Therefore, we used SVM by introducing a new grouping phase process that takes the data files into four groups (audio, video, image, and text) taken from the SVM, keeping in view the properties associated with each group.
Cloud computing is a type of internet computing in which users may access programs via a browser while the application (together with the data) is installed and kept on a server. This is a whole new type of computing that allows thousands of farmers all over the world to access information without having to download or install anything on their computers or mobile phones. It enables mobility in the same way that most networks allow users to connect even if they do not have their computers, allowing them to work from anywhere in the world as long as they have an internet connection and access to a computer. Sensor technologies, personal mobile devices, wireless broadband connections, and cloud computing are allowing agriculturists and regular farmers to collect and disseminate agricultural data in real-time from anywhere. Personal mobile devices, such as Personal Digital Assistant (PDA)s and cellphones, are becoming more capable in terms of processing and information management, and they play an increasingly important part in people's everyday lives. This technical progress has led us to develop a scalable and cost-effective real-time agricultural monitoring and analysis system for those who need to monitor their farms regularly [5]. The focus will be on the design features of an autonomous cloud environment that collects farmer's agricultural data and distributes it to a cloud-based information repository, as well as facilitating data analysis by utilizing cloud-based software applications.
The consideration of this research work is to create a cloud computing infrastructure that will connect different agriculture resources to a central data store. The services may be in the form of information/responses to different queries raised by the farmers; automatic suggestions to farmers from time to time to improve their farming skills; provide specific slots in lands where the problem exists in multiple formats; automatically update the farmers on which crop is suitable in their area based on various factors; provide special features to farmers where they can also interact with local and world-leading agriculturists; update the farmers as pre-disaster recovery; and provide general and specific information about international markets relating to their crops. The data collected by the concerned providers will be sent to the data cloud store via an underlying computerized system, where it will be accessible to all connected farmers, advisors, researchers, and support centers. There are several activities to be performed by the developed system such as acquiring, managing, and using agricultural information from farms to enhance the quality and efficiency of farms and to respond to widespread public farmers queries. In particular, at the automated farm level, agriculture envisions that "a new system of distributed computing tools will collect authorized data about farm illness and store it securely within a network designed to help deliver quick and efficient care". Furthermore, the concepts, such as wearable agriculture devices, Farm Area Networks (FANs), pervasive wireless broadband communications, and cloud computing, are enabling advanced mobile farmcare services that benefit both farmers and agriculture professionals [6]. This enables the development of a system to perform remote real-time collection, dissemination, and analysis of farm data to manage chronic conditions and to detect farm emergencies. Our goal is to offer an architecturally generic cloud-based system that can be used in a variety of scenarios where farms need to be remotely monitored and recorded data have to be processed by a computer system and made available for professionals or farmers to access. Despite the fact that our designs and prototypes are flexible enough to suit a variety of use cases, we focus on one particular motivated case: The monitoring of farms affected by chemicals and sickness, which necessitates constant episode detection. Electrocardiogram data from commonly available wearable sensors are collected in real-time and utilized to detect and classify episodes.
This research study develops crop stress indicator models that provide data filter and alert capabilities for monitoring local agricultural conditions and provides an immediate solution to farmers as a remedy. Secondly, by taking the information service demand of "Three Rural Issues" in our country into consideration, we propose a cloud service platform for the information service in the countryside which can enable data sharing, remote data storage, interaction with farmers, agriculture expert consultation, and peasant household management. We want to provide a platform for the farmers with the answers to the following generalized question: How do our agricultural activities increase the yield or deliver higher efficiency with quality? As agriculture goes high-tech, our proposed agricultural technology revolution will not require farmers to put down their hoes and pick up a mouse. Instead, we predict the growth of specialist project groups with expertise in agricultural and associated disciplines, such as ecosystems [7]. Those who demand continual monitoring but live distantly from their service provider and find it impossible to attend frequent pesticides sessions will appreciate the value of a ubiquitous agricultural information system. Accordingly, it has been declared that the implemented approach has no adverse effect on the natural environment, social security, damage to human health, safety to biodiversity, pollution to physical, biological, hydrological resources, damage to public comfort, vegetal canopy, and other adverse environmental effects.
The following are the contributions of this research: • Agricultural data received in the cloud are classified into different file formats such as audio, video, text, images, and maps files; • File format classification is an innovative approach in smart farming that has never been adopted and, hence, contributed to the theory of science; • Multi-format files are processed by using SVM one-to-many types of machine learning methods in order to achieve efficiency in the cloud; • An AGRICLOUD (Agriculture Cloud) approach has achieved maximum accuracy with less response time, low overhead, and high throughput in the presence of a certain state of the art; • Importantly, the farmer achieves the supervisory level so that decisions can easily be taken by him.
The rest of this research study is organized as follows: A literature review is presented in Section 2, the proposed methodology is discussed in Section 3, results are explored in Section 4, and discussions are elaborated in Section 5. Finally, Section 6 concludes the work and identifies the future directions.
Materials Literature Review
Cloud computing is a new paradigm of computing in which dynamically scalable and often virtualized resources are referred to as clouds and are provided as a service over the internet [8]. Users of these services do not need any knowledge to construct them, maintain them, or to control the technology infrastructure used to construct these "clouds" that support them. Cloud computing technology is forcing a paradigm shift from owing the information and communication infrastructure to leasing it, allowing "cloud systems" to penetrate the market as an infrastructure in which services can be orchestrated. Cloud computing has emerged as a novel way for large-scale aggregation of various IT services provided through fast digital networks, similar to the electrical grid of public utilities [9,10]. The rise of cloud computing is rapidly changing the landscape of information technology and ultimately turning the long-held promise of utility computing into a reality [11,12].
Cloud computing's emergence has had a significant impact on the Information Technology (IT) industry in recent years, with companies such as Google, Amazon, and Microsoft contending to provide more powerful, reliable, and cost-effective cloud platforms. More businesses are reshaping their business models to take advantage of this evolving field [13,14]. There are several advantages of the cloud computing paradigm over traditional IT services on the following basis.
Minimum infrastructure investment cost: A cloud service user can simply rent the resources from different clouds, and the users only pays for the resources they have used according to some pricing model.
Minimum operating cost: A cloud service provider can dynamically allocate and de-allocate resources according to system usage situation at peak loads, which allows significant savings relative to the operational cost of resources when the usage of the system is low.
Scalability: A cloud service provider can easily extend its services by provisioning a larger amount of resources and making them accessible. The current IT technologies are not matured enough to realize the full potential of cloud computing technology [15]. Major challenges that arise in the implementation of cloud computing technology, including automatic resource provisioning, security management, and power management, are recieving attention from the research community. Therefore, researchers in this area still have tremendous opportunities to make novel contributions and bring about significant impact with respect to IT Industry.
Keeping in view the importance of cloud computing in the agriculture sector, several innovations have made inevitable the transition from traditional farming to smart farming in the world. Developing countries, such as Pakistan, need massive efforts to uplift their major source of revenue in the form of smart agriculture. To meet the food needs of Pakistan, continued population growth in Pakistan and the growing reliance on agriculture to provide replacements for fossil fuels mean that there are no alternatives but to begin smart and high-tech farming; all this simply amounts to using cloud computing to improve agricultural reforms.
Subsequently, farmers in rural and remote areas do not have similar access to intensive specialist support as in urban areas. Thus, there is a need to develop such a system that provides farmers with equal access to specialist support. Higher temperatures, increased agricultural water demand, more variable rainfall, and extreme climatic events such as heatwaves, floods, and droughts all have an impact on agriculture [16]. Farmers are the most vulnerable to climate change, but they can also play an important part in combating it by seizing opportunities.
Our proposed solution will investigate the above-mentioned problems for the benefit of the community at large. It would assist the farmers by providing a variety of possible options and also recommending the best solution based on many factors stored in a cloud of agricultural records in a timely and more efficient manner. The agricultural record includes a variety of types of "notes" entered over time by agriculturists and professionals from all over the world, including recording observations and the administration of pesticides, orders for the drugs, test results, scanning farms, reports, futuristic issues after natural disasters to lands, etc. The maintenance of a complete and accurate agricultural record is a fundamental requirement for any agriculturist and is generally enforced as a licensing or certification prerequisite.
Cloud Computing technology provides a pervasive means of efficiently sharing educational material and knowledge among institutions in a relatively low-cost manner in order to leverage the cooperative and sharing culture of higher education [17]. Collaboration between researchers, students, degree programs, and even administrative services is both a requirement and a driver for a true knowledge-based economy [18]. Cloud computing enables cost-effective collaboration across research-intensive institutions throughout the country and around the world, especially in the sciences, where large-scale equipment has become the standard. Scientists from diverse universities can share supercomputers, libraries can share digital humanities collections, astronomers can share galactic pictures, and network engineers may share strands of fiber in the same physical cable thanks to cloud computing technologies. In the same way, cloud computing is making a shifting paradigm by strengthening its roots in the agriculture sector. By establishing collaboration of scientists all over the world in the diverse field of agriculture, the end users can obtain maximum advantage in solving the number of problems with efficient solutions. This is only possible with the adoption of a cloud that relies not only on scientists but also takes into account other stakeholders such as farmers, advisors, local agriculture counselors, and students researching the same domain.
The current research study in smart agriculture is more focused on processing relevant information in a much shorter period of time. This includes the shortest response time in accessing the relevant information, the low overhead of the communication, lightweight protocol development for the execution time, and secure transmission of the critical information. To cater to such multiple QoS mentioned above, cloud computing is taking advantage of various machine learning models due to their established authenticity in many applications. One of the widely used machine learning models is SVM for making accurate classifications of data files. In the proposed agriculture model, the cloud would have a huge volume and variety of data taken from multiple sources such as IoTs, Zigbee, Wireless Sensor Network (WSN)s, etc. These data are securely stored in the cloud and will be updated continuously upon receiving information from farmers, scientists, advisors, and researchers. A variety of data in multi-formats such as audio, video, image, text, and maps are taken in real-time and stored in the cloud where SVM is used to classify them in respective classes. This would help in the accurate classification of the data files, which will be fed into Virtual Machines (VMs) for processing. In order to further improve efficient processing, one-to-many classification types of SVM are used. The process would be made radially available for the end-users for a prompt decision about multiple cases. An SVM model represents training data as points in space which are then separated into categories by a wider gap. The classifier then finds the hyperplane that differentiates two classes, and then it is categorized according to the gap they belong to. Furthermore, SVMs are effective in high dimensional spaces and in instances where the amount of dimensions is larger than the number of samples, and it is also memory efficient because it uses a subset of training data in the decision function which are called the support vectors. SVM treats one class at a time (e.g., text class) as a positive class and three other classes (audio, video, and image) as negative classes. Therefore, other classes are taken separately, in the same manner, using one-to-many classification properties of SVM. Some recent research studies taken from the literature are listed in Table 1, with some detail of their technique, their advantages, and disadvantages. Our designed method tries to minimize the limitations of the following research and proposes a comprehensive solution for an efficient agriculture cloud. In addition to the multi-format data classification approach using SVM, the proposed solution would curtail execution time in executing the query and generating the response to be received by the end-user. Another parameter is the reduction in throughput time of the system and minimization of the overhead of the proposed system. The developed system is then tested against some state-of-the-art baselines, and it is found that AGRICLOUD is performing better in a number of scenarios. The system proposes classification method for the detection of plant diseases Basic functionality is achieved A number of parameters need to be added for better classification
Methodology
This section may be divided into subheadings. The data stored in the cloud computing environment are available to all agriculture advisors, researchers, and farmers. The advisors can extract agriculture data about farms applying the same pesticides and analyze the treatment suggested by other agriculturists and experts that helps improve decisions. Furthermore, it is observed that sudden pest attacks are always referred to as uncontrollable when expert treatments are obtained. However, to reach out towards experts when in critical condition is not easy, and as a result the whole crop becomes destroyed. The proposed approach facilitates the advisors/farmers working in rural areas to discuss problem cases with experts in real-time and to treat these advisors/farmers in a better way. The overall functionality of the analysis and monitoring system includes the steps below:
1.
Farms are trained with a wireless sensor attached to specific points and a cellular device that is efficient enough to communicate through the Internet; 2.
The module of the wireless sensor collects the farm's data and sends it to the mobile device via Bluetooth without user intervention; 3.
A software of the client in the mobile device exhibits and transmits data to the analysis web service that is hosted by a cloud computing-based software. This interaction can occur with the mobile's data connectivity (e.g., mobile 4G/5G network), a home wireless gateway, or directly; 4.
The analysis software carries out several computations over the collected data by taking reference from the current demographic data and the farm's historic data.
Computations concern comparison, classification, and systematic diagnoses of farm lines, which can be time-consuming especially when it is performed for longer periods and for a large number of farmers; 5.
The software then extends the latest results to the historic record provided in private and secure cloud-based storage so that verified users can gain access to it anytime and from any place. Agriculturists then understand and translate the features extracted from the data, and a decision is made accordingly; 6.
The results in the form of advice are disseminated to the farmer's mobile device; 7.
The computing and monitoring processes are repeated daily/hourly, corresponding to the user's choice.
The proposed architecture is divided into three layers, such as the acquisition layer, the application layer, and the information exchange layer, as shown below in Figure 1.
device via Bluetooth without user intervention; 3. A software of the client in the mobile device exhibits and transmits data to the analysis web service that is hosted by a cloud computing-based software. This interaction can occur with the mobile's data connectivity (e.g., mobile 4G/5G network), a home wireless gateway, or directly; 4. The analysis software carries out several computations over the collected data by taking reference from the current demographic data and the farm's historic data. Computations concern comparison, classification, and systematic diagnoses of farm lines, which can be time-consuming especially when it is performed for longer periods and for a large number of farmers; 5. The software then extends the latest results to the historic record provided in private and secure cloud-based storage so that verified users can gain access to it anytime and from any place. Agriculturists then understand and translate the features extracted from the data, and a decision is made accordingly; 6. The results in the form of advice are disseminated to the farmer's mobile device; 7. The computing and monitoring processes are repeated daily/hourly, corresponding to the user's choice.
The proposed architecture is divided into three layers, such as the acquisition layer, the application layer, and the information exchange layer, as shown below in Figure 1.
Acquisition Layer
The acquisition layer comprises various sensors used to collect data installed at different locations. The data received in data collecting devices are of numerous formats such
Acquisition Layer
The acquisition layer comprises various sensors used to collect data installed at different locations. The data received in data collecting devices are of numerous formats such as audio, video, images, maps, texts, etc. In order to perform timely computation and to reduced overhead, there is a need to implement the classification of data of those types.
For the development of cloud platform of the agriculture sector, it is important to implement steps such as the creation of an information model, understanding the ontologies, understanding the data, cataloging the data, building information model based on collection information, building service model, creating service model, understanding services, building service model, creating process model, understanding processes, converting services to processes, building process model, creating a governance model, define-design-implement policies, testing cloud architecture, creating a test plan with black and white box testing, selecting platform-deploy-processes-services-data, and assigning candidate data-services-processes.
Information Exchange Layer
The information exchange layer is responsible for sharing the data from the acquisition layer to the application layer through the datacenter and runtime information.
The role of a classifier is to establish which class, among a set of different classes, an unknown object belongs. A classifier has to find the best boundaries between the different classes. The optimal classifier, called SVM, minimizes the total cost. They are based on the separating hyperplane. If the data have high enough dimensionality, the different classes, which constitute different clusters, are linearly separable by hyperplanes.
An SVM model represents training data as points in space, which are then separated into categories by a wider gap. The classifier then finds the hyperplane that differentiates two classes and then it is categorized according to the gap they belong to. Furthermore, SVMs are effective in high dimensional spaces and in instances where the amount of dimensions is larger than the number of samples, and it is also memory efficient because it uses a subset of training data in the decision function which are called the support vectors. The kernel trick permits dealing with non-linear separating surfaces since the data, initially in the input space, are transposed in a higher dimensional space where a separating hyperplane is found. Figure 1 illustrates this principle.
We have divided the VMs into four types of sets, such as Audio VM, Video VM, Text VM, and Image VM based on input data. Each set of VM has different processing and storage resources in a cloud environment. More precisely, each machine (VM) is assigned a task based on task requirements. For example, video tasks require 1000 floating-point operations and 16 Giga Byte (GB) memory, audio tasks require 800 floating-point operations and 12 GB memory, image tasks require 800 floating-point operations, and 8 GB memory, textual tasks require 400 floating-point operations, and 4 GB memory. For video data classification, we extracted feature vectors of sequences of 40 frames extracted from four different video classes, where we have a 40 × 4096 matrix and where each row refers to features of one frame (one frame per row). Thus, we classified videos between these four different classes. We preprocessed a new video to limit its number of frames and then extracted features from this video to classify it. Assume that we have four video classes (ci, i = 1, . . . , 4). Each video has 40 (n = 1, . . . , 40) frames and from each frame we extracted 4096 features (1 × 4096). Since each frame has enough information to predict the video class (ci), we used 40 frames from each video as training/test samples, which creates an input matrix of (160 × 4096) dimensions, with 160 samples and each sample have 4096 features. Additionally, we have created an output vector (160 × 1) that contains the label of each class ci = i, where i = 1, . . . , 4. The feature sets include low-level signal properties, mel-frequency spectral coefficients, and two new sets based on perceptual models of hearing. For image classification, we have considered 256 × 256 pixels (total of 65,536 pixels). We used each pixel as a feature in the SVM classifier. For text classification, there are text documents of about 6 GB which are extracted in the form of unstructured text. We performed stemming and stopped word removal and extracted the words in the form of features. We then used these features for text data classification using SVM. The deep learning models require more time to train and their convergence time is greater, as compared to SVM; thus, so this is why we have adapted the traditional machine learning approach [30].
One of the problems of cloud task scheduling is assigning jobs to various VMs to accomplish load balancing in the shortest time. The benefit is that cloud resources are better used and users' requests are met more quickly. One of the major machine learning techniques is the Support Vector Machine (SVM), proved as a powerful model for achieving accurate classifications if carefully performed. In this case, we are proposing one-to-many types of SVM so that one type of data is taken as the positive class at a time and the other three classes are taken as the negative class [15]. For example, in the first case, audio data are classified from the rest of the data classes, then video data, image, map, and video data are classified separately. This preprocessing is performed within the cloud, which results in saving computations later in the information exchange phase. In one-to-many types of classification using SVM, multiple classes are separated from one another. A few of the examples include an email folder containing different categories such as work, family, and students. We have a classification problem in which four classes are used with assigning numbers, such as workshop x1 = 1, family matters x2 = 2, customers orientation x3 = 3, and students behavior x4 = 4. In binary classification, one class is taken as positive, and the rest of the classes are taken as negative. Here, the idea of the one vs. all classification is taken to obtain the best accuracy results since this study involves four training sets with class 1 video data type as z1 = 1, class 2 text data type as z2 = 2, class 3 audio data type as z3 = 3, and class 4 image data type as z4 = 4. The proposed method is to take one class at a time and to turn it into four separate binary classifications. In this case, class 1 is taken as a positive class with value 1 and the three other classes as a negative class with value 0. When we train the datasets, this will provide us with a decision boundary of one class with the rest of the classes. In simple words, in the first instance, video datasets are taken as positive classes and all others as negative classes. In the second instance, text datasets are taken as a positive class, and the rest are negative classes. In the third instance, audio datasets are taken as positive classes, whereas others are negative classes. Finally, in the last instance, images belong to the positive class, whereas all other datasets are taken as negative classes. To make the communication secure, the AES lightweight version of the protocol is suggested so that information at the time remains secure and cannot be changed by the attacker in any type of security active attack. This helps in achieving confidentiality, integrity, and authentication of the exchanged information even within the layers until the end-users.
Application Layer
The application layer is responsible for displaying the results to the end-users in the form of GUI. These results consist of schedule data.
Algorithm 1 describes the steps involved in the data partition and data scheduling process.
Algorithm 1. AGRIClOUD
Input: video, text, audio, digital maps from different cloud sources, number of virtual machines (VM) Output: Data class, Scheduled data 1.
for data classification do 2.
for each P(u, v) do 3.
for each Classification accuracy = M do 5.
Evaluate data accuracy 6.
if Number of iterations = N then 7.
return data class 9.
for load balancing do 10.
schedule data ← Assign data to each group (cluster) w.r.t virtual machine 11. end for 12.
return schedule data We used one of the simplest kernels. The space X corresponds to R N , and we assume patterns x in which most information can be represented by the dth order products of entries [x]j of x. These products are called monomials. Here, the vector of monomials is used to represent the feature space H, and the mapping itself represents Φ. One can easily show that the dot product in the feature space is simply the square of the dot product in the input space. The hyperfunction is as follows: where u i . is used for support vector, ∝ i is represented as Lagrange multiplier, and u j is known as the label of membership class (+1, −1) where n = 1,2,3, . . . N.
Results
To check the functionality of the proposed system, the CloudSim4.0 simulator developed by University of Melbourne, Australia is used at the cloud server. The CloudSim4.0 simulator [31] is devised to perform the working of the systems in the simulated form under different scenarios. We have set the following parameters for our proposed system.
Overall, 100,000 files are collected at run time in various formats such as audio, video, image, text, and maps. These files are of varying sizes from 0.1 MB to 1 GB. The configuration of data centers, their size, number of hosts, Random Access Memory (RAM) size, task size on disk, storage capacity, bandwidth, and power are shown in Table 2. The simulation is performed on a Core i9 Intel Quad-core system (by Intel Corporation, USA) with 16 GB RAM and 4 TB HDD. Data files numbering 100,000 after preprocessing using SVM classification have been divided into an equal number of audio, video, image, and texts formats shown in Table 3. This helps in further processing the data with low computation cost and high accuracy as mentioned in Table 4. Cloud computing is making significant contributions in extracting useful information when combined with data distribution techniques. There could be another choice, such as Spark in data distribution, which makes it easy for huge volumes and a variety of data with the help of load balancing approaches. However, we have designed a model with minimum hardware cost and information processing cost for a large amount of data. Although Apache Spark (developed by UC Berkeley, USA) may provide better running time, our basic objective was to design an optimized model using machine learning algorithms. The confusion matrix in terms of accuracy, precision, recall, F-measure, G-mean, and (Area Under Curve) AUC is calculated for the baselines classifiers such as SVM-GA [32], MSVM [33], MULTICLASS (Multiple Class) [34], and ACOSVM (Ant Colony Optimization using Support Vector Machine) [35]. All these baselines have proved their performance against the number of algorithms in the literature. However, after performing preprocessing using SVM one-to-many types, our AGRICLOUD has performed extremely well and has shown better classification performance. This will help in the further minimization of execution time later in the completion phase that results in benefiting the end-user such as a farmer, researcher, adviser, or expert.
Discussion
The developed algorithm AGRICLOUD is compared with CLAYMIST (cloud enabled CMM index for smart agriculture monitoring system) [36] and MISSENARD [37] baselines in terms of execution time, throughput time, and overhead time. In the first case, execution time is demonstrated for all algorithms in Figure 2. The number of tasks in the form of audio, video, images, texts, and maps is executed gradually over 5, 10, 20, 50, 100, 500, and 1000 VMs. It can be observed that, from the very start, AGRICLOUD remains very stable in processing the tasks. However, at 100 and more VMs, more execution time is required, showing less performance, but its performance is still better than CLAYMIST and MISENARD. These two algorithms performed somehow equal to 500 VMs but gradually started to require more execution time, rendering them the worse choice. This shows that the proposed algorithm has better classification performance and scalability than others. Figure 4 shows the overhead time taken by all baselines over a set of VMs. The lower the overhead, the faster the response time and the higher the execution time will be. All algorithms start well at the beginning with low tasks and a smaller number of VMs. The algorithms abruptly started to take more overhead time when it reaches 100. However, due to the earlier performance of the INDIGSOL for better execution and throughput time, the overhead time remains slow, showing better performance of the proposed algorithm. Overall, this would help in processing the data in the form of audio, video, text, images, and maps much faster for the end-users. The end-user would be able to receive critical information such as weather, temperature, climate change, pesticide information, crop production, water level, soil moisture, and several other factors timely and precisely and with accuracy. This information is stored in the cloud in the form of historical data that would be updated continuously with every packet of information. Furthermore, the information is beneficial to the advisors, experts, and especially to the farmers at their doorstep without any additional cost. Table 5 shows the statistical data of AGRICLOUD with different baselines. Student's t-test is used to find out the significant difference between AGRICLOUD and two baselines. This test finds out the difference between two independent groups that help to determine whether the difference is actual or due to chance. It is shown that the p-value for execution time, overhead time, and throughput time for all baselines is greater than 0.01 significance level. Similarly, the t-value for all baselines is greater than 0.01, showing that a significant difference exists. Therefore, statistically, our proposed algorithm is better than other baselines as well.
Conclusions and Future Work
Pakistan, being an agricultural country with 70% land, needs to pay much focus on the adoption of new technologies concerning global practices. The motivation assists our farmers in becoming advanced at the supervisory level so that they can manage things appropriately with an understanding of the farms. There is a need to build the capacity of the farmers so that they interact with one another in addition to experts. Currently, no such mature work exists both in practice and at the infrastructure level. The cost of management is the most important consideration here. As a result, the scope has been limited to computer resources, such as storage, processing, and data transport.
A farm health monitoring system now uses a mobile phone's processing capability, integration with farm sensors, and internet access to alert farmers when the crop requires attention or even perform an automatic intervention, such as triggering the automatic release of drugs into the farm when necessary. We designed a cloud computing-based real-time agricultural monitoring and analysis system capable of assisting farmers in better managing their farms by decreasing or eliminating on-site consultations.
An AGRICLOUD is proposed to solve most of the aforementioned issues currently present in agriculture. The main goal of this research is to propose an innovative solution that performs file formatting in the cloud so that pre-processing of a huge volume and variety of files will be performed initially. This pre-processing will save time when resources are assigned to the virtual machines resulting in much-reduced execution time and energy consumption. This approach makes the solution viable, robust, and scalable as it has been tested with gradually more and more resources. It is important to mention that file formatting in the cloud for agricultural data is an innovative step in smart farming that contributes to the body of science. It is observed that in some current baselines, the proposed system performed well for a few parameters such as execution time, throughput time, and overhead time while reducing the energy.
The authors are residents of Pakistan where most of the land is agricultural and belongs to rural areas. Here, people have low income and require innovative technologies to improve their farming skills so that per capita income may increase. The Government of Pakistan has initiated few agricultural reforms in which smart farming is one of them. Initially, this study is a pilot test study and is implemented with only Pakistan climate factors considered due to the huge cost involved. However, this study will also be extended to other countries in the future.
In the future, we will work on critical factors of cloud-based agriculture such as priority-based farming, agriculture 5.0, precision agriculture, telematics, and data analytics by using artificial intelligence-based techniques. This would help in making optimal decisions by keeping in view the availability of a certain number of resources. | 9,007.6 | 2021-08-19T00:00:00.000 | [
"Computer Science",
"Agricultural and Food Sciences",
"Engineering"
] |
A plea for descriptive social ontology
Social phenomena—quite like mental states in the philosophy of mind—are often regarded as potential troublemakers from the start, particularly if they are approached with certain explanatory commitments, such as naturalism or social individualism, already in place. In this paper, we argue that such explanatory constraints should be at least initially bracketed if we are to arrive at an adequate non-biased description of social phenomena. Legitimate explanatory projects, or so we maintain, such as those of making the social world fit within the natural world with the help of, e.g., collective intentionality, social individualism, and the like, should neither exclude nor influence the prior description of social phenomena. Just as we need a description of the mental that is not biased, for example, by (anti)physicalist constraints, we need a description of the social that is not biased, for example, by (anti)individualist or (anti)naturalist commitments. Descriptive social ontology, as we shall conceive of it, is not incompatible with the adoption of explanatory frameworks in social ontology; rather, the descriptive task, according to our conception, ought to be recognized as prior to the explanatory project in the order of inquiry. If social phenomena are, for example, to be reduced to nonsocial (e.g., psychological or physical) phenomena, we need first to understand clearly what the social candidates for the reduction in question are. While such descriptive or naïve approaches have been influential in general metaphysics (see Fine 2017), they have so far not been prominent in analytic social ontology (though things are different outside of analytic philosophy, see esp. Reinach (1913). In what follows, we shall outline the contours of a descriptive approach by arguing, first, that description and explanation need to be distinguished as two distinct ways of engaging with social phenomena. Secondly, we defend the claim that the descriptive project ought to be regarded as prior to the explanatory project in the order of inquiry. We begin, in Section 2, by considering two different ways of engaging with mental phenomena: a descriptive approach taken by descriptive psychology and an explanatory approach utilized in analytic philosophy of mind. We take these two ways of approaching the study of the mind to be analogous to the distinction we want to draw in social ontology between a descriptive and an explanatory approach to the study of social phenomena. We consider next, in Section 3, how our approach compares to neighboring perspectives that are familiar to us from general metaphysics and philosophy more broadly, such as Aristotle’s emphasis on “saving the appearances”, Strawson’s distinction between descriptive and revisionary metaphysics, as well as Fine’s contrast between naïve and foundational metaphysics. In Section 4, we apply the proposed descriptive/explanatory distinction to the domain of social ontology and argue that descriptive social ontology ought to take precedence in the order of inquiry over explanatory social ontology. Finally, in Section 5, we consider and respond to several objections to which our account might seem to be susceptible.
Introduction
Social phenomena-quite like mental states in the philosophy of mind-are often regarded as potential troublemakers from the start, particularly if they are approached with certain explanatory commitments, such as naturalism or social individualism, already in place.In this paper, we argue that such explanatory constraints should be at least initially bracketed if we are to arrive at an adequate non-biased description of social phenomena.Legitimate explanatory projects, or so we maintain, such as those of making the social world fit within the natural world with the help of, e.g., collective intentionality, social individualism, and the like, should neither exclude nor influence the prior description of social phenomena.Just as we need a description of the mental that is not biased, for example, by (anti)physicalist constraints, we need a description of the social that is not biased, for example, by (anti)individualist or (anti)naturalist commitments.
Descriptive social ontology, as we shall conceive of it, is not incompatible with the adoption of explanatory frameworks in social ontology; rather, the descriptive task, according to our conception, ought to be recognized as prior to the explanatory project in the order of inquiry.If social phenomena are, for example, to be reduced to nonsocial (e.g., psychological or physical) phenomena, we need first to understand clearly what the social candidates for the reduction in question are.While such descriptive or naïve approaches have been influential in general metaphysics (see Fine, 2017 for a recent reassessment), they have so far not been prominent in analytic social ontology (though things are different outside of analytic philosophy, see esp.Reinach, 1913).In what follows, we shall outline the contours of a descriptive approach by arguing, first, that description and explanation need to be distinguished as two distinct ways of engaging with social phenomena.Secondly, we defend the claim that the descriptive project ought to be regarded as prior to the explanatory project in the order of inquiry.
We begin, in Sect.2, by considering two different ways of engaging with mental phenomena: a descriptive approach taken by descriptive psychology and an explanatory approach utilized in analytic philosophy of mind.We take these two ways of approaching the study of the mind to be analogous to the distinction we want to draw in social ontology between a descriptive and an explanatory approach to the study of social phenomena.We consider next, in Sect.3, how our approach compares to neighboring perspectives that are familiar to us from general metaphysics and philosophy more broadly, such as Aristotle's emphasis on "saving the appearances", Strawson's distinction between descriptive and revisionary metaphysics, as well as Fine's contrast between naïve and foundational metaphysics.In Sect.4, we apply the proposed descriptive/explanatory distinction to the domain of social ontology and argue that descriptive social ontology ought to take precedence in the order of inquiry over explanatory social ontology.Finally, in Sect.5, we consider and respond to several objections to which our account might seem to be susceptible.
Two traditions in the philosophy of mind
Two chief philosophical traditions have investigated the mind in the 20th century.Analytic philosophers have studied the mind under the label "philosophy of mind"; philosophers in the tradition of Brentano have approached mental phenomena under the label "descriptive psychology" (also referred to as "phenomenology").The most striking contrast between analytic philosophy of mind and descriptive psychology pertains to their relation to the mind/body problem, viz., the question of how to understand the relationship between mental episodes and their physical correlates.
Analytic philosophy of mind has been largely centered around the mind/body problem.Interest in mental phenomena has typically been driven by their implications for how the mind and the body are related to one another.Thus, pain, emotions, color perception, consciousness, or phenomenal character have typically received attention because they constitute potential troublemakers for physicalism.By contrast, descriptive psychologists, including Brentano, Meinong, Stumpf, Reinach, Husserl, and Scheler, among others, proposed an impressive number of claims and theories about the mind while remaining nearly silent about the mind-body problem, and deliberately so.Brentano, and many others following him, thought that descriptive psychology should avoid reference to physiological processes: [Descriptive psychology] will therefore, even in its highest state of perfection, never mention a physico-chemical process in any of its doctrines.(Brentano, 2002, p. 4).
The first difference between analytic philosophy of mind and descriptive metaphysics is thus a difference of interest.Descriptive psychologists want to arrive at a precise mapping of the different psychological phenomena as they appear to us, e.g., their relations, their differences, and their commonalities, and are interested in the relations between psychological phenomena and their physical correlates only insofar as understanding this relation helps to complete such a map.Analytic philosophers of mind, for their part, want most to understand the relation between the mind and the body, and are interested in mental phenomena primarily within this context.It is sometimes suggested that this contrast is at bottom a divergence between anti-naturalist and naturalist approaches to the mind: descriptive psychology, according to this characterization, would be fundamentally anti-naturalist, while philosophy of mind would tend towards naturalism.While it is true that descriptive psychologists are largely antinaturalists and that a majority of philosophers of mind are naturalists, 1 this proposal poorly captures the real distinction between the two approaches.A more plausible way to capture this distinction, in our view, is to maintain that the two approaches are involved in different projects.Descriptive psychology, on the one hand, aims mostly to describe mental phenomena as they appear to us; analytic philosophy of mind, on the other hand, is concerned primarily with explaining mental phenomena in causal or metaphysical terms, even if the resulting explanations diverge from what is immediately apparent to us.There are, clearly, exceptions on both sides, but we maintain that this broad characterization nevertheless captures dominant strands in these two traditions.Our interest is not here in the history of these philosophical schools, but in demarcating two different kinds of philosophical projects that are relevant to the philosophical study of the mind and, ultimately, as we will argue below, to the study of social phenomena.
Descriptive vs. explanatory projects
Our first claim is that philosophy of mind subsumes two different projects under it: descriptive philosophy of mind and explanatory philosophy of mind.Descriptive philosophy of mind aims at capturing mental phenomena as they appear to us, by making explicit the various distinctions, relations, and commonalities they exhibit.Descriptive philosophy of mind consists of claims of the form, "It appears to be the case that p" or "p, as it appears" (see below for a justification of this appositional formulation), where p is a proposition about some mental phenomena, such as the following: "It appears to be the case that desires and beliefs are distinct"; "All perceptions are intentional, as it appears"; "One cannot remember something that did not happen, as it appears"; "Fearing × depends on being presented with ×, as it appears"; and the like.
Explanatory philosophy of mind, on the other hand, starts from claims about mental appearances and submits them to critical scrutiny.There are two main ways of doing so.Explanatory philosophers of mind may, first, take the appearances described by descriptive psychology to be veridical, and set out to explain their content; alternatively, they may, as a second strategy, take appearances to be illusory, and subsequently attempt to explain why we have such illusory appearances.To illustrate, consider the descriptive claim, "Phenomenal consciousness exists, as it appears".Disagreements about the truth of this sentence are part of descriptive psychology.Consider now philosophers of mind who are engaged in an explanatory project and who agree that the descriptive claim is true.Some among them will consider the appearances reported in the descriptive sentence to be veridical.Thus, they will accept that phenomenal consciousness exists and that this phenomenon constitutes (part of) their explanandum.They will, typically, try to elucidate how phenomenal consciousness is related to brain states.But other explanatory philosophers who also agree with the descriptive claim will instead discard the appearance in question as illusory.For them, consciousness does not in fact exist and, therefore, does not call for an explanation.Illusionism about consciousness, however, still needs to explain the illusory appearance that consciousness exists: this erroneous impression, for the illusionist, is (part of) what needs to be explained.
Let us call explanatory philosophers of mind who take mental appearances to be veridical foundationalists, and those who take mental appearances to be deceptive revisionists.Identity-theories, functionalism, dualism, Russellian monism, to mention but a few, are foundationalist theories.Eliminativism about mental states (Churchland, 1981), illusionism about consciousness (Dennett, 1991;Frankish, 2016;Kammerer, 2021), or eliminativism about pain (Hardcastle, 2001) are revisionary theories either of the mind in its entirety or of some part or aspect of it.Both foundationalist and revisionary approaches are in the business of saving the appearances captured by descriptive metaphysics.But while foundationalists philosophers of mind want to save the content of the appearances, revisionary philosophers of mind consider such contents to be unsalvageable: only the appearing of such contents can be saved.
In practice, claims do not wear their descriptive or revisionary status on their sleeves.Consider the view that perception is not intentional in the sense that there is no distinction between perceptual acts and perceptual objects.This view may be defended as a revisionary view: perception seems to be intentional, but is in fact not (Dennett, 1987).Alternatively, it can also be intended as a descriptive thesis: perception does not seem to be intentional (as is argued, for example, by neutral monists such as Russell, 1919Russell, , 1995: pp. 142-143;: pp. 142-143;James, 1912or Carnap, 1928); the act/object distinction, according to this conception, is not part of the appearances.On the former revisionary view, the intentionalist description of mental appearances is correct, but the appearances in question are illusory.According to the latter descriptive view, Brentano and his followers misdescribed mental appearances.Importantly, heterodox descriptive claims should not be conflated with revisionary claims: disagreement about the right description of appearances is distinct from disagreement about the veridicality of appearances.
The priority of description
Both the descriptive and the explanatory projects are legitimate and important.How are they related?Our second claim is that the descriptive project is prior in the order of inquiry to the explanatory project.Three cognate arguments support this priority.The first is reminiscent of Meno's paradox (otherwise known as the paradox of inquiry).If we are to explain the mental, we must at least have a sufficient grasp of the mind in order to be able to know what to look for.At the same time, there must also be other aspects of the mind of which we are ignorant; otherwise, no explanation would be needed.The answer is given by descriptive philosophy of mind: prior to the explanatory project, we know how the mental appears to us; but we lack knowledge as to whether these appearances are veridical and what explains them (be it their content or their occurrence).The second argument pertains to the evaluation of explanations.Not only is the descriptive task crucial for fixing the explanandum from the start, it is also needed to assess the success or failure of the proposed explanation.Such an evaluation is impossible without an independent grasp of what it is that we are trying to explain (e.g., reduce, identify, ground, eliminate, etc.).The third argument pertains to the identification of our subject-matter.Absent a clear identification of the subject of the inquiry, one can neither know whether an explanation has resulted in a change of the subject-matter nor whether distinct explanations target the same explanandum.For this reason, it is vital to ensure that we first correctly identify the mental; and, secondly, that we do not overlook important features of it.
Descriptive psychology does just this: it characterizes the mental realm, and thereby provides explanatory philosophers of mind with their explananda.This is true for both versions of explanatory philosophy of mind.Within the foundationalist camp, reductionist philosophers of mind will view descriptions as yielding candidates for reduction, while dualists will view these candidates as not suitable for reduction.Revisionary philosophy of mind is no less dependent on descriptive psychology.For if we are to classify some appearances as illusory, we first need to be clear on what these appearances are.We need to agree, for example, that the Müller-Lyer lines look to be of different lengths (a descriptive claim) before we then go on to diagnose this appearance as defective.Likewise for mental phenomena.
Two fallacies
Failing to distinguish sharply between descriptive and explanatory philosophy of mind can lead to two important fallacies.The first was spotted by early identity theorists and dubbed "the phenomenological fallacy" by Place (1956).The phenomenological fallacy, broadly construed, consists in going beyond appearances by drawing ontological conclusions directly from the description of appearances.The theory of sense data, which claims that immediately perceived sensory objects are mind-dependent entities, is often considered to be an instance of this fallacy.From the fact that we seem to see a green object, while there is none in front of us, we cannot conclude, without further argument, that there is a green object before our mind, viz., a sense datum.Another instance of the phenomenological fallacy is when descriptive psychologists claim that the description of mental episodes targets the essences of such episodes.While such a claim may be true, it is not a descriptive but an explanatory claim.At best, essentialist descriptive claims might be of the form, e.g., "It appears to be part of the essence of emotions that they are either positive or negative".But whether this characteristic is indeed part of the essence of emotions is a question that cannot be decided on the basis of the appearances alone.Description, by itself, does not settle the explanatory challenges at issue.
There is a reverse fallacy, which we call the "explanatory fallacy".(Fine, 2017 uses the term, "foundationalist fallacy", to refer to the same phenomenon.)This is the fallacy of letting explanatory concerns shape the description of the phenomena under investigation.A good description should be immune from any inclination as to how the explanatory challenges at issue are ultimately to be settled.The explanans should fit the explanandum, not the reverse.Our description of what is to be explained should not be biased by the candidate explanation we may favor, on pain of ending up like the person who is looking for just what she found, whatever it is.For instance, if the reason why we describe desires in terms of their functional role is that we think that functionalism is the correct response to the mind-body problem, we are guilty of committing the explanatory fallacy.Early identity theorists who correctly drew attention to the phenomenological fallacy may themselves have fallen prey to the explanatory fallacy.Smart (1959Smart ( , 2007) ) insisted that the descriptions of our sensations should be topic-neutral, in the sense of being committed neither to physicalism nor to its negation.But from a descriptive standpoint, such a requirement is an unwarranted interference of explanatory constraints with the descriptive project.For why should the description of what is to be explained leave all explanations open?The explanans should fit the description, not the reverse.Descriptions should be neutral indeed, but in the sense of not being influenced by any explanatory commitments.
The descriptive psychologist, upon encountering the results of research in the philosophy of mind, might complain that the few descriptions he finds there are nearly all botched and biased.The philosopher of mind, in turn, will likely be annoyed by the way descriptive psychologists beat around the bush by drawing countless seemingly scholastic distinctions while failing to answer the explanatory question of how mental phenomena are related to physical phenomena.The truth is that proponents of these two approaches are involved in distinct and complementary projects.But while descriptive psychology without philosophy of mind is merely incomplete, philosophy of mind without descriptive psychology is simply impossible.Since we cannot engage in the explanatory project without first having fixed our explanandum, the best strategy is to confront this task explicitly and thoroughly.
Previous versions of the distinction
Although descriptive or naïve approaches so far have not taken hold in analytic social ontology, such approaches have been influential in general metaphysics and in philosophy more broadly.Arguably, Aristotle, when approaching a new subject-matter, tends to favor a descriptive approach which allows him, even in his considered views, to "save the appearances" as much as possible.In Nicomachean Ethics VII.2, for example, at the beginning of his discussion of weakness of the will ("akrasia"), Aristotle describes his method as follows: As in the other cases we must set out the appearances [phainomena], and first of all go through the puzzles.In this way we must prove the common beliefs about these ways of being affected -ideally, all the common beliefs, but if not all, most of them, and the most important.For if the objections are solved, and the common beliefs are left, it will be an adequate proof.[NE VII.2, 1145b2-7, transl. by Terence Irwin].
When Aristotle reports what appears ("phainesthai") or seems ("dokein") to be the case, the appearances in question, which may result from either perception or reason, also include commonly accepted beliefs ("endoxa") (Irwin, 1999, p. 317).At least initially, Aristotle neither endorses nor rejects the appearances he describes, but rather uses them as starting-points for his arguments, as the passage just cited indicates.The goal is to save the appearances, as much as possible; and, to the extent that not all of the appearances can be preserved, those that are deemed to be the most important ones among the appearances initially set out should be retained.
Although descriptive approaches are less common in analytic philosophy, at least three analytic schools of thought emphasize the value of description.First, ordinary language philosophers, under the influence of Wittgenstein, developed detailed descriptions of linguistic practices (see Cappelen and McKeever, 2022, for a recent reevaluation).Secondly, Strawson (1959) famously distinguished between "descriptive" and "revisionary" metaphysics and seems to prefer the former to the latter.According to Strawson, "[d]escriptive metaphysics is content to describe the actual content of our thought about the world, revisionary metaphysics is concerned to produce a better structure" (Strawson, 1959, p. 10).While ordinary language philosophers see descriptions as targeting ordinary linguistic practices, Strawson views descriptions as bearing on our conceptual scheme.A third kind of descriptive approach takes itself to be directed towards essences, rather than aiming to provide descriptions of our language or concepts.(A similar distinction between descriptive projects is discussed in Hacker, 2009, p. 137).
While the project of describing essences was highly popular in the aftermath of Brentano, such an essentialist descriptive approach is less widespread within contemporary analytic philosophy.This may be due first to skepticism towards essences and, second, to a relative lack of popularity of descriptive projects.In recent times, however, Fine (2017) has argued for a distinction between naïve and foundational metaphysics and similarly assigns priority, in some respects, to naïve metaphysics over foundational metaphysics.Both naïve and foundational metaphysics, in Fine's view, share a common interest in the nature or essence of things; but naïve metaphysics proceeds by studying the appearances, i.e., the world as it presents itself to us, while foundational metaphysics aims to discern the reality that lies behind these appearances.Naïve metaphysics, for Fine, can be pursued more or less independently of foundational metaphysics: thus, in investigating questions concerning the nature of numbers or sets, for example, one need not settle questions concerning their reality.When Fine speaks of "reality", here, he does not have in mind "existence"; rather, when something is really the case, in the sense at issue, the phenomenon in question is part of the "ultimate furniture of the world", i.e., truths concerning the types of entities in question must be included in a complete description of the world.In fact, in cases of conflict, Fine advises that the claims of naïve metaphysics, if anything, should be given precedence over those of foundational metaphysics.There is in general a danger, so Fine warns, that premature attempts to address foundational questions will lead to a distorted depiction of reality if the subject-matter in question is not first investigated properly from a naïve point of view.Both realists and anti-realists concerning a given phenomenon might agree on the need to "save the appearances"; but what this comes to can only be appreciated once we have determined, from the point of view of naïve metaphysics, what the appearances are that are supposedly worth saving.
The distinction between naïve and foundational metaphysics Fine is after is not one between the "ordinary" (or "pre-philosophical") and the "philosophical", since for Fine no clear dividing line can be drawn between what is part of philosophy and what is part of our everyday life.Unlike Strawson's distinction between descriptive and revisionary metaphysics, Fine's distinction between naïve and foundational metaphysics is not primarily aimed at characterizing our conceptual scheme or the structure of our thought.In contrast to Strawson, Fine does not view naïve and foundational metaphysics as competitors; rather, they are meant to complement each other, since both are directed at studying the same world, but through different lenses, one by focusing on the appearances and the other by aiming to capture the reality that lies behind these appearances.Just as naïve metaphysics does not necessarily proceed by aligning itself with our ordinary language or thought, so similarly there is nothing built into foundational metaphysics as such that would privilege science over common-sense.
Descriptive metaphysics
We agree with Strawson and Fine on the centrality of the meta-theoretical distinction between descriptive and non-descriptive metaphysics.Like them, we also believe that some priority must be given to the descriptive project.The way we propose to draw the distinction, however, significantly differs from each of their proposals.
Descriptive metaphysics, we contend, purports to describe the contents of appearances.We construe appearances very broadly as including perceptual appearances, inner and outer consciousness, memory, intuitions, affective presentations (e.g., something's feeling dangerous), beliefs, opinions, expectations, etc. (Though beliefs are excluded by Huemer (2007), the resulting view is nevertheless closely related to ours.)Moreover, we consider appearances as being intentional: the appearing (the act) is distinct from what appears (the content).Finally, we assume that the content of an appearance is propositional (although not necessarily conceptual).As a consequence, appearance reports always involve an appearance operator of some kind: e.g., "it seems that p"; "it appears to be the case that p"; or "p appears to be the case".Thus, appearances include whatever seems to be the case: that orange is more like red than like blue; that the Earth's population is increasing; that some vegetables require more water to grow than others; that 2 + 3 = 5; that there are trees; and so forth.
Descriptive metaphysics describes what appears and not primarily the appearing of these phenomena, except when the appearing is itself taken to be part of the appearances in question.To stress this point, and building on Dancy (2000, p. 128ff), we prefer to use the following appositional construction of the appearance operator to capture the objects of descriptive metaphysics: "p, as it appears"; "p, by the look of things"; "p, seemingly".It is indeed the contents of such statements that descriptive metaphysics seeks to describe properly, rather than the fact that they appear or their appearing.When p concerns mental states, the claim in question belongs to descriptive psychology.When p concerns numbers, the claim belongs to descriptive arithmetic.When p concerns social phenomena, the claim belongs to descriptive social ontology or descriptive social science.(Below, we take up the question of how to distinguish descriptive claims that belong to social ontology from empirical descriptive claims that are part of the social sciences.) We saw that Strawson, Fine and ordinary language philosophers disagree about the target of descriptive metaphysics.Does descriptive metaphysics concern the structure of our language, our conceptual schemes, or essences?Insofar as these claims are construed as meta-philosophical claims about the nature of descriptive metaphysics, we take these three approaches to be equally mistaken.Descriptive metaphysics, properly construed, leaves such questions open.Whether appearances reflect essences, linguistic structures, a priori categories of our cognitive system, or yet other entities are questions that cannot be settled by descriptive metaphysics but only by explanatory metaphysics.As far as descriptive metaphysics is concerned, all we have are claims to the effect that, e.g., material objects appear spatially located; processes appear to have temporal parts; some properties (e.g., being red, being elegant, or being painful) appear to be gradable, while others (e.g., being obligatory, being president or being an odd number) appear not to be gradable.Such appearances can then be explained in many different ways.A realist will think that objects appear to be spatially located, because they are in fact spatially located, and that our cognitive system represents their location as it is.A Kantian will think that objects appear to be spatially located, because space is an a priori form of sensibility.Neither appearances nor their description can by themselves decide whether appearances are in fact about, e.g., worldly essences, a priori forms of sensibility, conceptual schemes, or linguistic projections.This is not to say that descriptive metaphysics cannot contain claims about the existence or essence of some phenomena.But such claims are always under the scope of an appearance operator of some form, such as "x is essentially F, as it appears" or "x exists independently of its being perceived, as it appears".In the latter case, the apparent lack of a dependence relation between the existence of a certain phenomenon and its being perceived might itself be presented to us as part of the appearance.As before, however, whether it really is the case that the phenomenon in question can exist independently of its being perceived can only be determined once the investigation has proceeded from the descriptive to the explanatory phase.
Counterexamples are one familiar kind of descriptive claim.A counterexample does not purport to explain appearances but aims to report appearances that allegedly clash with explanatory commitments.Once it is seen that counterexamples are descriptive claims, it becomes easier to spell out the variety of possible reactions one might have to counterexamples.Consider, for instance, the view that no two entities of the same kind can be at exactly the same place at the very same time.Sounds have been advanced as a counterexample to this view (Casati & Dokic, 1994): as it appears, two sounds can be at the same place at the same time, as is illustrated for example by a chord played on a guitar.Upholders of the initial view may reject the counterexample in three main ways.First, they might deny the existence of the appearance in question: it is not the case, they might argue, that sounds appear to be exactly in the same place at the same time, because sounds do not appear to have an exact location.Secondly, they might grant that there is such an appearance but argue that the appearance is deceptive, in which case it would still have to be explained why we are subject to the illusion in question.Thus, given this response, though sounds appear to be exactly colocated, sounds in fact occupy distinct locations, but the determination of their exact locations exceeds our sensory thresholds.Thirdly, one might grant that there is such an appearance and that it is veridical, but argue that the counterexample has misdescribed the appearance, so that, once properly described, the appearance no longer clashes with the explanatory framework in question.According to this third line of response, the correct way to describe co-located sounds is to maintain that sounds do not appear to be at the same place at the same time, but rather appear to occur at the same place simultaneously, which no longer contradicts the initial view (Hacker, 1982).Finally, upholders of the view under attack might grant the counterexample: that is, they might agree that there is such an appearance; that the appearance is veridical; and that it has been properly described in a way that contradicts their view.Proponents of this last response might instead opt to restrict their view: for example, they might want to allow that, while no two material objects can be in the same place at the same time, sounds do not belong to the category of material objects and are therefore not affected by the apparent counterexample in question.
Explanatory metaphysics
We contrast descriptive metaphysics with explanatory metaphysics.Explanatory metaphysics, as we understand it, encompasses Finean foundational metaphysics as well as Strawsonian revisionary metaphysics.Explanatory metaphysics, as we conceive of it, is concerned not with how things appear, but with how things are.Explanatory metaphysics thus divides into two main branches: foundational metaphysics and revisionary metaphysics.
According to our usage, a metaphysician adopts a foundational stance with respect to some appearances when she takes the content of these appearances at face value and purports either to explain these contents or to provide a picture of the world which can accommodate these appearances.We use "explanation" here broadly to encompass a variety of different relations that might be said to hold among the phenomena in question or their descriptions, including reduction, identity, grounding, supervenience, functional realization, analysis, paraphrase, and even the position that a given content should be left unexplained, i.e., that things really just are as they appear to be, and that nothing needs to be added to, or taken away from, the description (as is illustrated, for example, by Campbell, 1993's simple view of colors).This position, which consists in just endorsing the appearances as they are, and stopping there, is not a descriptive position, but an explanatory one.For the facts, if indeed they are facts, that the appearances are not deceptive and that nothing explains them are not presented in the appearances themselves.
We will say that a metaphysician adopts a revisionary stance with respect to some appearances, by contrast, when she takes the appearances in question to be deceptive.Consequently, while a metaphysician who chooses such a revisionary perspective does not need to explain the contents of the appearances in question, she must nevertheless explain why things appear to us differently from how they in fact are.For example, a Humean about personal identity, who denies that anything genuinely persists through time in the face of qualitative change, may invoke the human customary propensity to posit persisting persons when in fact there are only distinct bundles of perception as an explanation of why it nevertheless appears to us to be the case that one and the same person can persist through qualitative change.
Note that, though some explanatory metaphysicians may be more inclined towards revising appearances while others are more invested in retaining appearances, these characterizations, in their current formulation, are too general: it is possible to maintain that some appearances are veridical while others call for revisions.In practice, most explanatory frameworks in metaphysics will simultaneously advance both foundational and revisionary claims.This is in part because appearances may contradict each other, so that giving up some appearances, as is done by the revisionary metaphysician, is in practice often unavoidable.
As in the philosophy of mind, it is important to draw a distinction within metaphysics as well between revisionary claims and heterodox descriptive claims.The revisionary or foundational status of a claim is not intrinsic to it, but depends on the descriptive position that is adopted by its proponent.Consider, for example, the claim that matter does not exist.Most philosophers would consider this to be a revisionary claim.Berkeley, however, maintains that his assertion that matter does not exist is not revisionary but foundational.In his view, the world present itself to us in such a way as to suggest that matter does not exist; relatedly, so Berkeley holds, the thesis that matter does not exist is "most agreeable to commonsense" (Berkeley, 1843, p. 172).Thus, the claim that matter does not exist, on this conception, follows from these two commitments: first, the descriptive claim that, as it appears, matter does not exist; and, secondly, the explanatory claim that appearances should be taken at face value.Berkeley's descriptive claim, that matter appears not to exist, is not a revisionary claim, but an unorthodox descriptive claim.As a result, Berkeley's explanatory thesis, that matter does not exist, is not revisionary, but foundational.By contrast, the claim that matter exists, in Berkeley's view, should be regarded as revisionary, rather than foundational.
The priority of descriptive metaphysics
Descriptive metaphysics is prior in the order of explanation to both foundational and revisionary metaphysics.Descriptive metaphysics is prior to foundational metaphysics: for in order to be able to assess the success of an explanation, one must first be clear about what it purports to explain.To accomplish this goal, an accurate description of the explanandum is needed.Descriptive metaphysics is also prior to revisionary metaphysics: for if we are going to fix defects in our existing ways of representing the world, we first need to be clear on how the world presents itself to us.Descriptive metaphysics, in our view, is prior to explanatory metaphysics not in a normative but in a factual sense.This factual priority is not the same as the normative priority often advocated by commonsense philosophers.On this latter view, descriptive claims that reflect commonsense beliefs are not only prior in the order of explanation but also enjoy a higher epistemic status.On a strong version of this epistemic priority, commonsense beliefs are held not to be revisable on philosophical grounds (Boulter, 2007); on a weaker version, commonsense beliefs only have default justification and shift the burden of proof to their opponents (Lycan, 2019;Guillon, 2021).Our priority claim is even weaker than this latter view.It is one thing to claim that we cannot avoid starting with a description of the appearances; it is quite another thing to claim that the descriptive enjoys some sort of normative priority, even if only in a weak sense.From the fact that appearances are our inescapable starting point, it does not follow that these appearances are accurate or trustworthy.Our plea for descriptive metaphysics in general, and descriptive social ontology in particular, does not hang on a positive prejudice towards commonsense, although it is compatible with it.As a result, given our proposal, the dispute in question between broadly Moorean philosophers, who hold that appearances enjoy some presumption of truth, and revisionary philosophers, who believe that there is no good reason to ascribe normative priority to appearances (see, e.g., Cappelen, 2020), will have an impact on explanatory metaphysics, but does not affect descriptive metaphysics.
The descriptive and the explanatory projects thus differ with respect to their goals: the former aims at providing descriptions, while the latter attempts to produce explanations.Given this difference with respect to their aims, we also expect these two approaches to be guided by distinct criteria of success.Thus, we might, on the one hand, judge a description to be good if it is adequate and comprehensive, i.e., if it fits the appearances and does not leave out important distinctions and relations that are found in the phenomena at hand.An explanatory approach, on the other hand, might invoke such criteria as parsimony and powerfulness, in the sense that a good explanation should explain as much as possible with the fewest explanatory posits possible.By contrast, we may not be particularly concerned with parsimony while being engaged in the project of providing adequate and exhaustive descriptions of the phenomena under investigation.Given these distinct goals and criteria for evaluating success, it is furthermore natural to take the activities of describing, on the one hand, and explaining, on the other hand, to require different sets of skills from agents who engage in them.Providing good descriptions, on the one hand, might require an agent to exhibit acuity, i.e., keenness of observation and the ability to recognize distinctions which might otherwise be easily missed.An agent who is skilled at providing good explanations, on the other hand, might be characterized by ingenuity, e.g., the ability to combine the fewest explanatory elements possible to yield the largest possible explanatory gains.
Descriptive social ontology
The very same distinctions, we maintain, apply within social ontology: In social ontology, as in philosophy of mind as well as in general metaphysics, one should carefully distinguish the task of describing social appearances, from the task of explaining them, by adopting a foundational or a revisionary approach.Here again, we maintain, that the description of social phenomena is prior in the order of inquiry to their explanation.Things are more complicated in the case of social ontology, however, for two reasons.
First, social phenomena are not pre-theoretically well-delineated.What counts as social or not will depend on the general theory the social one endorses.Given this, the very extension of the field of social ontology seems to depend on prior theoretical commitments about the nature of social phenomena.As a result, descriptive social ontology appears to presuppose a general account of the social in order to determine which phenomena should be described in the first place.If true, the idea of describing appearances prior to endorsing certain explanatory commitments would then be doomed to fail: any description of the social seems to rely on a prior explanation of the nature of the social realm, in order to delineate which phenomena fall within the scope of descriptive social ontology. 2ur answer to this worry is to argue that, as far as a descriptive social ontology is concerned, a "bottom-up" approach to social phenomena is preferable to a "top-down" approach.Top-down approaches start by asking what all and only social phenomena have in common in virtue of which they are social.Many existing accounts of social reality take this line and aim at unifying social phenomena under a common explanatory scheme (Epstein, 2016;Guala & Hindriks, 2015;Lewis, 1969;Searle, 1995Searle, , 2010, to mention but a few).In contrast to these top-down approaches, descriptive social ontology, in our view, is better served by embracing a bottom-up strategy.Instead of beginning with an examination of the entire social genus as a whole, bottom-up descriptive social ontology proceeds by focusing on a broad range of specific types of social phenomena.Not much hangs on the sense in which these phenomena are social at the outset: "social phenomena", according to this usage, can just be taken to mean phenomena that are regarded as "social" in some admittedly loose pre-theoretical sense of the term.Vague as this sense may be, it is sufficient to exclude, e.g., the study of molecules, black holes, or the auditory system from social ontology.One chief interest of a bottom-up approach for descriptive social ontology is to leave as open as possible further explanatory moves about the unity of social phenomena.In particular, a bottom-up approach initially leaves open the possibility that what unifies social phenomena is neither some one property that is shared by all and only social phenomena nor a relation that all and only social phenomena bear to some common source (e.g., collective intentionality).Rather, the social realm may simply turn out to be characterized by a plurality of essential distinctions and connections.If this latter possibility were to obtain, social phenomena would form a "flat" network consisting of various essential interconnections, no single one of which bears the responsibility of unifying the whole lot.If social reality were to fit this mold, then it would seem that only a bottom-up approach to the description of social phenomena would be able to uncover that this is in fact the structure of the social world.The second reason that complicates the application of the descriptive/explanatory distinction to the field of social ontology stems from the different uses of the term, "descriptive", within this discipline. 3Indeed, "descriptive" has recently come to be used in a distinct way in social ontology.In the aftermath of Haslanger (2000Haslanger ( , 2005Haslanger ( , 2006)'s influential work, descriptive social ontology is often contrasted, not with explanatory social ontology, but with ameliorative social ontology.How does ameliorative social ontology fit into our mapping of this discipline?We propose that ameliorative social ontology should not be viewed as a branch of explanatory social ontology.Explanatory social ontology is a theoretical endeavor, in the sense that it aims at epistemic values, such as truth, knowledge, or understanding.By contrast, the goal of ameliorative social ontology is not truth or other such epistemic values, but rather non-epistemic values, such as moral goodness, justice, or freedom.Ameliorative social ontology aims at providing theories and concepts that are helpful for the purpose of bringing about social change (or preserving social stability).Thus, the aim of explanatory social ontology is to discover truths about the social world, whereas the aim of ameliorative social ontology is to change or preserve (aspects of) the social world.Based on Aristotle's distinction between the theoretical and practical sciences (Metaphysics, 1025b25, 1026a18-19, 1064a16-19, b1-3), we propose to distinguish descriptive and explanatory social ontology, on the one hand, as branches of theoretical social ontology, from ameliorative social ontology and its cognates, on the other hand, as branches of practical social ontology: While the distinction between theoretical and practical social ontology has been one important focus of attention in recent years, as we just saw, the distinction between descriptive and explanatory social ontology has not so far received much systematic attention.It can, however, be spotted in various places in the literature, among them the following.
First, as in the philosophy of mind and in general metaphysics, philosophers belonging to the Brentanian tradition are more descriptively inclined in their approach to social phenomena than those belonging to competing traditions.Reinach (1989aReinach ( [1913]])'s account of speech acts (e.g., promising, ordering, granting, or submitting), of ownership and property rights, or of legal representation constitutes a remarkable achievement in this area.Scheler (1916)'s descriptive account of the varieties of social unities (e.g., herds, communities, societies, or collective persons) is another instance of a prolific descriptive social ontology, as is Stein (1925)'s descriptive account of the State, of its sovereignty, of its actions, and of the way it enacts laws (see Taieb, 2020, for recent discussion).Reinach, Scheler and Stein are not just descriptive social ontologists: all three assume, furthermore, that essentialist claims about, e.g., speech acts, social unities or the State, can be derived from a description of social appearances.This, on our account, is an explanatory claim that goes beyond the mere description of social appearances.Had these phenomenologists moved on without argument from the description of such intuitive claims about essences to essentialist conclusions, they would have been guilty of the phenomenological fallacy.This, however, is not how these theorists proceed: all of them propose arguments to the effect that appearances concerning essences should be taken seriously (see in particular Reinach, 1989b).
By contrast, few philosophers in the analytic tradition have adopted a mainly descriptive approach of social phenomena.Among the most notable exceptions to this generalization, though, are Austin (1975)'s description of the structure of what Reid (2011Reid ( [1788]], chapter VI) called social acts4 and Hart's descriptive approach to legal theory (see in particular the 1994 postscript to The Concept of Law ).The distinction between description and explanation also surfaces in some places.Thus, Gilbert (2011) anchors her study of promises on two "intuitive points".The first is that "if someone has promised to do something, then he is obligated to do it, by virtue of his promise" (Gilbert, 2011).The second is that promisory obligations are directed at the promisee.Pinpointing these two points belongs to the descriptive social ontology of promises.In addition to these descriptive observations, Gilbert advances two further claims.First, she maintains that these intuitive points are veridical, or at least that they have some presumption of truth: she thus endorses a foundationalist ontology of promises.Secondly, she proposes a reductive analysis of promises in terms of joint commitment and argues that this analysis vindicates these intuitive observations.One virtue of making these descriptive claims explicit and distinguishing them from foundational claims, is that disagreements concerning the description of the phenomena that are to be explained can then be easily marked off from disagreements concerning the proper explanation of these phenomena.Haslanger (1995) opposes manifest concepts to operative ones.The manifest concept of coolness, for example, is that of an intrinsic property of some individuals; the operative concept of coolness is that of an extrinsic property of individuals determined by the evaluation of an in-group.The operative concept "debunks" the manifest concept, showing it to be an illusion.Haslanger's description of the manifest concept of coolness belongs to descriptive social ontology; Haslanger's account of the operative concept of coolness belongs to explanatory, or more precisely to revisionary, social ontology.The descriptive/explanatory distinction is also at play in Horden & López de Sa (2021)'s recent analysis of groups.Identifying groups with pluralities, Horden and López de Sa face the objection that different groups may turn out to be co-extensive.They grant that there is an "appearance of distinct but coextensive groups" (Horden & López de Sa, 2021), a claim that belongs to descriptive social ontology, but then proceed to put forward two explanatory claims.First, they argue that the appearance in question is deceptive: for it would lead us to say, for example, that, if Paul is Dean of the Faculty and the husband of Mary, the husband of Mary is not the Dean of the Faculty.Once the deceptive nature of the appearance has been diagnosed, an explanation for why it occurs can then be given: namely, in a nutshell, by recognizing that pragmatic appropriateness has been conflated with truth.These latter two claims belong to revisionary social ontology.In all three cases-Gilbert's foundationalism about promises, Haslanger's distinction between manifest and operative concepts; and Horden and Synthese (2023) 202 :60 López de Sa's revisionism about co-extensive groups-explanatory social ontology is dependent on a prior description of social appearances.
Social appearances
While the appeal to appearances may not strike us as especially puzzling in the philosophy of mind, where consciousness or internal perception may be thought to provide us with some access to mental phenomena, the assumption that social phenomena appear to us in a particular way may strike us as less natural.In our view, however, such a skeptical attitude arises from an overly narrow understanding of social appearances.Social appearances, as we conceive of them, are not tied to any specific mental faculty or mode of access.There are clearly many claims about the social world on which we can agree prior to committing ourselves to a particular explanatory framework.The contents of these claims are social appearances.We thus think of social appearances in a very encompassing way and take some among them to be accessible to us prior to conducting any empirical studies.Here are a few examples of candidate descriptive social claims: it appears to be the case that… claim to be a problem for our proposed distinction between the descriptive and the explanatory: although appearances are in principle accessible to the keen observer, a thorough description of them may often require appeal to theoretical terms.We will take up the concern that such descriptive claims may strike us as purely philosophical or trivial in Sect.5.3.Some other descriptive claims require controlled studies or complex observations.Thus, data gathered by social scientists (e.g., concerning such phenomena as extreme global poverty, racial prejudice, the stock market, the under-representation of certain groups, or the occurrences of historical events) can supply descriptive claims that stand in need of an explanation.Such descriptive claims sometimes clash with common beliefs, which themselves deliver descriptive claims.To illustrate, consider the claim that, in the last 20 years, the share of the global population living in extreme poverty has increased.52% of respondents surveyed across the world accept this claim (Roser & Nagdy, 2014;Ipsos, 2017).This result clashes with the available data which shows that extreme poverty has in fact decreased during the period in question.When there is a conflict between two descriptive claims, one needs to leave the descriptive inquiry and move to an explanatory investigation in order to settle the dispute in question.In the specific case just cited, the obvious move is (i) to revise the common belief that extreme poverty has increased and to explain why most people falsely believe this to be the case; as well as (ii) to endorse the informed descriptive claim that extreme poverty has dropped and to try to uncover the factors that led to this decrease.Thus, tensions between folk theories and empirical data (e.g., inaccurate stereotypes or misperceptions) as well as, more generally, tensions between conflicting descriptive claims can arise at the level of descriptive social inquiry and call for a resolution within explanatory social inquiry.
Descriptive social ontology vs. descriptive social science
The distinction we have just introduced between description and explanation cuts across all domains of social inquiry, and applies not only to social ontology, but also to the social sciences.This raises the following question: what distinguishes descriptive claims in social ontology from descriptive claims in the social sciences?We regard claims above as belonging to descriptive social ontology.By contrast, claims to the effect that inflation has increased, that racism has decreased, that extreme poverty has fallen, and statements of this sort, are all descriptive claims that belong to the descriptive part of the social sciences.
For those theorists who, like Quine (1969), take philosophy to be continuous with science, the difference between descriptive social ontology and descriptive social science will be a matter of degree.By contrast, those who want to draw a sharp distinction between descriptive ontology and descriptive empirical science can endorse two main strategies.The first is to distinguish different modes of knowledge, by arguing, typically, that descriptive claims in the social sciences require refined methods of collecting empirical data, while descriptive claims in social ontology proceed in a more a priori fashion.One problem for a proposal of this kind is that some descriptive claims advanced in the social sciences also have an a priori flavor.Thus, Menger (1892), in describing the explanandum of his study on money, remarks as follows: "We have to explain why it is that the economic man is ready to accept a certain kind of commodity [money], even if he does not need it" (Menger, 1892, p. 239).Menger did not have to establish by empirical means that economic agents exchange goods and services for money: this is an observation to which he could help himself from the start.Yet this claim, one may think, does not belong to descriptive social ontology.A more promising way of drawing the distinction between descriptive social ontology and descriptive social science, we submit, is to focus not on the methods by which claims are established but on the subject-matter under investigation.Descriptive ontological claims tend to target specific aspects of social phenomena, namely those that are associated with what are sometimes called "formal" concepts (Mulligan, 2014), which include for example such notions as essence, existence, identity, parthood, necessity, possibility, ontological dependence, ontological categories (e.g., the category of particulars, properties, continuants, states of affairs, or facts), as well as grounding or anchoring relations.(See Epstein, 2015, Chap. 5, for the distinction between grounding and anchoring.)On this proposal, the descriptive claim, "Teams appear to have members", is a statement that belongs to descriptive social ontology only if it is construed, for instance, as making a claim about one of the formal notions just mentioned, as in, for example, "It appears to be part of the essence of teams to have members".Although we suspect that most, if not all, the claims cited in are tacitly essentialist descriptive claims, we will not defend this commitment in the present context (though see Koslicki and Massin, 2023, for a more thorough discussion of how a descriptive philosophical inquiry should be understood as targeting essences).Note that, insofar as essences figure within the scope of the appearance operator, such claims do not commit us to the reality of essences, only to what we might call "phenomenal essentialism", the view that social entities appear to have essential and accidental properties.Anti-essentialists can very well accept that there are appearances about essence; these theorists will then simply adopt a revisionary stance and take such appearances about social essences to be deceptive.
In a similar vein, a thorough description of the phenomena at issue may alert us to the apparent presence of an asymmetric explanatory connection, e.g., as denoted by some occurrences of "in virtue of", "depends", "because", and other such connectives.Whether the phenomena in question really are connected in this way, or what the nature of the relation in question might be, however, cannot be settled by purely descriptive means but requires an explanatory investigation.To illustrate, the descriptive claim that, as it appears, promises generate claims and obligations, cited above in ( 6), seems to suggest an asymmetric explanatory connection between an agent who has made a promise and the commitments the promisor has thereby incurred towards the promisee.We may use the verb, "to generate", to mark the productive relationship which appears to obtain among the phenomena in question; but the description, as such, should be read as leaving open whether, in reality, promises, claims, and obligations are in fact related in this way and, if so, whether the relation at issue might be ontological dependence, metaphysical causation, grounding, anchoring, or some other connection.
This leads us to a central bone of contention within descriptive social ontology, and within descriptive metaphysics more broadly.Some philosophers take social appearances to have rather poor and simple contents.Thus, in the same way as Hume conceived of perceptual appearances in terms of spatio-temporal associations between simple and separable sensations, some descriptive social ontologists will limit descriptive social claims to mere correlations between what they take to be independent social phenomena.By contrast, other philosophers instead consider appearances to have a rich and complex content.Thus Stumpf (1873) and Husserl (1900), maintained, contra Hume, that in order to describe perceptual objects and contents properly, one needs to recognize not just separable parts, but also dependent parts such as colors, extensions, motions, Gestalten, and intensities.(Dependent parts, which Husserl calls "moments", approximately correspond to what contemporary theorists call "tropes".)Such moments are, on their view, essentially dependent.As Stumpf (1873, p. 113) puts it: "Their nature forbids them to have an isolated and independent existence from other contents in representation". 5Now, in the same way as Stumpf and Husserl criticized empiricists for having neglected the complexity of visual appearances, descriptive social ontologists, such as Reinach (1989aReinach ( [1913]]), Scheler (1916) and Stein (1925), consider the content of social appearances to go beyond mere correlations.They maintain that the social world, as it appears to us, calls for descriptions in terms of such formal notions as essence and ontological dependence.For instance, promises, on Reinach's account, do not just appear to be correlated with intentions and obligations: rather, Reinach takes it to be part of the essence of promises that they presuppose intentions and generate obligations.
The Humean view of the content of social appearances may explain the lack of uptake descriptive social ontology has experienced within analytic philosophy.For if all social appearances only reveal to us correlations between seemingly independent social phenomena, descriptive social ontology simply boils down to descriptive social science.On the other hand, if essence, necessity, ontological dependence, and other such formal notions figure in a proper description of the content of social appearances, then the task of providing such descriptions is not exhausted by the empirical study of social correlations.We take this disagreement between an austere and a rich conception of social appearances to constitute a further reason to undertake a careful descriptive investigation before proceeding to explanatory social ontology.This debate, in particular, should not be settled by explanatory commitments one brings to the scene prior to having engaged in a thorough description of the appearances.To deny that social phenomena appear to enter into essential connections, for example, because one fails to accept the existence of such connections, or to maintain that social phenomena present us with essential connections because one accepts their existence, would exemplify the explanatory fallacy and violate the thesis that descriptive social ontology ought to be prior to explanatory social ontology in the order of inquiry, to which we now turn.
The priority of descriptive social ontology
Our central claim is that descriptive social ontology is not only distinct from, but also prior in the order of inquiry to, explanatory social ontology.While few philosophers in the analytic tradition explicitly embark on a thorough description of the phenomena prior to attempting to explain them, we believe that the priority of descriptive social ontology to explanatory social ontology is nevertheless largely implicitly acknowledged.A case in point is Searle's conception of the construction of social reality, as defended for example in Searle (1995Searle ( , 2010)).Searle is engaged in a foundationalist project.While he makes clear from the very start that his theory must meet certain explanatory desiderata, in particular that institutional facts must be explained in a unified manner that is compatible with Searle's naturalism, no thorough description is given by Searle of what he takes the explananda of his foundationalist theory to be, apart from a list of examples.Though Searle is not a descriptive social ontologist, his implicit acceptance of the priority of the descriptive over the explanatory nevertheless becomes apparent in how he handles objections.A number of objections have been leveled against Searle's account.Some of these objections do not target Searle's descriptive claims, but instead take issue with Searle's proposed explanations of the appearances in question.An objection of this sort is leveled against Searle's account, for example, by Hindriks & Guala (2015), who argue that constitutive rules can be reduced to regulative rules.Likewise, Tieffenbach (2010) argues on the basis of Menger ( 1892) that, contrary to Searle's explanatory account, the origin of money can be explained without resorting to collective intentionality.In both cases, it is the alleged lack of parsimony of Searle's explanatory account that is targeted by the objections in question.
Other objections, however, are not addressed to Searle's explanatory framework, but rather highlight certain descriptive deficiencies of Searle's account.In this vein, such phenomena as nations or electronic money (Ruben, 1997;Thomasson, 2003;Smith, 2003), whose creation does not seem to require the collective acceptance of statusfunction ascriptions to a physical phenomenon, have been used to make trouble for Searle's approach.Likewise, it has been argued that unintended social phenomena or by-products of intended phenomena, such as recessions or racism (Thomasson, 2003) or business cycles (Friedman, 2006), are incompatible with Searle's intentionalist framework.Such objections adduce counterexamples to Searle's theory.Counterexamples, as we argued above, are descriptive claims, in the sense that they point to the presence of certain appearances which the theory in question fails to address adequately.To illustrate, as it appears, inflation is a social phenomenon which does not consist in the collective assignment of status-functions through declarations.If such an appearance is regarded as veridical, we arrive at a counterexample to the view that all institutional phenomena consist in the collective assignment of status-functions.Searle (2006aSearle ( , b, 2010), p. 22), p. 22)'s answer to these objections consists in conceding that these considerations do indeed present counterexamples to his previous account and that, in response, his theory must be restricted in certain ways.Thus, in order to make room for unintended social phenomena, Searle introduces a distinction between "ground floor institutional facts", such as monetary exchanges, and "systematic fallouts", such as inflation, and maintains that his theory of institutional facts concerns only ground floor institutional facts.As long as other institutional phenomena can be viewed as "macro consequences" of these, such a restriction of his original theory, so Searle maintains, is harmless (see also Burman, 2015).This answer is unsurprising in light of our proposed understanding of the descriptive/explanatory distinction.In its reformulated form, Searle's theory does achieve better descriptive adequacy; but it does so at a loss of explanatory power, since two explanatory posits are now needed to explain institutional facts: the imposition of status functions plus an explanatory connection between micro-and macro-phenomena.Had Searle started from a description of the distinction between intended and unintended institutional social facts or their by-products, instead of adjusting his theory afterwards to address counterexamples involving apparently unintended institutional phenomena, he might have arrived at the same theory by way of a different procedure.As long as counterexamples are treated as they are by Searle, no explanatory fallacy results, and the priority of the descriptive is maintained.The only difference is that, when a theory is adjusted retroactively in the face of counterexamples, the description of the appearances is no longer temporally prior to their explanation.In other ways, however, the description of the appearances maintains its priority over their explanation in the order of inquiry, as long as counterexamples are not simply dismissed on the grounds that they are incompatible with the explanatory framework in question.
Searle's revised account, however, is open to further counterexamples in cases in which systematic fallouts do not arise from institutional facts but from individual actions, independently of any collective intentionality.Spontaneously formed pathways, pollution and traffic jams constitute natural examples of the phenomenon in question. 6Various moves are open to Searle in response to such counterexamples.He might further restrict his theory of institutional facts; or he might reject such counterexamples by arguing that, contrary to the appearances, these cases do in fact involve some concealed collective intentionality.One move, however, that is not open to Searle is simply to dismiss such counterexamples on the basis that they do not fit his proposed explanatory framework.To argue, for instance, that given the success of the statusfunction account of institutional facts, traffic jam must be redescribed in such a way that they involve collective intentionality will plainly be an instance of the explanatory fallacy, i.e., of letting one's explanatory agenda contaminate one's descriptions.
We have argued that descriptive social ontology is prior to explanatory social ontology in the order of inquiry.But what about practical social ontology?Should critical or ameliorative considerations not take precedence over descriptive ones?We think not, although we cannot argue for this stronger claim in the present context.In our view, theoretical social ontology-descriptive social ontology together with explanatory social ontology-should similarly take precedence in the order of inquiry over practical social ontology in the following sense.In order to determine whether a certain condition or feature of the social world needs to be improved, whether it should be improved, or in which direction and how it can be improved, one must first arrive at a clear vision of what the condition or feature in question is and what moral or political value(s) it actually has.Any attempt to improve or preserve a certain condition or feature relies on hypotheses, be they tacit or explicit, about its nature and worth.Even the project of "de novo conceptual engineering", recently proposed by Chalmers (2020), which advocates building new concepts in place of fixing old ones, presupposes a careful description and evaluation of the concepts that are already in place.One reason for this is that the very claim that a concept is new presupposes that it is not already included among the familiar concepts that are in our possession to begin with.This latter assessment requires that the concepts that are already in place be carefully studied descriptively before they are replaced by new ones.
Objections
In what follows, in order to clarify further the distinction we have proposed to draw between descriptive and explanatory approaches, we consider the following four objections.First, given the theory-ladenness of observation, one might wonder whether a distinction between description and explanation can really be drawn if any attempt at describing the appearances already implicates theoretical commitments.Secondly, although descriptions can apparently sometimes be revised on the basis of insights gained through explanation, it is not immediately obvious how or whether our approach makes room for this possibility.Thirdly, if appearances are immediately accessible to any inquirer who cares to look, we might expect that descriptive investigations are likely to yield only trivial or uninteresting results.Fourthly, on sensitive issues such as gender and race, political biases appear to be insurmountable, so that the idea of sharply distinguishing theoretical from practical social ontology might seem to be doomed to fail.
The theory-ladenness of observation
The distinction between the descriptive and the explanatory should not be conflated with the distinction between what is "pre-theoretical" or "theory-neutral", on the one hand, and what is "theoretical", on the other hand.While there may be some descriptive statements (e.g., "I am older today than I was yesterday") which strike us as plausible independently of any particular theory within which they are embedded, it should not be assumed in general that descriptive statements can always be formulated in an entirely theory-neutral way.
The first reason why descriptive metaphysics need not be committed to theoryneutrality is that, although the contents of appearances may be theory-neutral, the descriptions of these contents may require the introduction of new concepts and terms which are not already available in our everyday conceptual scheme or ordinary language prior to the attempted description.As Cappelen (2020, p. 147) puts it, "a 'pure' descriptivist would show a lack of understanding for the need to assess and improve on the concepts used to engage in the descriptive project".We fully agree, and so do many descriptivists in effect.To illustrate, the following distinctions and concepts have been introduced in Brentano's school, among many others, to describe appearances properly: the distinctions between presentations and judgments (Brentano, 1874); between affective sensations such as pains, itches or bodily pleasures, and emotions such as suffering, annoyance or enjoyment (Stumpf, 1928); between simple and complex colors (Brentano, 2009, pp. 127-160), between actions and their products (Twardowski, 1912), or between values and norms (Meinong, 1917); the concepts of representational content (Twardowski, 1977); of ontological dependence (Stumpf, 1873;Husserl, 1900); of Gestalt (Ehrenfels, 1890); or of boundaries (Brentano, 1988).None of these distinctions and concepts are uncontroversial; but each of them was introduced for a descriptive purpose; and some of them, such as the concept of ontological dependence, have been widely re-used in explanatory metaphysics following their introduction for descriptive purposes.
As a further illustration of the distinction we have in mind, consider for example the phenomenon of akrasia.Suppose we characterize an akratic agent, as neutrally as possible, as an agent who, in some sense, acts against their own better judgement.Even while still being engaged in the descriptive project of merely stating what the phenomenon of akrasia seems to consist in, we may find it necessary to take recourse to such terms as "action", "intention", "motivation", "evaluation", and "judgment", which may eventually go on to play a central role in an explanatory account of what goes on in akratic agents.Nevertheless, we should distinguish a mere description of what the phenomenon of akrasia seems to consist in from an explanation of what in fact goes on within an akratic agent, even when the descriptive statement already makes use of vocabulary which comes to be assigned a theoretical role within a particular proposed explanatory account.The descriptive phase targets the sense in which, for example, the Socratic, Platonic, Aristotelian, and Davidsonian account of akrasia all aim at providing a solution to a single problem, regardless of which precise terms are chosen to state the problem in question.
Secondly, descriptive metaphysics may be theory-laden in a stronger sense: the contents of the appearances themselves may be shaped by concepts.This broadly Kantian insight can take various forms (e.g., the critique of the myth the given; the Sapir-Whorf hypothesis; as well as the positing of conceptual content; cognitive penetration; the theory-ladenness of observation; etc.), and controversies surrounding such issues persist.Descriptive metaphysics is compatible with such top-down influences.Thus, Strawson mentioned Kant, along with Aristotle (in opposition to Descartes, Leibniz and Berkeley), as prominent descriptive philosophers (Strawson, 1959, p. 9).One should here distinguish the view that the appearances are shaped by shared concepts, such as social categories, from the view that the appearances are shaped by the very theoretical conceptualizations that are introduced by the descriptive metaphysician.The former view raises no special worry.What carves the appearances, in Kant's view, are the a priori forms of sensibility and the a priori categories of the understanding, not, obviously, his critique of pure reason.This latter view, however, would pose a more serious challenge for those who want to maintain a distinction between the descriptive and the explanatory: if the descriptive metaphysician's concepts shape the contents of the appearances he sets out to describe, it could be argued that descriptive metaphysics is not so much in the business of describing appearances that are already there, but rather creates the appearances it claims to describe.Such a strong kind of theory-ladenness would indeed be incompatible with the project of descriptive metaphysics, but it would also be intrinsically problematic for the very reason adduced in favor of descriptive metaphysics.Indeed, an explanation that generates its own explanandum seems to lack an independent criterion of evaluation and, consequently, threatens to be trivially successful.
Immunity to revision
Secondly, our account might appear to rule out situations in which a description of the appearances would need to be revised on the basis of insights gained through explanation.For in enforcing a distinction between what is descriptive, on the one hand, and what is explanatory, on the other hand, and in barring any influence of explanation on description, we might seem to be committed to rendering the descriptive immune from revision on the basis of explanatory insights.We have earlier granted the possibility of revising descriptive claims, for example, in cases in which we are presented with a seeming conflict between appearances (e.g., as when extreme global poverty is both thought to have increased and decreased).It is worth considering, however, whether descriptive claims might not also be subject to revision for other reasons.
Consider, for instance, the claim: "As it appears, all instances of jade are of the same kind".On the face of it, this statement strikes us as a purely descriptive claim concerning instances of what is ordinarily referred to as "jade".At the purely descriptive level, instances of what we commonly refer to as "jade" share certain similarities with respect to their phenomenal qualities, including for example their predominantly green color.These apparent similarities at the phenomenal level might tempt us to conclude that there is a single kind of mineral, jade, to which all jade-instances belong.When we attempt to explain why jade-instances are similar to one another in these respects, however, it turns out that their superficial resemblance can in fact be traced to two quite different underlying micro-structures which come to be associated with two distinct kinds of minerals, jadeite and nephrite.Does this case present us with a situation in which the descriptive statement with which we began is subject to revision on the basis of insights that are gained only once an explanatory framework is in place?
Let us suppose that we initially accept the appearance that all jade-instances belong to the same kind as veridical, and set out to explain why all instances of what we ordinarily call "jade" are similar to one another with respect to their phenomenal qualities.In the course of looking for an explanation, we discover that, contrary to our initial inclination, it is not in fact the case that there is a single kind of mineral, jade, to which all instances of what we ordinarily call "jade" belong.This new piece of evidence does not in itself provide us with an explanation of the initial appearance: facts that are discovered in the course of looking for an explanation of a phenomenon do not necessarily themselves provide an explanation of the phenomenon in question.
Rather, the discovery that all instances of what we ordinarily call "jade" do not belong to a single kind of mineral actually contradicts the initial appearance that led us to believe that they do.
When we encounter a situation of this type, we might attribute the mismatch in question not to the appearance itself, but rather to the initial description of the appearance: if this description is read as committing us to the existence of a single kind of mineral, jade, to which all instances of what we ordinarily call "jade" are supposed to belong, then we might regard the initial description of the appearance as defective in illicitly going beyond what is strictly speaking presented to us by the appearance itself.Given this option, the appearance in question, once accurately described, only licenses the descriptive claim that all instances of what we ordinarily call "jade" share certain phenomenal qualities, including their predominantly green color, without taking the further step of positing a single homogeneous kind of mineral, jade, whose alleged unity is somehow thought to underwrite the superficial similarities that appear to us.Thus, given this reaction, a faulty descriptive claim to the effect that instances of what we ordinarily call "jade" belong to a single kind of mineral, jade, should be rejected in favor of a more accurate descriptive claim to the effect that instances of what we ordinarily call "jade" are similar in certain respects (e.g., their predominantly green color).
This response illustrates that our approach can make sense of the idea that one's attitude towards one's initial description of an appearance may need to be adjusted in light of insights that are gained only through explanatory advances.At no point in this process should we ever expect to find ourselves in a situation in which an explanans conflicts with its explanandum: for we take it to be an unassailable principle governing explanation that if a phenomenon A explains a phenomenon B, then A and B cannot contradict each other.Nevertheless, in the course of searching for an explanation, we may very well discover that it is necessary to re-appraise what we initially took the explanandum to be.
Triviality
If descriptive claims simply aim to capture the world as it appears to us, one might be inclined to think that such an inquiry could only ever lead to results that are trivial or uninteresting.For assuming that the appearances are immediately accessible to anyone who cares to look, then why, so one might wonder, is it necessary for us to embark on a lengthy descriptive investigation only then to end up with insights that were already within our reach from the very beginning?In response to this concern, we want to highlight that, despite the fact that the appearances are in principle accessible to us, a descriptive project can nevertheless lead to significant advantages arising from, first, the inherent difficulty of attaining an accurate and complete description of the appearances; secondly, the importance of correctly identifying the phenomena that an explanatory framework needs to address; and, thirdly, the usefulness of the descriptive task in potentially spotting deficiencies or tensions in a proposed explanation of the phenomena in question.
First, while we agree that the appearances are (or at least should be) accessible to a keen observer from the start, it can nevertheless be a remarkable achievement to arrive at an accurate and complete description of the appearances: for what is familiar to us can sometimes be difficult to notice, precisely because we are so accustomed to it.We find Reinach voicing a thought along these lines in the following passage: One could no longer doubt the existence of an a priori sphere if one clearly realized what a vast multitude there is of such self-evident legal rules, which, although they are nowhere formulated, are naturally and easily applied, and which we usually do not become fully aware of only because they make so much sense and are so immediately understandable.(Reinach, 1989a, 273;English translation, p. 135).
Secondly, even if a descriptive approach merely makes explicit what should have already been obvious to a keen observer from the start, the significance of this activity can nevertheless at times be considerable.To illustrate, claims such as "A city is not a duck" as well as others cited earlier in , may initially strike us as, in a sense, so obvious that it is not worth making them explicit.However, once we identify an apparent triviality that is tacitly presupposed by everyone concerned with the phenomena in question, this observation can then in turn be used as a data point against which to measure the success of a proposed explanatory framework that is directed at the phenomena in question.Thus, Massin and Tieffenbach (2017) argue on just such grounds that an account of economic exchanges which contradicts the seemingly trivial observations such as those stated in ( 14) and (15), that money can be owned but services cannot be owned, should be rejected.Despite the intuitive plausibility of this desideratum, as Massin and Tieffenbach note, an explanatory account of economic exchanges which validates this descriptive starting-point is surprisingly hard to come by.
Thirdly, the benefits of arriving at a complete and accurate description of the phenomena in question, even if such an endeavor sometimes merely consists in making explicit seemingly obvious tacit presuppositions, becomes apparent when we consider the usefulness of descriptive claims in highlighting potential ambiguities or other problems that may affect a proposed explanatory framework.Outside of social ontology, the potential usefulness of descriptive claims for steering one's explanatory outlook in the right direction can be illustrated, for example, by reference to Fine's celebrated point that essence is not reducible to necessity (Fine, 1994).This claim is, at least in part, descriptive in nature and is supported by Fine through the use of various influential counterexamples which appear to make trouble for modal conceptions of essence.Counterexamples, as we argued above, should be understood as descriptive, rather than explanatory, claims.Thus, when we consider the world as it presents itself to us in naïve terms, the fact that Socrates is necessarily the sole member of Socrates' singleton set does not strike anyone, who is not already in the grip of a particular explanatory framework, as having any prima facie relevance to Socrates' identity as a person.This much is part of simply describing how the world appears to us in naïve terms.Once it has been appreciated, however, that what is necessarily the case can apparently diverge in this way from facts concerning the essence or nature of a thing, this intuitive realization can then pave the way for an explanatory framework which resists the temptation to reduce essence to necessity.The foundational move towards a non-modal conception of essence, therefore, has its origin in a descriptive insight, which belongs to the domain of naïve metaphysics, that essence and necessity appear to part ways in the manner highlighted by Fine.
Within the domain of social ontology, the potential usefulness of descriptive claims to clear up explanatory entanglements can be illustrated by means of the account of money offered in Guala (2021).Guala proposes an explanatory framework according to which money should be seen primarily as a type of institution and only derivatively as a type of object.Following the more general account of institutions developed in Hindriks & Guala (2015), Guala takes the institution of money to be a system of rulesin-equilibrium which regulates how people engage in economic transactions.This explanatory account runs into several potential conflicts with prima facie plausible descriptive claims, which seem to privilege the conception of money-as-an-object over Guala's preferred conception of money-as-an-institution.To illustrate, according to (4), (12), and ( 15), cited earlier, we can carry money in our pockets as well as own and exchange it.Evidently, however, we cannot carry systems of rules in our pockets; nor are systems of rules themselves what we own or exchange in economic transactions.When faced with these apparent conflicts, Guala has several options.First, he might choose to endorse a revisionary approach, as he seems tempted to do, and claim that the apparent priority of money-as-an-object over money as-an-institution is illusory.A second strategy that is open to Guala, however, is to refine his descriptive ontology of money and shift the target of his explanatory account to what we might call the "monetary system", i.e., the system of rules which institute money.Given this second response, Guala can now interpret the above-mentioned descriptive claims as concerning money in the sense of what is instituted by the monetary system.Moreover, it is this latter conception of money as what is instituted by the monetary system which seems more closely aligned with the functional roles which, as Guala acknowledges, are typically ascribed to money by economists, viz., that money serves as a medium of exchange, store of value and unit of accounting (Guala, 2021, p. 3). 7This potential shift in the primary focus of Guala's explanatory framework further supports our central claim that a thorough, accurate, and unbiased description can lead to clarifications and improvements in an explanatory account of social appearances.8
Political biases
Above, we addressed the objection that descriptions should be theory-neutral, by arguing that introducing theoretical concepts and distinctions for the purposes of capturing the appearances accurately is fully compatible with, and indeed congenial to, the descriptive project.The following closely related, but distinct, objection remains yet to be addressed: Is a politically neutral description of social appearances even possible? 9f not, then it may appear to be impossible to pursue the theoretical project of attempting to describe or explain social appearances without also simultaneously engaging in the practical project of striving to preserve or improve features of the social world.For if the description of social phenomena can never be insulated from the inquirer's political commitments, then the independence we have claimed for theoretical social ontology vis-à-vis practical social ontology also might seem to be jeopardized.In the preceding remarks, we have deliberately focused on such examples as money and contractual obligations, while avoiding such cases as gender and race, which appear to be more explicitly politically loaded.This, however, should in no way be taken to imply that these latter phenomena are any less worthy of being theoretically investigated.Our intention, in proceeding in this way, was simply to facilitate the task of distinguishing the theoretical from the practical project. 10ut how, then, would the descriptive social ontologist approach other more sensitive cases, where the theoretical inquiry seems to be inextricably linked to one's practical goals?There is an important truth in this objection: political biases are indeed ubiquitous and particularly strong when it comes to sensitive issues such as those surrounding race or gender.But the objection ultimately relies on a non-sequitur.From the strength and ubiquity of political biases we may only infer that a politically unbiased descriptive social ontology is difficult to carry out, not that its achievement is impossible or undesirable.The widespread sentiment that "everything is political", if accurate, does not entail that scientists and philosophers should stop striving to curb their political prejudices when engaged in theoretical projects.Quite the contrary: the fact that confirmation biases are so commonly found in contexts in which questions of political relevance are debated, if anything, provides an even stronger reason to try to neutralize their interference.Although no doubt difficult, progress in this direction can be made in a variety of ways, including the following.First, while individual attempts at overcoming one's own political biases often fail, collective endeavours to increase neutrality by cultivating viewpoint diversity and disagreement have been shown to be both effective and achievable (Wang et al., 2019;van Veen et al., 2020).Second, keeping in mind the distinction between theoretical and practical social ontology may contribute to our ability to pursue social ontology without becoming enmeshed in a political battlefield.If the domain of theoretical inquiry is recognized to be both important on its own terms and distinct from the pursuit of practical goals, the temptation to let our political inclinations influence our descriptive and explanatory endeavours can be at least to some extent resisted.
Conclusion
In this paper, we have put forward a plea for a descriptive approach to social ontology, which has often taken a back-seat to the development of explanatory frameworks.If our account is on the right track, explanatory commitments should be at least initially bracketed if we are to arrive at an adequate non-biased description of social phenomena.To help motivate the proposed distinction between description and explanation we considered, first, an analogy from the philosophy of mind where we similarly detect two different ways of investigating mental phenomena: a descriptive approach taken by descriptive psychology (otherwise known as "phenomenology") and an explanatory approach utilized in analytic philosophy of mind which focuses primarily on the resolution of the mind/body problem.Next, we situated our proposed distinction between description and explanation in relation to other similar positions in general metaphysics as well as philosophy at large, such as those proposed, among others, by Aristotle, ordinary language philosophers following Wittgenstein, Strawson, and, more recently, Fine.
We advanced two central claims in this paper: first, that description and explanation provide us with two distinct ways of engaging with social phenomena; and, secondly, that the descriptive project ought to be prior to the explanatory project in the order of inquiry.An accurate unbiased description of the phenomena, we argued, is needed to provide an explanation not only with its target explanandum, but also with the criteria of success by which an explanation, once it has been formulated, can be evaluated.Among explanatory approaches, we distinguished between foundationalism, on the one hand, and revisionism, on the other hand: the former approach takes the appearances to be veridical and then sets out to explain their content; the latter approach takes appearances to be deceptive in some way and must then explain why these illusory appearances present themselves to us.
Disputes can arise not only between the different explanatory postures, but also from within the descriptive camp.We identified two important kinds of descriptive disagreements.The first such disagreement concerns the relative degree of richness ascribed to the content of appearances: those, on the one hand, who take a broadly Humean approach consider the content of appearances to be rather minimal, consisting typically of associations of independent elements; those, on the other hand, who take a broadly Brentanian approach ascribe to the appearances a much richer content, which might for example include relations of essence, parthood, or ontological dependence.The second kind of descriptive disagreement concerns the orthodoxy or heterodoxy of the descriptions that are given.In our view, heterodox descriptions, such as Berkeley's claim concerning the non-existence of matter, for example, should not be conflated with revisionary explanatory claims, since a disagreement about the right description of an appearance must be kept distinct from a disagreement about the veridicality of an appearance.
With these clarifications in hand, we proceeded to apply our proposed distinction between description and explanation to the social domain.In this realm as well, we argued, we have available to us a wide array of descriptive claims concerning social phenomena, such as money, cities, groups, laws, promises, obligations, and the like, which often strike us as prima facie plausible, independently of any commitments to a particular explanatory framework.Although some of these claims might initially seem to us so trivial that they are not even worth stating explicitly (e.g., "We can carry money in our pockets"), such tacit presuppositions can nevertheless supply valuable data points against which to evaluate the success of an explanatory framework.
An important pending issue concerns the question of whether social appearances should be construed as having a minimal or a rich content.One reason, we suspect, why descriptive social ontology has been rather sidelined within the analytic tradition is the tacit assumption that descriptive social claims only report empirical data and statistical correlations.We are doubtful that this thin construal in fact yields the correct conception of the content of social appearances.Rather, it strikes us as plausible to think that a proper description of the contents of social appearances will have to avail itself of a philosophically substantive apparatus, including such formal notions as essence, ontological dependence, and parthood.Just as an appeal to such formal concepts is needed to arrive at a proper description of how, for example, color, extension, location and objecthood are related to one another in visual experience, so we believe that the social world as it appears to us is similarly imbued with a rich array of connections and distinctions whose proper description will require an elaborate descriptive social ontology.A proper defense of this commitment to a rich conception of the contents of social appearances, however, will have to await a separate treatment in its own right (see Koslicki and Massin, 2023).
Our primary aim in this paper has been to argue that descriptive social ontology ought to be regarded as prior to explanatory social ontology in the order of inquiry, i.e., that we should aim to describe how the social world appears to us prior to trying to explain it.The distinction between descriptive and explanatory social ontology, we proposed, belongs to the domain of theoretical inquiry.The latter domain, in turn, should be distinguished from practical social ontology which aims at preserving or changing the social world, e.g., by repairing or improving it in certain respects.It furthermore strikes us as plausible that theoretical social ontology similarly ought to be regarded as prior to practical social ontology, in the sense that trying to discover what the social world is like would seem to be necessary for any attempt at preserving or improving it.A defense of this latter claim, however, concerning the priority of the theoretical over the practical lies outside of the scope of the present discussion. 11 Funding Open access funding provided by University of Neuchâtel.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.To view a copy of this licence, visit http://creativecommons.org/licenses/ by/4.0/. | 21,376.6 | 2023-08-18T00:00:00.000 | [
"Philosophy"
] |
Density Profiles of 51 Galaxies from Parameter-Free Inverse Models of Their Measured Rotation Curves
: Spiral galaxies and their rotation curves have key characteristics of differentially spinning objects. Oblate spheroid shapes are a consequence of spin and reasonably describe galaxies, indicating that their matter is distributed in gravitationally interacting homeoidal shells. Here, previously published equations describing differentially spinning oblate spheroids with radially varying density are applied to 51 galaxies, mostly spirals. A constant volumetric density ( ρ , kg m − 3 ) is assumed for each thin homeoid in these formulae, after Newton, which is consistent with RCs being reported simply as a function of equatorial radius r . We construct parameter-free inverse models that uniquely specify mass inside any given r , and thus directly constrain ρ vs. r solely from velocity v ( r ) and galactic aspect ratios (assumed as 1:10 for spirals when data are unavailable). Except for their innermost zones, ρ is proven to be closely proportional to r n , where the statistical average of n for all 36 spirals studied is − 1.80 ± 0.40. Our values for interior densities compare closely with independently measured baryon density in appropriate astronomical environments: for example, calculated ρ at galactic edges agrees with independently estimated ρ of intergalactic media (IGM). Our finding that central densities increase with galaxy size is consistent with behavior exhibited by diverse self-gravitating entities. Our calculated mass distributions are consistent with visible luminosity and require no non-baryonic component.
Introduction
Hundreds of astrophysical studies concern the dynamics of galaxies, which are organized assemblies of stars, gas, and dust. Circular or elliptical motions are observed not only in these immense celestial bodies, but also for objects having many other scale lengths. Such rotational motions are classified either as orbits, which are governed by the potential exterior to an object that is often dominated by the effects of external bodies, or as axial spin, which is controlled by the interior potential of the object itself. These two gravitational potentials are mathematically distinct [1][2][3], matching only on the object's surface [4]. Hence, the induced motions of spins and orbits differ, even though both are more or less circular.
Orbital motions are familiar because many problems (e.g., Keplerian orbits of planets) are accurately described by the reduced 2 body problem, where a large, central mass controls motions of its small satellites, which are assumed to move independently of each other. However, even planetary orbits in our Solar System are not perfectly co-planar, and planet-planet perturbations are well known, which led to the discovery of Neptune. The velocity of such orbital systems projects to infinity at the central point. In great contrast, velocities inside spiral galaxies trend to the null value at the center. That the motions of central regions of spiral galaxies resemble spinning records is required for the organized motions of spin [5,6].
Spin is defined as rotation of a body about a special axis where tangential velocity is zero, and that the motions are predicated on interactions of the matter assembling the body itself. These two characteristics distinguish spin from orbits. Spin is quantified by simple equations involving the moment of inertia (I) and angular velocity [7]. Geometry [8] and internal densification affect I. For a rigid sphere or spheroid with constant volumetric density (ρ), mass (M), and equatorial body radius (R), I = 2MR 2 /5. If this rigid spheroid is self-gravitating and conservative, the Virial theorem (VT) provides the tangential equatorial velocity of its spin as: 2 2 spin spin at the equatorial perimeter along the polar axis where G is the gravitational constant [9]. In contrast, orbiting bodies at some distance r from a massive central body such as the Sun are described by: Equation (2) was applied in early analyses of galaxies (e.g., [10]). While these equations are deceptively similar, M and R in Equation (1) represent the mass and radius of the object itself, whereas Min and r in (2) represent the interior mass of the system and the distance of some external object to the system's center of mass. In multi-body systems these differences are extremely important. Clearly, considering orbits (2) rather than spin (1) as describing the coherent rotational motions of the galaxy yields a much larger mass for equivalent values of r and R. The poor fit of the orbital model of Equation (2) to rotation curves (Figure 1a) follows and underlies the proposal of dark matter halos [11]. to Keplerian orbits (which have high central velocities) and to galactic RC, which show a gradation between these limiting cases. Mass components cannot be summed to produce RCs [9], as has been pursued in the Newtonian orbital model approach (e.g., [12][13][14]. (b) Nesting of similar, homogeneous homeoids to form a stratified oblate body, with axes labeled.
Spin is a key attribute of planets and stars, yet this behavior is not predicated on rigidity or high density, because this gravitational phenomenon is described by the geometric arguments of Newton and Laplace [15]. Regarding rigidity, even layers in the dense and highly viscous Earth are decoupled, as demonstrated by westward drift of Earth's outermost lithospheric plates relative to its interior [16]. Decoupling of motions between layers is referred to as differential rotation, and may also exist for Earth's core [17,18]. The precise term is differential spin.
Spin exists regardless of the state of matter, or its density, since spin also describes gas giant planets, rarified stars [19], and many atmospheric phenomena. Hurricanes spin, yet these objects are mostly composed of N2 and O2 gas with embedded water droplets, and so rigidity is not required for spin (e.g., [6]). Galactic rotation curves (RCs), which limit the dependence of velocity v to only equatorial radius r, implicitly assume zero velocity along the axis of rotation (Figure 1a). This description is consistent with spiral galaxies spinning (Figure 1b).
For spinning, self-gravitating objects, Maclaurin quantitatively showed that the balance between centrifugal and gravitational forces produces an oblate spheroid, as deduced by Newton [20]. This theoretical shape is observationally established for spinning planets and several stars and is confirmed by edge-on images of spiral galaxies at wavelengths ranging from the visible to radio ( Figure 2). Oblate shapes even describe the rarified, distant material of spirals [21,22]. [23] is modified after Carignan [24]. NGC 3034 shows near-infrared (IR) at ~2 μm (Ks band from [23]): the original image of Jarrett et al. [25] minimized the effect of dust. NCG 4594 (Sombrero) shows isophotes at 3.6 μm [23], modified after Gadotti and Sánchez-Janssen [26] where the circles are foreground stars. Grey shaded images show median isophotes in the L-band at radio wavelengths from averaging 30 spirals, using the 22 μm contour, which represents the luminous disk, as a reference [22], modified after Wigert et al. [21]. All images are publicly available from [22] or [23]. Rotation curves (RCs) of the numbered galaxies are analyzed below.
That gaseous astronomical objects hold together during spin is obviated by behavior of planetary and stellar atmospheres. The gas molecules are part of the spin of the star and cannot be treated as each having independent motions. We refer to this behavior as dynamical cohesion, and note that this condition does not require a solid body, but can apply to interacting "particles" comprising a larger, non-solid object, such as cars in congested traffic or droplets in convecting clouds [5,6,27].
Historically, oblate spheroids were recognized as being the required shape for galaxies [28,29]. For an oblate spheroid (Figure 1b), the minor (c) and major (a) axes are tied via the ellipticity (e): ( ) This definition for the oblate spheroid surface can be used to relate the axial position (z) to cylindrical radius (r) for each surface of the nested homeoids, because these have the same shape factor e: From (4), volumetric density ρ(z) depends on ρ(r), along with a and e, for oblate spheroids, which considerably simplifies analyses of RCs from spin.
Given the above, we recently investigated the outer velocities of a spinning oblate spheroid galaxy by applying the rigid body approximation while considering various internal density variations [5,6]. Because galaxies are bound systems, their energetics are specified by the Viral theorem (VT) of Clausius. Emden's [30] classical book applied the VT to dispersed nebulae and rainclouds. For density governed by a polytrope of index j, the mass interior to any given radius is: Galactic masses calculated from Equation (5) match the luminous mass for the 14 spirals with well-constrained data [5]. Applying Equation (5) Because RC data depict an average condition at an instant of time, the motions must be modeled as being steady-state, i.e., the galaxy is spinning stably. Stability requires that each homeoidal shell ( Figure 1b) has constant density. Based on this finding, Criss and Hofmeister [27] addressed differential spin, which allows probing interior motions. Their inverse solution is: This result was applied to Andromeda but is applicable to isolated galaxies of all morphological types. We made no assumptions other than Newtonian physics and conservation laws. Equation (6) is analytic and exact, has no free parameters, and allows direct and unambiguous extraction of density and mass profiles from RC.
In the present paper, Section 2 describes how spreadsheet data on rotation curves (vtangent as a function of equatorial radius) can be used to compute mass and density as a function of radius, in an approach equivalent to Equation (6). Section 3 applies our model to RC data to directly provide mass and density, each as a function of r, for 51 galaxies that range widely in size, RC pattern, and type (spiral, lenticular, irregular, spheroidal, elliptical, and polar ring), although the majority (36) explored are spirals. Although elliptical galaxies and other roundish types do not technically have RC, we include these types because Romanowsky et al. [31] and others have treated their velocity dispersions as RC. We test and verify our approach by comparing our deduced densities with independent measurements involving many astrophysical environments (Table 1). Consistent trends emerge from our analysis and provide the physics underlying empirical luminosity-velocity relations (Section 4). Neither dark matter nor non-Newtonian physics are required to explain galactic rotation curves (Section 5).
Methodology
Our inverse model is based on the VT, which stems from a mathematical identity. An important consequence for a bound, conservative state is that average linear momentum cancels in all directions over its restricted space [9]. The axial spin of galaxies arising through gravity can thus be analyzed using the VT [5,6,27]. Some details are given below. This paper calculates volumetric density.
Inverse Models
Inverse problems are fundamentally different than forward (direct) problems (Table 2). These are best understood via contrast to the latter which are much more familiar. Direct problems are solved by inserting known, or assumed, inputs, such as source characteristics, into a standard equation, formula or program, that returns a result. In contrast, many important problems in astronomy involve the opposite approach, which is to deduce the nature of a remote source from the nature of its output (e.g., [37]). The direct calculation of density profiles from observed RC data, as rendered possible by Equation (6), is a successful example of a solved inverse problem. Mathematical solutions to inverse problems are not always possible [38], which has made this approach less familiar. The importance of inverse models to astronomy is underscored by Ambartsumian's [39] determination of the proper velocity distribution of stars from observed radial velocity distribution.
Homeoid Properties and Their Consistency with Observations
Diverse astrophysical environments are less dense than vacuums produced in the laboratory (Table 1) and collisions are rare, and so friction is absent, and galaxy motions, shape, and distribution of mass are all gravity driven and linked. Hence, the nested spheroidal shells (Figure 1b) each rotate independently at some angular velocity. Direct evidence for independent rotation of galactic shells is provided by the reduction of tangential velocities above the planes of spiral galaxies [40,41], slower rotation in the bulge than in the disk at any given r in an S0-E6 type [42], and examples of counter rotation and perpendicular rotation (polar rings) in a few spiral galaxies (e.g., [43]).
As Newton discovered, ellipsoidal shells (homeoids) are equipotential [44] (p. 87) and therefore each has a characteristic density. Otherwise, forces along its surface would be unbalanced. Newton's homeoid theorem shows that a test particle inside a homeoid experiences no net force. Constant density for individual homeoids, which is associated with gravitational stability [45,46] (p. 100-101), is appropriate for several reasons. First, v(r) are angular averages that do not describe evolutionary changes. Second, spiral galaxies vary greatly in appearance (star distribution) yet their RC curves are quite similar [47], permitting classification as a very few types (e.g., [48]). Thus, only the radial dependence of the density distribution is important. Third, heterogeneities in the images which correlate with locations of luminous stars do not reveal heterogeneity in density over a larger scale. For example, clouds in Earth's atmosphere are no more dense than dry surrounding air, even though water droplets have high density.
An oblate spheroid is simply a flattened sphere that can be considered to represent a collection of nested homeoids. The self-potential of a spheroid differs from that of a sphere only through the ellipticity. Based on Maclaurin's geometrical constraints, Todhunter [20] determined that the selfgravitational potential of a spheroid ( Figure 1b) is a simple factor (k) times the self-gravitational potential of a homogenous sphere of equivalent volume, where: ( ) From Newton's homeoid theorems (e.g., [44]), rotating external homeoids are equipotential surfaces and exert no net force on matter inside, as is the case for spherical shells. Differentiating Equation (7) gives the potential for a homeoid of mass m that surrounds an interior mass Min: where the left hand side incorporates the radius of the sphere of equivalent volume for the oblate spheroid of seqv = a(1-e 2 ) 1/6 , since the equatorial radius r which varies from 0 to a is most appropriate. Differentiating the well-known formula for I of an oblate spheroid which is: ( ) Existing complicated relationships in Sersic [49] reduce to Equation (10). For reference, homeoid volume (dV) and mass (m, which actually is dm) are, respectively: Applying the Virial theorem to a series of nested coaxial homeoids results in (GMinm/r)arcsin(e)/e = ⅔mr 2 ω 2 . Rearranging terms gives the equatorial velocity as: From (12), the angular velocity of any spheroidal shell depends only on its size, ellipticity, and the mass interior to that homeoid, provided that the latter is distributed in a spheroidally symmetric manner (Figure 1b,2). This relationship, a consequence of Newton's homeoid theorem, differs only from that for a spherical assemblage of particles by a geometric factor involving the ellipticity.
Because e ranges from 0 to 1, the geometrical factor arcsin(e)/e ranges only from 1 to π/2 (~1.57). Thus, for flattened homeoids, v 2 is ~ (2 ± 0.4) GMin/r, which is not greatly different than the relationship for a spherical shell. Importantly, nested homeoids also spin about the same axis; the composite object is merely a flattened sphere. We reiterate that the shape of the surface of the homeoid shares the ellipticity with the oblate body, per Newton and Laplace.
Tie to Previous Conventions and Equations
The RC literature uses the variable "r" to depict galactic distance in the equatorial plane. The preceding equations used r for cylindrical radius and s for spherical radius where appropriate (see Figure 1b). The remainder of this report uses "r" because we are concerned with the equatorial radii of galaxies.
Calculating orbital velocities of a system of coaxial, rotating spheroidal shells of variable density is straightforward. The relevant mass is the mass of matter interior to a particular shell.
Note that n in the denominator of equation 15 of Criss and Hofmeister [27] should be replaced by 3. This typo did not affect other equations in [27]. This forward model is not being used here.
According to Marr [50], inverse models for galactic spin which input measured velocities and output mass or density had not been constructed previously. Prior to detailed measurements of RC, Brandt [51] attempted to provide analytical expressions for mass as a function of radius by inverting the formulation for oblate spheroids of Burbidge et al. [29], which describe attraction to the exterior of the oblate (r > R). Setting e = 0 (spherical symmetry) in their formula and using constant ρ provides a Keplerian orbit, thereby confirming that Burbidge et al. [29] modelled satellite orbits exterior to an oblate body, rather than describing the organized motions inside.
In exploring the case of e ~ 1, Brandt [51] arrived at Keplerian orbits in his equation number 25, which instead requires e = 0. This inconsistency resulted from his inversion including division by the factor (1-e 2 ) ½ , which is 0 in the limit of e = 1. Division by 0 occurred early in Brandt's [51] derivation, specifically in his equation number 7, which voids his inverse model for orbits about an oblate.
Such difficulties stem partially from the equations for forces and potentials exterior to an oblate body not being cast in particularly useful forms until 2018 [4]. More importantly, correct and simple equations for the gravitational potential and moment of inertia for a homeoid are presented for the first time as equations (8) and (10), above. These new equations are the basis of our inverse model.
Methodology for Inverse Modelling
For spiral galaxies lacking well constrained e, we used c/a = 0.1 for which arcsin(e)/e = 1.4871 (Table 3). These assumed values are consistent with edge-on spirals (excepting Sombrero with its large bulge, Figure 2) and with compiled data on aspect ratios [52], recognizing that tilt makes the objects appear rounder. Moreover, assuming c/a = 0.1 has little effect, since arcsin(e)/e neither varies greatly (Table 3), nor has a huge effect, as shown by a few examples in Section 3. [23]. 2 The other 7 dwarf irregulars and spheroidals were assumed to have the same c/a. 3 We used c/a = 0.1, based on the infrared contours of Figure 2 that average 30 nearly edge-on spirals [21,22] and the generally accepted view that spirals are thin and flat.
To convert reported values of angular diameter to kpc, we used the average distance in the NASA/IPAC Extragalactic Database (NED) (see Section 2.5). Next, we calculated Min in a spreadsheet from Equation (12) and then computed dM/dr. Density is obtained from a geometrical constraint on the homeoids: Results for Andromeda from (6) and (13) are indistinguishable, although the former equation provides the most direct link between density and RC data.
Criteria for Selecting Galaxies and Data Available
The database on RCs of spiral galaxies is very large, although skewed to SA and SAB classes [53]. In comparison, relatively few irregular, lenticular, elliptical and (rare) polar ring galaxies have been studied, although efforts have been directed towards LSB and dwarf spheroidal galaxies (e.g., [54]). Measurements on non-spiral types concern velocity dispersions, which have been treated as RCs to estimate masses. High resolution, recent studies were sought, particularly for the various morphologic spiral sub-types (a-d), as this sequence illustrates increasing gas content and decreasing bulge size, e.g., [14,55]. As described below, we analyzed 36 spirals and 15 other galaxy types, which encompass known morphologies, cover a wide range of galactic properties, and include both typical and unusual patterns for RC. Ellipticities are listed in Table 3. Table 4, which lists galaxy properties from [23], is divided into five subsections, which address different issues arising in analyses of RC. Each subsection lists galaxies by NCG number, if available. Types are discussed in [23]. For a consistent measure of size, we used the NED [23] tabulations of distance and of r at 25 B-magnitude arcsec −2 and denote this isophote as the visual edge, for brevity. All distances are based on models which make some assumption about absolute luminosity. For an explanation see [56] (redshift section). Table 4 consists of large and moderate sized spiral galaxies that encompass the main morphologies, have different attributes, and were measured in several studies, mainly due to their proximity (e.g., Andromeda, Triangulum, and Sombrero). In addition, the Milky Way provides a unique internal view of rotational data [57][58][59][60]. NGC 253 and 2599 have declines in v with r at large r [61]. Large Uppsala General Catalog (UGC) 2855 has data extending far beyond its visible disk [62]. Irregular NGC 3034 has lopsided rotation curves [63]. This galaxy is similar in size to NGC 4826, which counter-rotates, and to NGC 7793 [54].
The 1 st subsection of
The 2 nd subsection consists of galaxies considered by Bottema and Pestaña [64] to have highly desirable characteristics for accurate analysis. Useful selection criteria include a well-established distance, symmetric RCs that extend beyond the optical disk, inclinations between 50 and 80°, and a range of masses. We omitted 2 of the 12 galaxies from [64] because their rotation curves are similar to others in this grouping and/or have widely spaced data points. Of the 10 galaxies examined, 8 are spirals, NGC 4789a is irregular, and NGC 1560 has low surface brightness.
The 3 rd subsection lists all Messier objects studied by Sofue and collaborators using tabular data on Sofue's website [65]. Not only are various types of spirals included, but the method of data analysis is consistent, and both centers of the galaxies and outer reaches were examined and merged in a consistent fashion. Sofue et al's [53,66] measurements compare reasonably well with more recent studies [14] and have been used to evaluate models [67].
The 4 th and 5 th subsections cover galactic types other than spirals. We used high-resolution data or extended RC, if available, but for types such as ellipticals and spheroidals, velocity dispersion curves are the only data available (e.g., [68]). 2 "Edge" denotes the isophote which is at 25 B-mag arcsec −2 for almost all galaxies listed in the NED, which approximates the visible regions of the galaxy. 3 Appears to be a spiral in near-IR imaging.
Results: Mass and Density from Inverse Models of Rotation Curves
Most of the visualizations of mass and density for the individual galaxies are placed in Appendix A. Results are summarized in Table 4: Note: this broadside and after the references.
Detailed Analysis of the Milky Way
Sofue [58] compiled available data on the Milky Way to provide a RC from 0.001 kpc to the outer reaches, using the accepted value of v = 200 km s −1 near the Sun. Recent observations at greater distances [81,82] motivated revision of the outer RCs to higher velocities [59]. We combine v at low r from Sofue's [58] compilation with v at high r from his [59] compilation and use the differences in values to gauge uncertainties.
Assuming c/a = 1, 0.5, 0.1 or 0.01 causes the calculated values for Min to vary only by a factor of ~1.5 ( Figure 3a). Since e can vary only from 0 to 1, this range encompasses all reasonable values for e. Calculated Min values for flat shapes (c/a = 0.1 or 0.01) are similar at any r. Due to this finding and the behavior observed for density (discussed below), we use the nominal aspect ratio (c/a = 0.1) for most spiral galaxies unless data exist (Table 3), permitting us to concentrate on the effects of different patterns for RC. [81,82] and from globular cluster data [60]. Lower box lists aspect ratios assumed in analyzing each dataset for mass, shown on the right axis. Error bars on v from [58] are shown. Downturns in Min and ρ indicate that the "edge" of the Galaxy is gradual and between 18 and 30 kpc. In (b), density above r = 0.1 kpc was fit to a power law, as shown.
Notably, if v decreases weakly as r increases, calculated values for Min increase with r, as is required by geometry. This behavior largely arises because volume is a 3 times an ellipticity factor for a spheroid. For a strong decrease in v, the calculated values of Min decrease with r. This unrealistic behavior partially results from large uncertainties in v measured near the termination of the Milky Way, as shown in Figure 3a. If the edge were sharp, Min would increase with r up to some radius, and remain constant thereafter. Our results indicate that the galactic edge is not sharp, but grades into the IGM, which is consistent with the known existence of substantial outlying material, such as gas, globular clusters, or satellite galaxies [83,84].
Most of the mass in the Milky Way (MW) lies at r beyond the visible disk at 18 kpc. This behavior occurs because spheroid volume is a 3 times an ellipticity factor and underlies velocity profiles being flat at large r. Luminosity from visible wavelengths is therefore derived from a much smaller mass, mainly that inside the visible edge (Tables 4,5). Milky Way density (Figure 3b) was determined using extended RCs [58,59]. Calculated densities increase as c/a decreases, per Equation (13). Density follows a power law over most of the Milky Way. Density falls off rapidly from 18 to 30 kpc, consistent with the existence of an edge, although the termination of the galaxy is not particularly abrupt. Near the galactic center, density depends weakly on r (discussed further below).
Our calculated densities (Table 5) are consistent with independent measures (Table 1). Near the MW galactic center, density is like that of molecular cloud cores, whereas mid-galaxy densities are like those of typical molecular clouds. In the Solar Neighborhood, our results match the sum of the distributed star density plus the ISM density (cf. Tables 1, 5). Further out, the density grades to lower values. Out to 2000 kpc, ρ remains significantly higher than that of the cosmologically inferred density (Table 1), which is consistent with the Milky Way being part of a local group of galaxies that constitutes a concentration of matter in the universe The average density in the innermost region of the MW varies more gradually than in the outer zones (cf. Figure 4). A power-law fit presumes a singularity at r = 0, but the actual trend at the smallest values of r is flatter. The real trend is neither fit by an exponential nor any other simple function. Therefore, two extrapolations were made to estimate conditions near the center. Extrapolating the mass trend suggests 1 star of Solar mass per sphere of 1 a.u. radius. Alternatively, assuming a constant density of 100 times the highest density defined by the data provides 1 Solar mass per 200 a.u. radius. Averaging these two estimates provides a density for the galactic center similar to that of the Dispersed Solar System (Table 1).
Well-Studied and/or Illustrative Spiral Galaxies
Compiled velocity for Andromeda [59] varies with r in an "oscillatory" pattern, similar to data for the Milky Way. Likewise, mass and density (Figure 3 from [27]; Appendix Figure A1) resemble values for the Milky Way (Table 5), but better conform to a power law. An edge was not obvious in either mass or density, even though the RC extends far beyond the visible edge at 23 kpc.
Three RC datasets for Triangulum are available [14,65,66,70] and all agree ( Figure 5), providing similar values for Min. We used the smoother data sets to calculate densities. RCs barely cross the visible edge at 9.5 kpc, and thus do not define the density of its outer reaches. This galaxy is only half the size of MW and Andromeda yet has similar density from r = 1 to 10 kpc. [14] (grey crosses) were used to calculate mass (grey diamonds). RCs of Sofue [61] (black dashed line) were used to calculate mass (heavy black dotted line) and density (black dotted line). RCs of Corbelli and Salucci [70] (black points with error bars) were used to calculate mass for c/a = 1 (solid line) and c/a = 0.1 (medium dashed line) and density for c/a = 0.1 (circles and solid line). Agreement is good and the masses extracted are similar. The fit to ρ is over all r of [70]. An abrupt drop off in density is not observed. RC data are limited to the visible disk.
The next few examples further probe how details in RCs affect calculations of mass and density.
In the remaining figures, both mass and velocity are plotted against the left axis, which is linear, by scaling Min by 10 6 to 10 9 MSun, as appropriate. Density is depicted by the logarithmic right axis, which facilitates comparison with independent measures (Table 1) and allows us to extrapolate to the central regions where velocity data are not available. Sombrero ( Figure 6) has an unusual shape. Jardel et al. [71] and Kormendy and Westphal [72] both indicate internal structure near 0.5 kpc, but neither RC extends beyond the visual edge at 14.4 kpc. We use the more extended data set for density calculations. A decrease in density exists at 0.4 kpc, but given the uncertainties, ρ follows a power law, despite the zig-zag pattern for v(r). [72] used to calculate mass (light broken line) and density (dots and fit). The visual edge is beyond the measurements; (right) Counter-rotating NGC-4826. RCs from deBlok et al. [54]. Results for mass and density are close to similar sized normally rotating NGC 7793 ( Figure A3). We assumed that v = 0 at 4 kpc, midway between the nearest counter-rotating data points. The breadth of the v = 0 region affects the mass calculation, but little alters the density profile.
NGC 2599 ( Figure A2) has very high velocities near the center, but does not show the expected flat trend observed for many large spirals. Instead velocities decline further out. Nonetheless, the density profile is remarkably similar to that of Andromeda ( Figure A1) for which the RC is oscillatory but is overall fairly flat.
Several other large galaxies have visual edge densities close to that of the ISM, whereas the calculated ρ near galactic centers depends on the specific velocity profile. For example, studies of NGC 253 ( Figure A2) have RCs that significantly differ at low r [66,69]. The calculated mass and density are affected at low r, but beyond a few kpc the differences are small. Hence, the total mass of the galaxy is indicated by the magnitude of v at high r. The fits differ in detail, due to the contrasts in v at low r, but the differences are not huge. The large spiral NGC 925 has low velocity ( Figure A2) which indicates low density at its center. Moderate to strong differences in velocity for irregular NCG 3034 ( Figure A3) give similar calculated masses. The differences in v are more apparent in the densities; however, the effect is not large. Therefore, when asymmetries exist, averaging for more symmetric rotation curves should provide well-constrained mass and density.
Large UGC 2855 has an extended RC [62,66]. A drop off in density exists at high r ( Figure A2). The power-law fit is good over an extended region, but density is flatter at small r and steeper at large r. Densities are similar to those of the Milky Way and Andromeda.
Moderately large NGC 7793 ( Figure A3) has lower central density (Table 5) than the aforementioned large galaxies. The pattern resembles that of the similar sized Evil Eye ( Figure 6). Counter-rotation does not strongly affect the calculated density. Because the Virial theorem involves energies, the direction of rotation is not important to our calculations.
"Representative" Galaxies
Large galaxies (visual edge >15 kpc) among the set examined by Bottema and Pestaña [64] ( Figure A4) have similar mass and density profiles. Significantly smaller galaxies have much lower central density ( Figure A5) like Triangulum ( Figure 5). Otherwise, the profiles are similar to those of very large to moderately sized spirals. Even smaller dwarf DDO-154 has very low ρ at the center (Figure 7a). [64]. Min ratioed to 10 6 MSun Two fits to density are shown: solid = power law and dots = exponential; (b) Compact dwarf elliptical M32. RCs from Howley et al. [55]. Min ratioed to 10 6 MSun. Density above 0.6 kpc is described by a power law, but not an exponential.
Messier Galaxies, including the Virgo Cluster
The Messier examples are strongly skewed to large galaxies. Despite the variety of RCs [53,61], calculated mass and density are quite similar ( Figure A6; Table 5).
Dwarf Galaxies
Rotation curves of the dwarfs Holmberg II, WLM, and M81dwb [73,74,85] (Figure A7) provide trends similar to those of DDO 154 (Figure 7a). For two of the four dwarfs, density decreases roughly exponentially with radius. The main difference between the dwarf irregulars and the largest spirals is the centrally concentrated density of the latter.
Lenticular Galaxies
RC of lenticular galaxies (e.g., [61]) resemble those of spiral galaxies, yielding similar mass and density profiles ( Figure A9). As observed for the spirals, central density is small for small galaxies. The disk of NGC 2769 has lower density than its bulge, which is consistent with depiction of the density structure in terms of shells (Figure 1b). This galaxy is nearly elliptical.
Elliptical Galaxies
Elliptical galaxies have ambiguous orientations, precluding clear definition of their rotation curves, and so astronomers present their raw data as velocity dispersions. It must be recognized that uncertainties are large compared to spirals, and differential rotation between shells may involve different axial orientations. For the large ellipticals, the results are similar to spirals of like size ( Figure A10). Smaller ellipticals [79] show more variety. High ρ calculated at the center of NGC 221 ( Figure 7b) may or may not be typical and no other data exist for this compact type. Yet, measurements of v for NGC 221 are available for very small radii and do set a constraint on its interior (Table 5).
A Polar Ring Galaxy
Accurate RC data are available for only one polar ring galaxy, NGC4650a. Recent data of Iodice et al. [80] were analyzed (Figure 8). At small r, RCs of receding and approaching limbs of both the central discoid and the orthogonal polar discoid are all identical within uncertainty. A smooth variation of velocity across the entire object is seen, showing a common control. Velocities differ at large r between the limbs, as has been observed in some other spirals. Because data from the receding polar limb are much less certain, we do not use this in our extraction. However, the fit for all data is not much different than that excluding the receding polar limb (Figure 8a). The data are well described by v = Ar 3 -Br and resemble trends for many spirals (see Appendix). The consistent, gradual change in v implies gradual changes in Min and ρ with r. Trends in extracted mass and density (Figure 8b) are similar to other spirals but less dense, more like the small galaxies (Figure 7). Density for NGC 4650a over all r does not conform to an exponential function or a power law, but a definite outwards decline is seen, which is more pronounced towards the edge. Galactic dynamics of this polar ring galaxy differ little from that of spirals of similar size, other than the tilt of the inner discoid. Tilt angles are high, ~90 ° (NGCs 4650a and 4282) or ~73 ° (SPRC-7) [86].
Discussion
Profiles of density and mass vs. radius can be uniquely extracted from RC. These profiles are similar for all the galaxies examined, despite the wide range in shapes, sizes, and types (Table 4) and the variety of RC patterns exhibited (Figure 3-8,A1-A10). Density tends to fall off as a power law, whereas values of Min rise strongly with radius. If velocity data are available near the galactic center (e.g., the Milky Way), then the extractions show that density at low r depends more weakly on r than it does at high r. If velocity data are available at great distance, density at very high r sometimes steeply declines with radius, defining a physical "edge." In some cases, the visual and physical edges coincide (cf. Tables 4,5). Scatter existing in the calculated densities partly results from uncertainties in velocities, which can be substantial as suggested in Figure 3 and by comparing results of different studies of the same object (Figure 4-6). Despite the uncertainties, clear and consistent trends of extracted density with r exist.
Our calculations (Figures 3-8,A1-A10) show that the rotation curves are flat where the density dependence on radius approximately follows r −1.8 ( Table 5). The following subsections compare our extracted Min and ρ with several measures that are entirely independent of RC determinations.
Trends in Density with Galaxy Size and Morphology
For all galaxies examined, ρ at the visual edge does not vary much, such that the average value of 1.1×10 −21 kg m −3 matches ISM density (cf. Tables 1,5). The visual edge, being an isophote, is defined by a certain concentration of luminous matter in the galaxy. Association of the visual edge with a certain value of density (Figure 9,10a) indicates that total mass correlates with star mass. For spirals, density at the visual edge decreases with galaxy size (Figure 9a), ostensibly because larger galaxies have greater attractive power, thus producing more gradational outer reaches. For non-spiral morphologies, the trend at the visual edge is rather flat (Figure 9b).
Figure 9.
Various measures for the density of galaxies compared to their size, set to the visual edge at 25 B-magnitude arcsec −2 . (a) Spiral and similar galaxies. M82 is included since near-IR images [23] suggest it is a spiral. Highest and lowest densities are depicted, but these values depend somewhat on smallest and largest radii explored in each RC study. For reference, we include ρ measured at r = 0.1 kpc and ρ measured at the visual edge. The polar ring galaxy has lower ρ than ordinary spirals of similar size. (b) Morphologies other than spiral. Fits to ellipticals (lines) roughly also describe the lenticular and spheroidal classes and are similar to fits for spirals. Table 0. kpc. Density at 0.1 kpc strongly increases with the size of both spiral and elliptical galaxies (Figure 9ab). The slopes of the regression lines for these types are similar, despite the disparity in the number of samples (36 spirals vs. 4 ellipticals), and disparities in central density between ellipticals and spirals. The three lenticular and eight dwarf galaxies measured also fall on or near these trends. The polar-ring type is less dense, due to its shape, with considerable matter out of the plane. Notably, the smallest galaxies, the dwarf spheroidals, have densities near the crossing of the trends and show little variation in ρ with r. Interestingly, increasing tightness of the spiral arms is associated with higher ρ at 0.1 kpc (Figure 10b). This finding is consistent with spiral arms being concentrations of stars.
Density at the visual edge for different spiral morphologies (types SA, SAB and SB) follow similar trends (not shown) to those in Figure 9a. Spiral type is not important to RC patterns [47] and thus not to extracted parameters. Rings or bars seem not to have a strong effect on density. However, tilt of the interior does have an effect: the polar-ring galaxy is less dense than others of its size. Essentially, this shape has more volume than the ordinary spirals, which are closer to being 2D objects than the obviously 3D ring type with mass well out of the equatorial plane.
The increase in density in the interior (at 0.1 kpc) with galaxy size is consistent with gravitation. For some small galaxies, the density may depend exponentially on radius (Figures 8b,A3,A5,A7-A9). However, exponential trends neither fit all small galaxies, nor any of our large galaxies ( Figure A6).
Comparison of Extracted Density with Independently Known Densities
Our calculations of galactic mass and density are supported by consistent trends and confirmed by independent measures of density (Tables 1,5). Near the centers of most galaxies, density is like that of molecular cloud cores, whereas at the middle of large galaxies, densities are like those of typical giant molecular clouds (GMCs). For some large galaxies, inner densities exceed those of GMC cores. Yet, even in this inner zone the calculated densities are lower than ρ associated with evenly distributing the mass inside our Solar System (Table 1).
At the radius of the Milky Way associated with our Sun, our calculated values for ρ match the sum of the Solar Neighborhood density, calculated from the mass and distance of proximal stars, plus the ISM density (Tables 1,5). Further out in the Milky Way, the density grades to lower values, but even out to ~1500 kpc, the radius of the local group, ρ remains significantly higher than cosmological values. Our finding is consistent with the known existence of globular clusters, gas, and satellite galaxies surrounding the Milky Way, and with a large amount of material surrounding Andromeda, constituting ~30% of total mass [87]. Furthermore, at 400 kpc from centers of each of the Milky Way and Andromeda we obtain the same value for ρ of ~1.5×10 −24 kg m −3 . This equivalence is consistent with separation of their centers by 780 kpc.
We next estimate IGM density by two approaches. The local group would have a density of 1×10 −26 kg m −3 , if its total galaxy mass ( Table 5) were uniformly redistributed out to a radius of 1.6 Mpc. Alternatively, the IGM for the local group can be determined by extrapolating the trends in Figure 3 and A1. RC data on the Milky Way projected to 1.6 Mpc provide an upper limit of 1×10 −25 kg m −3 . These two estimates are compatible with independent estimates of the IGM (Table 1).
When plotted against galaxy size, the trends for ρ at 0.1 kpc and at the visual edge diverge, suggesting an average density of ~10 −21 kg m −3 for all galaxies. The trends for the lowest and highest densities calculated are scattered, but their tendency to diverge (Figure 8a) also supports this "average" density. This crude average resembles that of the ISM and Solar Neighborhood ( Table 1).
The highest central density, 10 −14 kg m −3 , was calculated for the Milky Way at 0.001 kpc and for a compact dwarf ellipsoid at 0.0003 kpc (Table 5). Yet, this value is far lower than the distributed ρ of the Solar System (Table 1). The trend for the central Milky Way (Figure 4c) suggests that cores of large galaxies may reach 1 star per cubic a.u.
Rotation curves begin too far from galactic centers to precisely constrain how central ρ depends on r. Exponential forms have been inferred from luminosity data (e.g., [88]). However, measured luminosity near the center is often more complicated than an exponential form and luminosity profiles generally end at r below where RCs are collected. The densities we calculate for some small galaxies may have an exponential dependence. Velocities at small r are needed to better quantify mass distribution near the center. Our calculations are consistent with ordinary matter being concentrated at galactic centers, but not inordinately so.
Dependence of Galactic Mass on Size
The calculated mass within the visual edge increases with galaxy size raised to the ~2.7 power (Figure 11a). A power of 3 is compatible with constant ρ, or with any trend where a common central density declines to a consistent value at the visual edge. Our result is consistent with a gradual increase in the average density of increasingly large galaxies. The dependence of ρ on r is weaker inside 0.1 kpc than suggested by the power law fits extrapolated from values at larger r. Density may be nearly constant in the innermost zones of typical galaxies, where RCs cannot be determined. The total mass of a galaxy is larger than Min at the visual edge, because the latter is an arbitrary cutoff, albeit a consistent one. The maximum mass extracted depends linearly on the radius corresponding to this mass (Figure 11b). However, the maximum mass could not be defined when the RC data were terminated close to the visual edge. Due to uncertainties in v at large r, it is difficult to ascertain where the physical edge of a galaxy is located, if indeed such exists. Spiral galaxies grade into the gas of the IGM, whereas ellipticals and some dwarf galaxies have sharper edges (Figure 3-8; A1-A10). These different behaviors are attributed to spirals having a much higher proportion of gas and dust (e.g., [89]).
Relationships of Extracted Parameters with Luminosity
Density depends on the visible luminosity in a manner that is similar to the dependence of density on size (cf. Figure 9,12). Density depends on radius to some power (Figure 3-8), whereby density is more concentrated to the centers of larger galaxies, which is consistent with known concentration of luminosity towards the center. Mass is proportional to the visible luminosity (Figure 13a). If we compare Min at the visual edge to measured Lvis, then Lvis/Medge is ~0.4 in Solar units for all types of galaxies, on average. This ratio is halved if the maximum mass is used (Figure 13a), which is consistent with most galaxy mass residing at high r and young stars being at distance, because these require gas to form. For consistency, the measured luminosity in these figures are those reported in the NED [23]. Luminosity only measures surface brightness. Therefore, galactic mass should lie above the M = L line in Figure 13a, as observed for almost all galaxies. Figure 13. Dependence of calculated parameters on measured visible luminosity of our galaxy sample. Least squares fits are shown: (a) Mass of all galaxy types. The maximum mass is affected by the termination of velocity measurements with distance. The visual edge provides a more consistent measure of galaxy size in relationship to luminosity. Power law fits give exponents very close to unity: the linear fits shown are similar. Grey line indicates a 1:1 correspondence; (b) Power (−n) for fits to density vs. radius. The least squares fit for spirals (solid curve) shows that density is more concentrated in the centers of larger galaxies. The polar-ring galaxy is less concentrated than normal spirals of similar size.
The relationship of luminosity to mass is affected by several additional factors. The distribution of star types is important, because luminosity for main sequence stars is mstar 4 (e.g., [90]). A second factor is the number of white dwarfs and other dense stars which contribute significantly to mass but negligibly to luminosity: Ledrew's [32] independent assessment of the Solar Neighborhood provides Lvis/M = 0.8, which reflects combined main sequence stars and white dwarfs. The third factor is presence of H atoms, gas, and dust, which will further reduce Lvis/M of a galaxy. The average Lvis/M = 0.4 determined here is compatible with the Solar Neighborhood value of [32] because the later compares star mass to gas mass when scattering is unimportant to the measurement, and conversely for the former. Note that overall, the elliptical, lenticular, and dwarf galaxies have higher Lvis/M than the spirals, which is consistent with the spirals having considerably more gas (e.g., [89]). The fourth factor is size (Figure 14). Size has an effect because bigger galaxies are denser, as shown above, and thus pack more stars into a smaller volume. With this in mind, M32 owes its relatively high luminosity to being compact. Table 4. Fits exclude the compact ellipsoid, M32.
Non-Baryonic or Non-luminous Matter?
In addition to the above correlations, independent measurements of the visible luminosity of the 51 galaxies and of their visual size from the NED [23] are linearly correlated ( Figure 14). This correlation neither depends on the angle of inclination nor on morphology, except that the compact ellipsoid (NGC 221) is far brighter for its size than the other 50 galaxies. Likewise, luminosity at the radio (21 cm) emission line of atomic H follows a parallel trend with the visual radius, again excepting NGC 221 (Table 5; Figure 14). Similar power laws suggest that the star mass and the H mass in a galaxy are correlated, which is consistent with luminous stars being mostly hydrogen.
The solid body approximation to spin provides mass at the visible edge that is consistent with luminosity [5,6]. Here, we deduce densities from detailed velocity data without this approximation and find that these densities are consistent with independent measures of baryonic matter. We have shown that flat RCs are expected for a rotating galaxy which becomes more dilute with extent. The flatness of the RC occurs where the decrease in density with r offsets the growth in volume with r: a large halo of dark matter is unnecessary.
The inner part of typical rotation curves features an increase in v as r increases. A linear increase in velocity indicates that density is nearly constant as r increases in this innermost zone.
The mid-point of the Milky Way has ~1 gas mass to ~4 star masses, determined by comparing measured densities to those calculated at 8 kpc. If this ratio represents the entire Milky Way, then its L/M should be about 2, if the surface brightness includes all emitted light, which is clearly not the case. Importantly, gas increases to the outside of spirals, as does mass. Thus, L/M of the whole galaxy must be reduced from that inferred from Solar Neighborhood properties and is compatible with the average L/M = 0.4 at the edge determined here.
The connections found here are compatible with independent assessments of galactic structure by Disney et al. [91] who showed that one parameter governs galactic structure. This key parameter is baryonic mass, which is directly connected with galaxy size (Figure 11).
Conclusions
Evaluation of galactic rotation curves is greatly simplified by the combined use of Newton's homeoid theorems, Clausius's Virial theorem, and Maclaurin's gravitational self-potentials. These powerful theorems can be applied to galaxies which possess the expected spheroidal shape of a spinning self-gravitating body (Figure 1b). Existing orbital models include errors in mathematical physics [6] and cannot explain the rotational curves for galaxies based on detectable matter.
Previous efforts are forward models of RC, which presume the cause, i.e., the density structure for each of multiple shapes is specified. Rotation curves are then calculated from the resulting multicomponent mass distributions (e.g., [12][13][14]41,54,55,64,68,70,74,86). In contrast, the present paper applies the inverse model of [27] for differentially spinning, variable density oblate spheroids to 51 different galaxies having detailed RC data. Our simple and exact equations for the homeoid allow density and mass distributions of individual galaxies to be directly and uniquely calculated from their measured RC and aspect ratio. Neither fitting parameters nor deconvolution are required. Groetsch [38] clarifies the differences between forward and inverse models and provides many examples of the latter.
Calculated densities are almost independent of galaxy type. Calculated densities in the inner parts of galaxies approximate those in GMC cores, whereas those closer to the middle zones approximate those of average GMCs (cf. Tables 1 and 5). Densities calculated from RCs for the Milky Way near the Sun's position are consistent with the independently determined distributed density of proximal stars. Slightly further out, extracted densities match ISM determinations, which is reasonable. In the outermost zones, density grades into, but is higher than, previous estimates of IGM values. We calculate IGM density for the local group as >10 −26 kg m −3 . Almost all galaxies are described by density depending on radius to a power n. The statistical average for the powers of the 36 individual spirals is n = −1.80 ± 0.40. The 15 galaxies with other morphologies have similar average of n = −1.63 ± 0.63. Including the polar ring galaxy in the spiral category negligibly affects these values.
Importantly, our results explain the empirically determined dependence of luminosity on velocity for each of the spiral and elliptical types (the Tully-Fisher and Faber-Jackson relationships). Our extracted masses are consistent with measured luminosity, given the presence of significant amounts of gas and some scattering. Our luminosity to mass ratios are consistent with the independent assessment of the Solar Neighborhood by Ledrew [32].
Our results for ellipticals and spirals differ only in detail, despite velocity dispersions being relevant in the first case and actual rotation curves in the other. Without friction, homeoids can rotate independently, as exemplified by counter-rotating spirals (e.g., NGC 4826) and polar-ring galaxies (e.g., NGC 4650a). For the latter case, the outer, less-dense region is tilted with respect to the inner, dense region, yet the RC curve is continuous, consistent with self-gravitation. Unlike flat spirals which must have co-axially spinning homeoids, in ellipticals, the rotation axes of homeoids can point in various directions, producing velocity dispersions.
We have shown that rotation curves of 51 galaxies covering diverse types and sizes can be explained by Newtonian physics without invoking non-baryonic matter.
Author Contributions: Both authors contributed more or less equally to all aspects and components of this research. Both authors have read and agreed to the published version of the manuscript.
Funding: This research was partially funded by the National Aeronautics and Space Administration, grant number NNX13AB32A-3971105. dotted lines. Results of fitting are listed. For size comparisons, we use the visual edge at 25 Bmagnitude arcsec -2 provided by the NED [23] and illustrate this with an arrow. [66] and derived M and ρ are colored grey. RCs from Hlavacek-Larrondo et al. [69] are black curves. Although velocity data differ substantially, similar mass and density are obtained near the visible edge. Differences at small r affect the fits to ρ; NCG 925 has low velocities which produce low density; NGC 2599 RCs from Noordemeer et al. [61]. This spiral has both high v and a strong decline in v with radius. The increase in v at low r was assumed; UCG 2855 RCs below 13.7 kpc from Sofue et al. [66], and RCs above 13.7 kpc from Roelfsema and Allen [62]. The merge produces unrealistically high density over a short interval but does not seem to affect the fit. A drop off in density exists at high r, whereas the trend is flatter than the fit near the galaxy center. Figure A3. Comparison of moderate-sized galaxies: NGC 3034 is the irregular M82 galaxy. RCs from Greco et al. [63]. An abrupt drop off in density is not observed, because RC data are limited to the visible galaxy. NGC 7793 RCs from deBlok et al. [54] have fairly low velocity, and lower density. [53,66]. These large and luminous galaxies have a variety of RC patterns. Tabular data were downloaded from [65]. [74]. Grey diamonds = RCs from Bureau and Carnigan [85], who state their velocities are not accurate beyond 10 kpc, and so this dataset was not used in the calculations. Dotted line = calculated mass. Circles = calculated density. Two different fits are shown as solid line and widely spaced dotted curve; WLM is unusual due to its isolation RCs from Leaman et al. [73]; M81dWb RCs from Oh et al. [74]. Figure A8. Dwarf spheroidal galaxies. RCs presented by Salucci et al. [75] were digitized. Original data from Walker et al. [76.77] and Mateo et al. [78]. Fornax is larger, more like the dwarf irregulars in Figure A7. For Draco, we compare extractions from the published data with those from the same data, but smoothed, and find little effect on the results. For Leo I, we compare extractions from the published data with those for an interpolated data set. Interpolating at r below the lowest measurement is equivocal and the choice of v affects interior density. Figure A9. Lenticular galaxies. RCs of UGC 3993 and NGC 7786 from Noordemeer et al. [61]. RCs of NGC 2768 from Forbes et al. [42]. Figure A10. Elliptical galaxies. RCs of nearly spherical NGC 2434 from Rix et al. [68]. RCs of dwarf NGC 4431 from Toluba et al. [79]. RCs of NCG 3379 from Romanowsky et al. [31]. | 13,066.2 | 2020-02-26T00:00:00.000 | [
"Physics"
] |
The Effects of Cetirizine on P-glycoprotein Expression and Function In vitro and In situ
Purpose: P-glycoprotein (P-gp) plays a major role in oral absorption of drugs. Induction or inhibition of P-gp by drugs contributes to variability of its transport activity and often results in clinically relevant drug-drug interactions. The purpose of this study was to investigate the effect of cetirizine, a second generation H1 antihistamine, on P-gp function and expression in vitro and in situ. Methods: The in-vitro rhodamin-123 (Rho123) efflux assay in Caco-2 cells was used to study the effect of cetirizine on P-gp function. Western blot analysis was used for surveying the effect of cetirizine on expression of P-gp in Caco-2 cells. Rat in situ single-pass intestinal permeability technique was used to calculate the intestinal permeability of a known P-gp substrate (digoxin) in the presence of cetirizine. The amounts of digoxin and cetirizine in intestinal perfusion samples were analyzed using a HPLC method. Results: The results showed significant increase in Rho123 uptake (P < 0.05) and also P-gp band intensity decrease in cetirizine-treated cells in vitro. Furthermore the intestinal permeability of digoxin was also increased significantly in the presence of cetirizine (P < 0.01). Conclusion: Therefore it is concluded that cetirizine is a P-gp inhibitor and this should be considered in co administration of cetrizine with other P-gp substrate drugs. Further investigations are required to confirm our results and to determine the mechanism underlying P-gp inhibition by cetirizine.
Introduction
Currently, most drugs used in clinical practice are administered orally and must be adequately and consistently absorbed to achieve successful therapy.Appropriate regulation of drug delivery to the body, including intestinal absorption, hepatic metabolism, renal excretion, and transport into the tissues, is important to ensure suitable therapeutic efficacy.Maintaining appropriate serum concentration of drugs is also necessary to exert their beneficial effects and to prevent unexpected adverse effects, especially for drugs with narrow therapeutic index.Furthermore, the increasing potential of clinical drug-drug, herb-drug, and food-drug interactions resulted from developing new drugs and increasing wide-use of drugs and herbal drugs have been noticed.In addition to other mechanisms, membrane transporters are the most important factors involved in the mentioned interactions and they are major barriers limiting oral drug delivery. 1,2glycoprotein (P-gp) is the most important membrane transporter which is responsible for secreting (active efflux) passively diffused drugs and xenobiotics out of the cell.2,3 P-gp is an ATP binding cassette protein and plays a major role in drug absorption and distribution due to its abundant expression on several barrier epithelia (including the epithelial cells of intestine, kidney, liver, brain, testis, adrenal gland, placenta, and bile canalicula) and its broad substrate specificity.3 Some drugs, foods, and chemicals may be substrates of P-gp.The transport activity of P-gp can be induced or inhibited in vivo by a variety of drugs with different structures, such as verapamil, rifampicin, erythromycin, ketoconazole, and cyclosporine 4,5 which may affect the absorption of drugs themselves and the concomitantly used drugs.Induction (enhancing P-gp activity) or inhibition (impairing P-gpmediated efflux) of P-gp by drugs or other xenobiotics contributes to variability of its transport activity and often results in clinically relevant interactions.Therefore, P-gp-related interactions have important clinical impacts and it is critical to understand which drugs are inducers or inhibitors of P-gp to minimize or avoid adverse interactions.3,6 Cetirizine, a member of the second generation H 1 antihistamines, is chemically known as (RS)-2-[2-[4-(4chlorophenyl) phenylmethyl] piperazin-1-yl] ethoxy] acetic acid dihydrochloride (Figure 1).Cetirizine is a piperazine derivative and is marketed as a racemic mixture containing both levocetirizine and dextrocetirizine. It s a long-acting non-sedative antihistamine and an antagonist of H 1 -receptor.Cetirizine di-hydrochloride is used for symptomatic treatment of allergic conditions including seasonal allergic rhinitis and chronic urticarial.[7][8][9] Figure 1. Cemical structure of cetirizine.10 Therefore the purpose of this study was to investigate the effect of cetirizine treatment on the P-gp function and its expression both in vitro and in situ.In addition, the potential role of digoxin, a typical substrate of P-gp, in intestinal effective permeability (P eff ) of cetirizine was also studied.
Cell Culture
Caco-2 cells were obtained from national cell bank of Iran (Pasteur Institute, Tehran, Iran) and were routinely cultured in Dulbecco's modified Eagle's medium (DMEM) containing 10% fetal bovine serum, 10,000 I.U./mLPenicillin, and 10,000 μg/mL Streptomycin.Cells were maintained in an incubator at 37°C with a humidified atmosphere of 5% CO 2 in a CO 2 incubator (Memmert, Schwabach, Germany).The culture medium was changed 2-3 times per week.After 2-3 weeks culture and differentiation of Caco-2 into intestinal-like cells, the cells were detached from the culture flask by addition of 0.25% trypsin-EDTA.Cells were then seeded at the needed density for performing the following tests.
Cytotoxicity study (MTT assay)
The MTT assay was used to determine the cytotoxicity of 0.2, 10, and 100 µM cetirizine.The Caco-2 cells were seeded into 96-well plates at a density of 15 × 10 3 cells per well.After 24 h, the medium was replaced with 200 µL per well of cetirizine different concentrations.After 24 h of incubation, MTT solution (2 mg/mL final concentration) was added to each well and incubated for 4 h at 37 °C in a CO 2 incubator.The MTT solution was removed and the resulting formazan crystals were solubilized with 200 µL/well of dimethyl-sulfoxide (DMSO) and 25 µL/well Sorensen buffer.The optical densities (ODs) were measured with an enzyme linked immune-sorbent assay (ELISA) plate reader (Statfax-2100, Awareness, Palm City, FL, USA) at 570 nm with background subtraction at 630 nm.The following formula was used for calculating percentage of cell viability. (1) The mean ± standard deviation (SD) was calculated for each concentration and analyzed statistically.
Rho123 efflux assay
P-gp function study in Caco-2 cells was realized by the measurement of intracellular accumulation of Rho123, a well-known P-gp substrate, which is inversely proportional to P-gp activity.Caco-2 cells were seeded in 24-well plates and allowed to attach for 24 h.Then, the old medium was removed and cells were washed with PBS.New culture media containing 100 µM cetirizine and 300 µM verapamil, as P-gp inhibitor, were added to separate wells and left for another 24 h.After that, the old medium was removed and cells were washed three times with PBS.Rho123 solution [DMEM containing 10 mM HEPES (pH = 7.4) and 5 μM Rho123] was added and incubated for 3 h at 37°C, followed by three washes with ice-cold PBS (pH = 7.4).Cells were lysed in 1 %
| 113
Effects of cetirizine on P-gp Advanced Pharmaceutical Bulletin, 2016, 6(1), 111-118 Triton X-100 and centrifuged at 1500 g for 10 min (3-18k, Sigma).Supernatant was used to measure the Rh123 and the total protein contents.[13] Western blot Caco-2 cells were seeded to a 6 well-plate in the density of 10 6 cells per well and treated for 48 h with the culture medium (as control), culture medium containing 0.2, 10, and 100 µM cetirizine, and 300 µM verapamil, as P-gp inhibitor.Then, solutions were removed and cells were washed with PBS, and incubated for 5 min with trypsin/EDTA 0.25 % in 37 °C.The detached cells were washed twice with PBS.Lysis buffer (Triton X-100 50 mM, NaCl 150 mM, EDTA 5 mM, 1 % protease inhibitor cocktail, tris-HCl, pH=7.4( was added and cell suspension was centrifuged at 1500 g for 5 min.Proteins were separated through sodium dodecyl sulfatepolyacrylamide 12.5 % running gel (SDS-PAGE) electrophoresis (80 V, 120 min).The gel was electro blotted to a poly-vinylidene di-fluoride (PVDF) membrane using a semi-dry western blotting system (Bio-Rad, Hercules, CA, USA).All membranes were blocked for nonspecific binding by incubation in 3% nonfat-dried milk for 1 h at room temperature and the membrane was washed 3 times with Tris buffered saline (TBS) with 0.1% Tween 20.Then, the membranes were incubated overnight with mouse monoclonal anti-P-gp antibody (1/1000 in TBS).After washing with TBS, the membrane was incubated with HRP-conjugated rabbit anti-mouse secondary antibody for 2 h.Membrane was washed and the proteins were then detected using an ECL kit.The proteins were visualized by exposing the membrane to a medical x-ray film (Fuji, Japan) for 5 min in a dark room and scanned using a bio-imaging analyzer (Bio-Rad, USA).β-actin was the internal standard, and was detected using rabbit polyclonal anti-β-actin as primary antibody and HRP-conjugated goat anti-rabbit IgG as secondary antibody.P-gp expression is presented as the ratio of P-gp band intensity to β -actin band intensity in the same blot run (P-gp/ β -actin). 14
Single-pass intestinal perfusion (SPIP)
The animals were anesthetized with an intra-peritoneal injection of thiopantal sodium (60 mg/kg).A midline abdominal incision of 3-4 cm was made and approximately 10 cm of the jejunum segment of the intestine was isolated and cannulated at both ends with polypropylene tubes.The exposed segment was kept moist with body tempered saline.At first, the segment was rinsed with 37℃ normal saline to wash and clear the segment, then PBS (pH = 7.4) containing 50 mg/l phenol red without drug (blank solution) was pumped through the segment at a constant flow rate of 0.2 ml/min (Q in ) through a volumetric infusion pump (Argus Medical AG, Switzerland).Blank perfused solution was collected at the outlet and used to prepare cetirizine and digoxin calibrator solutions and also for stability studies. 15,16fter reaching steady state, perfusates were quantitatively collected for every 10 min (2ml) lasting 90 min (9 samples).Water absorption and secretion and other changes during the perfusion may cause errors in the calculated permeability values, phenol red, therefore, was added at a concentration of 50 mg/L as a nonabsorbable marker to correct the results.Digoxin (20 µM in blank solution), Verapamil as a typical P-gp inhibitor (in blank solution with digoxin 20 µM), and different concentrations of cetirizine (0.2, 10, and 100 µM in blank solution with digoxin 20 µM/L) were administrated as described above (n = 3) for each drug and each concentration). 15,16At the end of procedure, the length of the segment was measured (cm) and the animal was euthanized.Samples were stored at -70°C (ultra-low temperature freezer, Jal Tajhiz Production, Karaj, Iran), until analysis.The concentrations of phenol red in perfused (outlet) samples were measured at 560 nm using an UV-VIS spectrophotometer (Ultra-spec 2000, Pharmacia, Pfizer, New York, NY, USA). 17Digoxin and cetirizine amount in outlet samples were detected by HPLC method.
HPLC analysis of digoxin and cetirizine in intestinal perfused samples
The mobile phase for digoxin and cetirizine analysis was 35% (v/v) acetonitrile in water which was filtered through sintered glass filter P5 (1.0-1.6 μ, Winteg, Germany) and degassed in a sonicator.The mobile phase was pumped in isocratic mode at a flow rate of 2 ml/min at ambient temperature.UV detection was accomplished at 218 nm and samples of 20 μl were injected using Hamilton injector syringe (Hamilton, Switzerland) onto the column (Knauer-15VE081ESJ -150X4.6 mm with precolumn-Eurospher 100-5 C8, Berlin-Germany). 15The high performance liquid chromatography (HPLC) system was composed of Smartline manager 5000, Smartline UV detector 2600, and Smartline pump 1000 (Knauer Advanced Scientific Instruments, Berlin, Germany).Figure 2 shows a representative HPLC chromatogram of analyzed samples.Data analysis P eff values were calculated from the steady-state concentrations of compounds in the perfusate collected from the outlet tubing.6][17] Corrected C out (outlet concentration of the drug) was calculated from the following equation. 18) Where C out (corr) is corrected outlet concentration of the drug, C out is outlet concentration of the drug, CPR in is concentration of phenol red entering the intestinal segment and CPR out is concentration of phenol red leaving the intestinal segment.Calculations were based on outlet perfusate steady state concentrations achieved after the selected time points.The steadystate intestinal P eff was calculated according to following equation: (3) P eff represents effective permeability (cm/s), Q in represents the perfusion rate (0.2 ml/min), C in and C out represent the concentrations of the test drug entering and leaving the segment respectively, r is the radius of the intestinal segment (0.18 cm), and l is the length of the intestinal segment (cm).
Statistical analysis
Data were presented as the mean ±SD.Statistical analyses were carried out using one-way analysis of variance (ANOVA) followed by Tukey's multiple comparison test and the differences between two groups were determined using the unpaired t-test.Statistical tests were performed with SPSS13.0 (SPSS Inc., Chicago, IL, USA), where p < 0.05 and p < 0.01 were considered to be statistically significant.
Results
Preliminary experiments showed that no considerable adsorption of compounds on the tubing and syringe took place and the stock and working standard solutions of cetirizine in water was stable during the experiments. 9
Rho123 efflux assay
To assess the impact of cetirizine treatment on the function of P-gp as an efflux pump in vitro, we examined the P-gp-mediated Rho123 transport in Caco-2 cells treated with cetirizine using verapamil as a P-gp inhibitor.As shown in Figure 4, 100 µM cetirizine significantly increased the intracellular accumulation of Rho123 in Caco-2 cells (p < 0.05).The mean intracellular concentration of Rho123 in control cells was 50.2 ± 6.0, while in cetirizine and verapamil treated cells were 88.8 ± 2.3 and 420.6 ± 25.4 pg/mg protein, respectively.
In situ single-pass intestinal permeability study
To further confirming the inhibitory effect of cetirizine on P-gp activity, the in situ experiments were conducted.For this purpose digoxin intestinal permeability, as a typical P-gp substrate, in jejunal segment of rats was determined.As shown in Figure 5, 0.2 µM cetirizine did not significantly increase the P eff of digoxin relative to control group (digoxin alone) (p > 0.05), however the difference reached to significance level at higher concentrations (10 and 100 µM, p < 0.01).The P eff values of digoxin (20 µM) in the absence and presence of verapamil, as a typical inhibitor, were 3.4 ± 0.8 and 8.9 ± 0.7 ×10 -5 cm/s, respectively.Whereas, the P eff values of digoxin in the presence of 0.2, 10, and 100 µM cetirizine were found to be 4.4 ± 0.4, 6.8 ± 0.4, and 8.7 ± 1.0 ×10 - 5 cm/s, respectively.The concentration of cetirizine was also determined in intestinal perfused samples, and P eff values of 10 and 100 µM cetirizine were also calculated.The results, illustrated in Figure 6, showed that by increasing the concentration of cetirizine from 10 to 100 µM, the P eff value decreased from 6.7 ± 0.7 to 3.4 ± 0.4 ×10 -5 cm/s.The difference between two groups was statistically significant (p = 0.02).
Western Blotting
Immuno-blotting of Caco-2 cells were carried out using monoclonal antibody against P-gp, to further investigate the inhibitory effect of cetirizine treatment on P-gp activity after 48 h treating.The beta-actin protein expression was considered as internal immuno-blotting control.P-gp expression was presented as the ratio of Pgp band intensity to β -actin band intensity (P-gp/ βactin) and were compared with verapamil and control bands in the same blot run. 14Low density of immunoblot bands of cetirizine treated cells compared with those of untreated cells (control), as shown in Figure 7, demonstrated low expression of P-gp.
Discussion
In vitro and in situ results of the present study showed Pgp inhibition potential of cetirizine.The mean intracellular concentration of Rho123 in 100 µM cetirizine treated Caco-2 cells was 1.8 times the control cells and was statistically significant (p < 0.05) while it was 8.4 times in verapamil treated cells (p < 0.01).In SPIP study, P eff of digoxin was increased to 1.3, 2, and 2.6 times the control in the presence of 0.2, 10, and 100 µM cetirizine, respectively.Although, difference was not significant for 0.2 µM concentrations, it reached to a high significant level at 10 and 100 µM cetirizine (p < 0.01).The upward trend of P eff of digoxin showed a dose-dependent P-gp inhibition effect of cetirizine, at least in the range of 0.2-100 µM concentrations.In our study, low density of P-gp immune-blot bands of cetirizine treated Caco-2 cells relative to non-treated cells showed down-regulation of P-gp expression by cetirizine.P-gp inhibitors, as demonstrated by photo-affinity labeling, may exert their inhibitory effect by one of the following three mechanisms: (1) altering P-gp expression; (2) Disrupting the ATP hydrolysis; and (3) competition for a binding site or reversible inhibition. 19t might be speculated that the P-gp inhibitory mechanisms of cetirizine are both altering P-gp expression and competition for P-gp binding sites.Western blotting results of present study confirm the idea.On the other hand, previous studies demonstrated P-gp substrate role of the cetirizine in mice blood-brain barrier (BBB), 20 1b (mdr1a/b) knockout mice, 21 human MDR1-transfected Madin-Darby canine kidney cells, 21 and human Caco-2 cells 22 using monolayer efflux assay 20 and other in vitro and in situ assays.To the best of our knowledge, both digoxin and cetirizine are substrates of P-gp and competition for binding sites of P-gp molecules by them is suggested.As shown in Figure 5, by increasing the concentration of cetirizine from 0.2 to 100 µM, digoxin intestinal P eff raised from 4.4 ± 0.4 to 8.7 ± 1.0 ×10 -5 cm/s, but P eff of cetirizine decreased from 6.7 ± 0.7 to 3.4 ± 0.4 ×10 -5 cm/s; this may confirm the competition of two drugs for P-gp binding sites.From the literature review, the physicochemical properties of cetirizine may help to suggest the P-gp inhibition effect of cetirizine.Ekins et al. focused on identifying functional groups of P-gp inhibitors active molecules in their study.The study was carried out with 27 inhibitors of digoxin transport in Caco-2 cells in vitro.They found that two hydrophobic groups along with a hydrogen-bond acceptor group and an aromatic core were required for P-gp inhibition. 23The chemical structure of cetirizine was shown in Figure 1 (C 21 H 27 Cl 3 N 2 O 3 ) and it has one hydrogen bond donor, 5 hydrogen bond acceptors, and 8 rotatable bonds.Log octanol/water partition coefficients, log P and log D, of cetirizine are 4.48 and 1.04 (pH = 7.4), respectively.Polli et al. studied the influence of P-gp on the brain concentrations of cetirizine and hydroxyzine.They concluded that P-gp influences the brain concentration of cetirizine, as a P-gp substrate, but not hydroxyzine.They investigated and compared the physicochemical properties of cetirizine and hydroxyzine and suggested that the difference in the substrate activity is related to the log D oct (pH = 7.4) values.The log D oct , which is a measure of lipophilicity at pH = 7.4, for cetirizine was 1.04 compared with a value of 2.87 for hydroxyzine. 20he carboxylic group of cetirizine can interact with the basic nitrogen via folded conformers due to the molecular structure of cetirizine.This can cause relatively high lipophilicity at physiological pH.Oral bioavailability of cetirizine is more than 70% and shows high intestinal absorption in human. 7,24-gp activity undoubtedly plays an important role in limiting intestinal permeability of BCS class II-IV P-gp substrates (not for BCS class I compounds).The high Pgp expression levels in the intestine make the moderately absorbed P-gp substrates more susceptible to P-gpmediated efflux.Inhibition of P-gp by inhibitors, like cetirizine, can move compounds of BCS class III and IV (for example, digoxin) to class I and II, respectively, by significantly increasing the permeability.25 This may result in unexpected toxic effects of administrated drug(s).The unwanted drug-drug interactions may be reduced by considering the P-gp inhibitory effect or substrate activity of the concomitantly used drugs.
Conclusion
In conclusion, our results demonstrated that cetirizine treatment could down-regulate the function and expression of P-glycoprotein in vitro and in situ and its inhibition activity is dose-dependent.The P-gp inhibitory effect of cetirizine must be considered for predicting potential drug-drug interactions when the drug is administered concurrently with P-gp substrate drugs.However further investigations with larger sample sizes are required to approve the obtained results and also find out the exact P-gp inhibitory mechanisms of cetirizine.
Figure 3 .
Figure 3. Effects of different concentrations of cetirizine on Caco-2 cells viability.Bars show mean ± SD of at least 3 measurements.* p < 0.05 was considered as significance level.
Figure 4 .
Figure 4. Effects of cetirizine and verapamil treatments on the intracellular accumulation of Rho123 in Caco-2 cells.Bars show mean ± SD of at least 4 measurements.* p < 0.05 was considered as significance level.
Figure 5 .
Figure 5. Effects of verapamil and 0.2, 10, and 100 µM cetirizine on the effective permeability (Peff) of digoxin.Bars show mean ± SD of at least three measurements.* p < 0.05 was considered as significance level.
Figure 6 .
Figure 6.The effective permeability values of 10 and 100 µM cetirizine in the presence of 20 µM digoxin (n = 3).Bars show mean ± SD of at least three measurements.* p < 0.05 was considered as significance level.
Figure 7 .
Figure 7.Western blot of P-gp and βactin in 100 µM cetirizine and 300 µM verapamil treated, and control (non-treated) Caco-2 cells (bands of 2 samples of each are shown).
| 117
Effects of cetirizine on P-gp Advanced Pharmaceutical Bulletin, 2016, 6(1), 111-118 of Tabriz University of Medical Sciences (Grant No. 91-123) and is a part of the PhD thesis of Dr. Mehran Mesgari Abbasi. | 4,831.4 | 2016-03-01T00:00:00.000 | [
"Chemistry",
"Medicine"
] |
A GPU-Accelerated Radiation Transfer Model Using the Lattice Boltzmann Method
: A prototype of a three-dimensional (3-D) radiation model is developed using the lattice Boltzmann method (LBM) and implemented on a graphical processing unit (GPU) to accelerate the model’s computational speed. This radiative transfer-lattice Boltzmann model (RT-LBM) results from a discretization of the radiative transfer equation in time, space, and solid angle. The collision and streaming computation algorithm, widely used in LBM for fluid flow modeling, is applied to speed up the RT-LBM computation on the GPU platform. The isotropic scattering is assumed in this study. The accuracy is evaluated using Monte Carlo method (MCM) simulations, showing RT-LBM is quite accurate when typical atmospheric coefficients of scattering and absorption are used. RT-LBM runs about 10 times faster than the MCM in a same CPU. When implemented on a NVidia Tesla V100 GPU in simulation with a large number of computation grid points, for example, RT-LBM runs ~120 times faster than running on a single CPU. The test results indicate RT-LBM is an accurate and fast model and is viable for simulating radiative transfer in the atmosphere with ranges for the isotropic atmosphere radiative parameters of albedo scattering (0.1~0.9) and optical depth (0.1~12).
Introduction
Radiative energy transfer plays an important role in many areas of science and technology.Accurate modeling of incoming solar radiation and outgoing infrared radiation from the Earth's surface and their interactions with atmospheric constituents has been among the most important tasks for the atmospheric sciences and remote sensing of environments.Compared with cloud-free and large-scale atmospherics, in which radiative transfer is usually treated one-dimensionally by assuming horizontal homogeneity, radiation transfer near the ground is a complex three-dimensional (3-D) phenomenon due to interactions with the Earth's surface features, such as terrains, buildings, soil, and vegetation.The mathematical description of electromagnetic radiation propagation is the radiative transfer equation (RTE) with associated emission, absorption, and scattering parameters and complex boundary conditions.Analytical solutions for real atmospheric conditions are not available.Due to the complexity and difficulties in solving the RTE, many numerical methods, such as the Monte Carlo method (MCM) [1], finite volume method (FVM) [2], and discrete ordinates method (DOM) [3], to name a few, have been developed.Proper choice of numerical method is dependent on the problem parameters and boundary conditions.The MCM is considered as a versatile and accurate method that can handle complex situations and is free from numerical errors such as ray effects, numerical smearing, and false scattering [1,3,4].Therefore, the MCM is often used as a benchmark tool for radiative transfer modeling.The drawback with the MCM is it requires a very large number of photons to be released to avoid statistical noise; therefore, it is computationally expensive.FVM and DOM have the advantages of ease and efficiency to set up the radiative transfer problem but have the problems of ray effects and slow convergence in small optical thickness situations.All the above numerical methods for solving the RTE demand a great deal of computation power for a large computational domain.Moreover, variations of scattering and absorption coefficients depend on atmosphere constituents in different wavelengths, which require multiple computations for different wavelengths, thus making the computation load even more formidable.
In recent years, a novel and powerful method, the lattice Boltzmann method (LBM), has been developed for solving the RTE.One of the motives for using the LBM in solving the RTE is to improve the computation speed of radiative transfer modeling.The LBM was first discovered and developed in the fluid mechanics community [5][6][7][8][9][10][11] and has become one of the most effective methods for fluid flow and heat transfer simulations.The LBM is based on the kinetic theory of statistical mechanics and solves the Boltzmann equation that governs the probability distribution of fluid particles.The LBM solves the Boltzmann equation for a particle at each grid point by performing collision and propagation calculations of the particle's probability distribution function (PDF) over a discrete and symmetric lattice mesh with certain fixed directions.The macroscopic variables, such as fluid density and velocity, are computed from the statistical moments of the particle PDF.The major advantages of the LBM in fluid modeling are its intrinsic parallelism and the ease with which it treats complex boundary conditions.Flow modeling using the LBM is as accurate as FV or finite elements (FE) methods, and has a computation speed several hundred times faster than FV and FE methods.
It is natural to explore the LBM as a computational method to accelerate the computation speed in solving the RTE because radiative transfer is one of most computationally intensive tasks in atmospheric modeling.The LBM can be considered a direct discretization of the Boltzmann equation [7].The earliest work of solving the RTE using the LBM by Geist et al. [12] was to investigate lighting in computer graphics with angular discretization with 19 directions and using a graphics processing unit (GPU) for computation [13].Because of the similarity between the RTE and Boltzmann equation [14], earlier research and development of the LBM for the RTE started with direct discretization of the RTE with respect of space, time, and angular direction, which is a valid approach [7,14].Asinari et al. and Mishra et al. [15,16] developed a two-dimensional (2-D) LBM for radiative transfer modeling in a particular medium.Ma et al. [17] derived an LBM used for a one-dimensional (1-D) radiation problem that compared well with an analytical solution.Bindra and Patil [18] and McCulloch and Bindra [19] also developed a 2-D LBM for the RTE for a simulation of the conjugated radiative and convective heat transfer problem.A general review on modeling neutron and photon transport using LBM is provided in [20].The LBM was also used in a non-equilibrium radiation transfer problem [21].Zhang et al. [22] and Yi et al. [23] derived a 2-D LBM using the Chapman-Enskog expansion for a steady-state radiative transfer problem that can deal with both thin and high optical depths.The LBM was used in a model for astronomical radiation transfer by Weih et al. [24].For a better treatment of the radiation source term, a multi-relaxation time LBM was developed by Liu et al. [25].McHardy et al. [26,27] developed a 3-D LBM model using a direct discretization of the RTE and the model produced accurate results for the ballistic radiation condition in which the medium scattering albedo is less than 0.7.An anisotropic case of Mie scattering was also computed and compared well with the LBM method [26].Mink et al. [28,29] developed a 3-D LBM method for high optical thickness situations based on the Chapman-Enskog expansion and a steady-state RTE was approximated by the Helmholtz equation and solved with the LBM.
The LBM with a GPU has shown to be very effective in numerical simulation of turbulent flow in urban environments with at least a 200 to 500 times speed-up (CPU/GPU time ratio) depending on the GPU type [30,31].Since radiative transfer is a very important component of energy transfer in the atmospheric boundary layer and the computation is very challenging, it is advantageous to exploit the LBM method with a GPU when solving the RTE.It is also beneficial to have the same computational methodology and grids set up for coupling our LBM flow model and the LBM radiative transfer model.
The objective of this study is to evaluate the accuracy and computation capability in a newly developed radiative transfer model using the lattice Boltzmann method, called RT-LBM.Specifically, we focus on RT-LBM's accuracy in simulating direct solar radiation with different incoming boundary conditions.The computation speeds using a GPU and a CPU are compared for different sizes of computational grid setups.The organization of this work is as follows: The second section describes the derivation of RT-LBM, radiation parameters, boundary conditions, and its computation method.The Monte Carlo (MC) radiative transfer model used for the comparison study is also described in this section.The third section presents the results of RT-LBM simulations of radiative transfer around buildings and compares the model results using the well-established MCM.The computation speeds of RT-LBM on a GPU are described and compared with CPU implementation.The final section gives a summary and discussion of applications of RT-LBM.
The Lattice Boltzmann Model for Radiative Transfer
Spectral radiance propagation in a scattering and absorbing medium is described by the following RTE: where L(x, n, t) is the radiance at spatial point x and time t that travels along unit vector n into the solid angle Ω with the speed of light c.µ a and µ s are the absorption and scattering coefficients of the medium, respectively; L b is the blackbody radiance of the medium; and Φ is the scattering phase function of the medium.S is other radiation source such as radiation from ground, road, and buildings in the atmospheric boundary layer.This term is epically important in the atmospheric boundary layer.It is also a very complex term which deserved a very careful and thorough study.Since this paper is focused on the solar radiation transfer, we neglected the source term hereafter in this paper.The integral term represents the radiation scattered from the other directions onto the volume surface.The spectral dependence is omitted since a participating medium with a specific wavelength band is considered in this paper.
According to a kinetic theory of radiative transport [14], the RTE can be written as the Boltzmann equation form using a probability distribution function (PDF), f of a virtual radiative particle or a photon [26,29].The relation between the PDF at a direction i ( f i (x, t)) of a virtual particle or photon and the radiance is expressed as where w i are the weights corresponding to the lattice directions (Figure 1).Neglecting the medium blackbody radiation source term for a much smaller magnitude in a clear atmospheric boundary layer, the RTE of Equation ( 1) can be written in following form: where c is the speed of light and c i = cn i in the finite directions.The Boltzmann form of the RTE can be discretized in space in specific lattice directions, i (Figure 1), and time, t, as follows [7,26]: where Φ is the discrete scattering matrix describing the probability that a photon is scattered from the i to j direction, and are the weighting factors corresponding to the direction i.This function can be used for describing the anisotropic scattering by prescribing the elements of Φ .For the isotropic scattering considered in this work, Φ = 1.The computation domain is first divided into structured cubic grids.For each grid point (0 point in Figure 1), there are 26 lattice directions and neighbor points.The computational algorithm for RT-LBM takes typical collision and streaming operations for each time step.The collision operation is computed in the terms on the right hand of Equation ( 4), where the interactions, the scattering and absorption, of the photon with medium particles in every lattice direction are accounted for.The equilibrium PDF is computed as in Equation ( 9).In the streaming operation, the probability ( + ∆, + ∆) in a grid point is propagated in every direction to neighbor grid points (1 to 26) for the next time step.The macroscopic radiative variables are computed from Equations ( 5) and (6).The time step ∆t is related to the lattice length ∆x and c, c = ∆x ∆t .With the above definitions, the macroscopic radiation quantities, I (radiative intensity) and J (radiation flux vector), are computed from the statistical moments of the particle PDF, f, which are resulted from following integral form equations providing the (2) as the connection.
It is important to point out that the equilibrium function f eq i in the collision term has a different mechanism in radiative transfer than in fluid flow.The f eq i in radiative transfer represents the interaction of photons with the surrounding medium rather than the equilibrium PDF of fluid particle collision in the LBM for fluid modeling.The f eq i is defined as follows: where Φ ij is the discrete scattering matrix describing the probability that a photon is scattered from the i to j direction, and w i are the weighting factors corresponding to the direction i.This function can be used for describing the anisotropic scattering by prescribing the elements of Φ ij .For the isotropic scattering considered in this work, The computation domain is first divided into structured cubic grids.For each grid point (0 point in Figure 1), there are 26 lattice directions and neighbor points.The computational algorithm for RT-LBM takes typical collision and streaming operations for each time step.The collision operation is computed in the terms on the right hand of Equation ( 4), where the interactions, the scattering and absorption, of the photon with medium particles in every lattice direction are accounted for.The equilibrium PDF is computed as in Equation ( 9).In the streaming operation, the probability f i (x + c i ∆t, t + ∆t) in a grid point is propagated in every direction to neighbor grid points (1 to 26) for the next time step.The macroscopic radiative variables are computed from Equations ( 5) and (6).
To keep the model non-dimensional for the comparisons and applications, the medium's scattering albedo, a, and optical depth, b, (non-dimensional parameters) are used instead of the coefficients of absorption and scattering.The characteristic length scale for the photon is l c = (µ a + µ s ) −1 , representing the length of a photon's free path between two consecutive scattering events.The relationship between these parameters is expressed as where l phy = 1 is a modeled normalized physical domain length.
The Monte Carlo Model of Solar Radiation
A MC model is used to evaluate RT-LBM.It tracks a plentiful luminosity packet (referred to hereafter as MC "photons") first and then counts them statistically for distribution of radiative intensity as a function of location, direction, and frequency.Each package carries energy L ∆t/N, where N is the number of MC photons.As a result, each MC photon represents L ∆t/(N hν) real photons, where hν denotes the energy of a real photon.
The MC model emits plentiful MC photons to mimic a radiation source.Each photon travels a distance s and then is scattered, absorbed, or re-emitted.The distance s is determined by where ξ is a random number between 0 and 1.After traveling the distance s, the photon is scattered if a new random number, ξ, is below a; otherwise, the photon is absorbed.The direction of scattering photon is described by the zenith angle θ and the azimuth angle φ.Since scattering is assumed to be isotropic in the model, θ and φ are chosen as [32] φ = 2πξ (13) The MC model uses (10) and (11) to simulate solar radiation that penetrates the model top downward.It emits 5 × 10 9 MC photons to mimic the incoming solar radiation and then tracks them in the atmosphere individually.Its statistical results are eventually used to obtain the distribution of radiative intensity.
The Computation Domains Setup and Boundary Conditions
The design of the 3-D modeling domains is shown in Figure 2. All three cubic domains have the same number of computational grid points (101 × 101 × 101) in the x, y, and z directions.In the GPU computation speed test (Section 3.3), two setups of computational grid points were made much more dense, 501 × 501 × 201, to evaluate the effect of the number of grid points on computation speed.
Results
RT-LBM is evaluated with the MC models, since high-density 3-D radiation field data for these kinds of simulation are not available for comparison.Although the MC model generally requires much more computation power, it has been proven to be a versatile All the incoming solar beam radiation is from the top boundary.The first is the incoming boundary which includes the entire top plane of the computational domain (Figure 2a), the second is the center window incoming boundary condition of the top boundary (Figure 2b), and the third (Figure 2c) is the window incoming boundary with oblique incoming direct solar radiation.A unit radiative intensity at the top surface is prescribed for direct solar radiation, f 6 = 1, f 13,14,17,18,19,22,24,25 = 0, for perpendicular beam f 13 = 1, f 6,14,17,18,19,22,24,25 = 0, for 45 • solar zenith angle beam (15)
Results
RT-LBM is evaluated with the MC models, since high-density 3-D radiation field data for these kinds of simulation are not available for comparison.Although the MC model generally requires much more computation power, it has been proven to be a versatile and accurate method for modeling radiative transfer processes [1,26,29].In the following validation cases, the same computation domain setups, boundary conditions, and radiative parameters were used in the RT-LBM and MC models.In these simulations, we set every variable as non-dimensional, including the unit length of the simulation domain in the x, y, and z directions.Normalized, non-dimensional results provide convenience for application of the simulation results.The model domain is a unit cube, with 101 × 101 × 101 grid points in these simulations except in Section 3.3.The top face of the cubic volume is prescribed with a unit of incoming radiation intensity.The rest of the boundary faces are black walls, i.e., there is no incoming radiation and outgoing radiation freely passes out of the lateral and bottom boundaries.
Direct Solar Beam Radiation Perpendicular to the Entire Top Boundary
Figure 3 shows the simulation results of the plane (Y = 0.5) with RT-LBM (left panel) and the MC model (right panel).In these simulations, the entire top boundary was a prescribed radiation beam with a unit of intensity and the other boundaries were black walls.The simulation parameters were a = 0.9 and b = 12, which is optically very thick as in a clouded atmosphere or atmospheric boundary layer in a forest fire situation [31].The two simulation methods produced similar radiation fields in most areas except the MCM produced slightly greater radiative intensity near the top boundary.Near the side boundaries, the radiative intensity values were smaller due to less scattering of the beam radiation near the black boundaries.This case is also simulated in Mink et al. [29] with their MC model.Figure 4 is a plot of the radiative intensities along the line at the center of the computation domain using these three models.The simulation results from the three methods compare well.First, the results from the two MC models agree well, which validates the correctness of our own MC model.There are small differences near the top boundary between RT-LBM and the MC models.The reason for over-estimation near the incoming boundary area is caused by a small effect of false anisotropic radiative transport in LBM where only the direct beam radiation is specified in the incoming boundary.However, after penetration of two times of the free path lengths, the diffuse radiation becomes dominant and the results are much closer to the MC.Since the optical depth is very high, the radiation intensity from the top boundary to the bottom boundary gradually has a two orders of magnitude reduction.The MC model produced a radiative intensity field that had very little fluctuation in the contour plots (Figure 2), indicating that the 10 9 photons release in this simulation is adequate for removing the statistical noise.
Direct Solar Radiation from a Top Boundary Window
In this case, a perpendicular incoming beam entered a window (0.2 × 0.2) in t dle of the top boundary (Figure 2b).The parameters (a = 0.9, b = 2) of the particu dium are comparable to episodes of heavily polluted atmosphere in some urban ar 35].The LBM simulation was also evaluated with our MC model and other MC [29] results.
Direct Solar Radiation from a Top Boundary Window
In this case, a perpendicular incoming beam entered a window (0.2 × 0.2) in the middle of the top boundary (Figure 2b).The parameters (a = 0.9, b = 2) of the particular medium are comparable to episodes of heavily polluted atmosphere in some urban areas [33][34][35].The LBM simulation was also evaluated with our MC model and other MC model [29] results.
Figure 5 compares our RT-LBM and the MC simulations.The results between the two models matched reasonably well except at the area at the top of the window.Again, the MC model produced slightly larger radiative intensity values near the radiation entrance window.The other area, away from perpendicular to the incoming window, also had much smaller values due to the scattering of the direct beam area for this relatively medium optical depth and large scattering albedo.Some difference between RT-LBM and the MC model was observed in these low-intensity areas.The RT-LBM-simulated slightly smaller values near the incoming radiation boundary are also reported in Mink et al. [29].Figure 6 compares the line samples in the z direction (Y = 0.5; X = 0.5, 0.75, 0.85) for RT-LBM, our MC model, and the other MC model [29] simulations.The simulations compare well in the centerline, excepting slight differences near the window area.The radiation intensity compares reasonably well but there are slightly more differences off the centerline.
Direct Solar Radiation from a Top Boundary Window
In this case, a perpendicular incoming beam entered a window (0.2 × 0.2) in the middle of the top boundary (Figure 2b).The parameters (a = 0.9, b = 2) of the particular medium are comparable to episodes of heavily polluted atmosphere in some urban areas [33][34][35].The LBM simulation was also evaluated with our MC model and other MC model [29] results.
Figure 5 compares our RT-LBM and the MC simulations.The results between the two models matched reasonably well except at the area at the top of the window.Again, the MC model produced slightly larger radiative intensity values near the radiation entrance window.The other area, away from perpendicular to the incoming window, also had much smaller values due to the scattering of the direct beam area for this relatively medium optical depth and large scattering albedo.Some difference between RT-LBM and the MC model was observed in these low-intensity areas.The RT-LBM-simulated slightly smaller values near the incoming radiation boundary are also reported in Mink et al. [29].Figure 6 compares the line samples in the z direction (Y = 0.5; X = 0.5, 0.75, 0.85) for RT-LBM, our MC model, and the other MC model [29] simulations.The simulations compare well in the centerline, excepting slight differences near the window area.The radiation intensity compares reasonably well but there are slightly more differences off the centerline.To analyze the effect of window size on radiative intensity, we conducted multiple runs with different window sizes (0.05 × 0.05, 0.2 × 0.2, and 0.9 × 0.9) but with identical radiation intensities prescribed at the incoming window.Figure 7 displays the X-Z cross section at Y = 0.5.The medium parameters, a = 0.5 and b = 0.1, resemble a typical clean atmosphere [36].In spite of more fluctuations, in the MC simulation results many more photon particles (10 12 ) were released (Figure 7, bottom panel) due to the low optical depth in clean air, and the RT-LBM and MC simulations compare reasonably well.The variation of the centerline radiative intensity values was less than 5% with the larger window size having a slight greater radiative intensity.In these cases of the lower scattering albedo (Figures 7 and 8), the differences between RT-LBM and the MC model are much smaller near the incoming boundary compared with the case of large scattering albedo (Figure 4).This analysis indicates that very clean atmospheric conditions can be parameterized without much error using a single computation given an identical boundary or very similar boundary conditions.To analyze the effect of window size on radiative intensity, we conducted multiple runs with different window sizes (0.05 × 0.05, 0.2 × 0.2, and 0.9 × 0.9) but with identical radiation intensities prescribed at the incoming window.Figure 7 displays the X-Z cross section at Y = 0.5.The medium parameters, a = 0.5 and b = 0.1, resemble a typical clean atmosphere [36].In spite of more fluctuations, in the MC simulation results many more photon particles (10 12 ) were released (Figure 7, bottom panel) due to the low optical depth in clean air, and the RT-LBM and MC simulations compare reasonably well.The variation of the centerline radiative intensity values was less than 5% with the larger window size having a slight greater radiative intensity.In these cases of the lower scattering albedo (Figures 7 and 8), the differences between RT-LBM and the MC model are much smaller near the incoming boundary compared with the case of large scattering albedo (Figure 4).This analysis indicates that very clean atmospheric conditions can be parameterized without much error using a single computation given an identical boundary or very similar boundary conditions.To analyze the effect of window size on radiative intensity, we conducted multiple runs with different window sizes (0.05 × 0.05, 0.2 × 0.2, and 0.9 × 0.9) but with identical radiation intensities prescribed at the incoming window.Figure 7 displays the X-Z cross section at Y = 0.5.The medium parameters, a = 0.5 and b = 0.1, resemble a typical clean atmosphere [36].In spite of more fluctuations, in the MC simulation results many more photon particles (10 12 ) were released (Figure 7, bottom panel) due to the low optical depth in clean air, and the RT-LBM and MC simulations compare reasonably well.The variation of the centerline radiative intensity values was less than 5% with the larger window size having a slight greater radiative intensity.In these cases of the lower scattering albedo (Figures 7 and 8), the differences between RT-LBM and the MC model are much smaller near the incoming boundary compared with the case of large scattering albedo (Figure 4).This analysis indicates that very clean atmospheric conditions can be parameterized without much error using a single computation given an identical boundary or very similar boundary conditions.Another situation, of solar direct beam radiation oblique to the level ground surface, is simulated.The atmospheric optical parameters of a clean air (a = 0.5, b = 0.1) situation were used.The motivation for this simulation was to look into whether direct solar radiation decreases when the solar ray is not perpendicular to the top boundary surface.The incoming solar zenith angle was set to 45° from the west and the incoming direct solar radiative intensity was set to one.The RT-LBM and MC simulations compare reasonably well (Figure 8).The decaying of the solar radiative intensity along the centerline of the perpendicular to window (Figure 7) and oblique to the window at 45° shows small differences between the perpendicular and oblique cases.This comparison analysis indicates that a perpendicular-to-window simulation can be used to approximate the radiative intensity for different solar elevation angles, which has a very significant implication for reducing the radiation computation in a situation where the atmospheric optical parameters are constant and vertically homogenous.Another situation, of solar direct beam radiation oblique to the level ground surface, is simulated.The atmospheric optical parameters of a clean air (a = 0.5, b = 0.1) situation were used.The motivation for this simulation was to look into whether direct solar radiation decreases when the solar ray is not perpendicular to the top boundary surface.The incoming solar zenith angle was set to 45° from the west and the incoming direct solar radiative intensity was set to one.The RT-LBM and MC simulations compare reasonably well (Figure 8).The decaying of the solar radiative intensity along the centerline of the perpendicular to window (Figure 7) and oblique to the window at 45° shows small differences between the perpendicular and oblique cases.This comparison analysis indicates that a perpendicular-to-window simulation can be used to approximate the radiative intensity for different solar elevation angles, which has a very significant implication for reducing the radiation computation in a situation where the atmospheric optical parameters are constant and vertically homogenous.Another situation, of solar direct beam radiation oblique to the level ground surface, is simulated.The atmospheric optical parameters of a clean air (a = 0.5, b = 0.1) situation were used.The motivation for this simulation was to look into whether direct solar radiation decreases when the solar ray is not perpendicular to the top boundary surface.The incoming solar zenith angle was set to 45 • from the west and the incoming direct solar radiative intensity was set to one.The RT-LBM and MC simulations compare reasonably well (Figure 8).The decaying of the solar radiative intensity along the centerline of the perpendicular to window (Figure 7) and oblique to the window at 45 • shows small differences between the perpendicular and oblique cases.This comparison analysis indicates that a perpendicular-to-window simulation can be used to approximate the radiative intensity for different solar elevation angles, which has a very significant implication for reducing the radiation computation in a situation where the atmospheric optical parameters are constant and vertically homogenous.
GPU Implementation of RT-LBM and Computational Speed
In solving the complex RTE, RT-LBM uses the typical collision and streaming steps that the LBM used to solve fluid simulations [30,31].By writing the Equation (4) in the following form, The complex scattering and absorption processes in the RTE are computed within each node (Figure 1) by a photon-medium collision process (Equation ( 16)), while a streaming process of photons is handled by simple linear propagation between the neighbor grid nodes (Equation ( 17)).This makes the LBM very effective within the massive parallelization of the GPU architecture.Modern GPUs have thousands of compute cores and thus are capable of running thousands of threads simultaneously.In the LBM algorithm, the data locality is satisfied and time-stepping is explicit.Each computation grid cell in the domain is assigned to a GPU thread.The entire computation procedure described in Section 2.1 can be summarized in following pseudo-code: 1.
Set up the radiation parameters and computation grids; 2.
For each iteration time step t do; 3.
For each lattice node x do; 4.
If node x is a boundary point then; 5.
Apply the boundary conditions; 6.
The model code is written in NVidia CUDA (Common Unified Device Architecture).In our implementation, the 3-D computation grids are mapped to 1-D memory.In GPUs, threads execute in lockstep in group sets called warps.The threads within each warp need to load memory together in order to use the hardware most effectively.This is called memory coalescing.In our implementation, we manage this by ensuring threads within a warp are accessing consecutive global memory as often as possible.For instance, when calculating the PDF vectors in Equation ( 15), we must load all 26 lattice PDFs per grid cell.We organize the PDFs such that all the values for each specific direction are consecutive in memory.In this way, as the threads of a warp access the same direction across consecutive grid cells, these memory accesses can be coalesced.
A common bottleneck in GPU-dependent applications is transferring data between main memory and GPU memory.In our implementation, we are performing the entire simulation on the GPU and the only time data need to be transferred back to the CPU during the simulation is when we calculate the error norm to check the convergence.In our initial implementation, this step was conducted by first transferring the radiation intensity data for every grid cell to main memory each time step and then calculating the error norm on the CPU.To improve performance, we only check the error norm every 10 time steps.This leads to a 3.5× speedup over checking the error norm every time step for the 101 3 domain case.This scheme is sufficient, but we took it a step further, implementing the error norm calculation itself on the GPU.To achieve this, we implement a parallel reduction to produce a small number of partial sums of the radiation intensity data.It is this array of partial sums that is transferred to main memory instead of the entire volume of radiation intensity data.
On the CPU, we calculate the final sums and complete the error norm calculation.This new implementation only results in a 1.32× speedup (101 3 domain) over the previous scheme of checking only every 10 time steps.However, we no longer need to check the error norm at a reduced frequency to achieve similar performance; checking every 10 time steps is only 0.057× faster (101 3 domain) than checking once a frame using GPU-accelerated calculation.In the tables below, we opted to use the GPU calculation at 10 frames per second but it is comparable to the results of checking every frame.
Tables 1 and 2 list the computational efficiency of our RT-LBM.A computational domain with a direct top beam (Figures 2 and 3) was used for the demonstration.In order to see the domain size effect on computation speed, the computation was carried out for different numbers of the computational nodes (101 × 101 × 101 and 501 × 501 × 201).The RTE is a steady-state equation, and many iterations are required to achieve a steady-state solution.These computations are considered to converge to a steady-state solution when the error norm is less than 10 −6 .The normalized error or error norm ε at iteration time step t is defined as: where I is the radiation intensity at grid nodes, n is the grid node index, and N is the total number of grid points in the entire computation domain.The single-thread CPU computation using a FORTRAN version of the code, which is slightly faster than the code in C, is used for the computation speed comparison.The speed of the RT-LBM model and MC model in a same CPU are compared for the first case only to demonstrate that the MC model is much slower than the RT-LBM.RT-LBM in the CPU is about 10.36 times faster than the MC model from the first domain setup using the CPU.A NVidia Tesla V100 (5120 cores, 32 GB memory) was run to observe the speed-up factors for the GPU over the CPU.The CPU used for the RT-LBM model computation is an Intel CPU (Intel Xeon CPU at 2.3 GHz).For the domain size of 101 × 101 × 101, the Tesla V100 GPU showed a 39.24 times speed-up compared with single CPU processing (Table 1).It is worthwhile noting the speed-up factor of RT-LBM (GPU) over the MC model (CPU) was 406.53 (370/0.91)times if RT-LBM was run on a Tesla V100 GPU.For the much larger domain size, 501 × 501 × 201 grid nodes (Table 2), the RT-LBM in the Tesla V100 GPU had a 120.03 times speed-up compared with the Intel Xeon CPU at 2.3 GHz.These results indicated the GPU is even more effective in speeding up RT-LBM computations when the computational domain is much larger, which is consistent with what we found with the LBM fluid flow modeling [30].We are in the process of extending our RT-LBM implementation to multiple GPUs which will be necessary in order to handle even larger computational domains.
The computational speed-up of RT-LBM using the single GPU over CPU is not as great as in the case of turbulent flow modeling [30], which showed a 200 to 500 speed-up using older NVidia GPU cards.The reason is turbulent flow modeling uses a timemarching transient model, while RT-LBM is a steady-state model, which requires many more iterations to achieve a steady-state solution.Nevertheless, the GPU speed-up of 120 times in RT-LBM is significant for implementing radiative transfer modeling which is computationally expensive.
The model code is also tested for the grid dependency by computing the radiation field in a same domain using three different grid densities.Figure 9 shows the radiation intensities in three different grid densities (101 3 , 201 3 , and 301 3 computation grids).The convergence criteria were set to be 10 −5 for the error norm.The LBM model produced very similar radiation fields that are hard to see the differences visually.This fact indicates that 10 −5 error norm for convergence criteria is probably good enough for this model domain situation.The convergence behaviors of the different densities of computation grids were also recorded at each iteration step and plotted in Figure 10.The three different grid densities showed a similar trend in convergence behavior with respect to iteration time.The iteration time to reach a same small error norm of 10 −6 for denser grids requires many more steps of iterations.In these simulations, the Courant number are all equal to one.Numerical stable algorithm was assured and also observed.The model code is also tested for the grid dependency by computing the radiation field in a same domain using three different grid densities.Figure 9 shows the radiation intensities in three different grid densities (101 3 , 201 3 , and 301 3 computation grids).The convergence criteria were set to be 10 −5 for the error norm.The LBM model produced very similar radiation fields that are hard to see the differences visually.This fact indicates that 10 −5 error norm for convergence criteria is probably good enough for this model domain situation.The convergence behaviors of the different densities of computation grids were also recorded at each iteration step and plotted in Figure 10.The three different grid densities showed a similar trend in convergence behavior with respect to iteration time.The iteration time to reach a same small error norm of 10 −6 for denser grids requires many more steps of iterations.In these simulations, the Courant number are all equal to one.Numerical stable algorithm was assured and also observed.The model code is also tested for the grid dependency by computing the radiation field in a same domain using three different grid densities.Figure 9 shows the radiation intensities in three different grid densities (101 3 , 201 3 , and 301 3 computation grids).The convergence criteria were set to be 10 −5 for the error norm.The LBM model produced very similar radiation fields that are hard to see the differences visually.This fact indicates that 10 −5 error norm for convergence criteria is probably good enough for this model domain situation.The convergence behaviors of the different densities of computation grids were also recorded at each iteration step and plotted in Figure 10.The three different grid densities showed a similar trend in convergence behavior with respect to iteration time.The iteration time to reach a same small error norm of 10 −6 for denser grids requires many more steps of iterations.In these simulations, the Courant number are all equal to one.Numerical stable algorithm was assured and also observed.The unit conversion in LBM fluid modeling is complex [37].However, in this steadystate radiative transfer modeling, the time step is only for the iteration computation and there is no problem to map the non-dimensional variables to variables' units.Since the LBM-RT in this paper is a steady-state problem, only conversions are needed between physical length and non-dimensional length, and the scattering and absorption coefficients and non-dimensional parameters a and b (a scattering albedo, b optical depth) can be transformed using Equations (10) and (11).The radiation intensity can be converted to a physical unit by multiplying the value of incoming boundary intensity with a physical unit.
Discussion and Conclusions
This paper reported a newly developed radiative transfer model using the lattice Boltzmann method, RT-LBM, for applications in atmospheric environments.The test results indicated the new RT-LBM has reasonably accurate results compared with traditional MC models.The model takes advantage of the LBM algorithms of collision and streaming to accelerate the computation speed.The implementation of RT-LBM using the GPU has realized a computation speed-up of 120 times faster than a CPU implementation for a very large domain.RT-LBM also had a 10 times speed-up over the MC model for a same radiative case on the same CPU, which makes a total of a 406 times speed-up for RT-LBM on a GPU over the MC model on a CPU.
The atmospheric environment is a complex composite of many different gases, aerosols, and hydrometers, and the composition is very dynamic.The optical parameters are often very different for different wavelengths of radiation.In atmospheric radiative transfer modeling, many runs for different spectral lengths with different optical parameters must be made to complete the entire radiative energy transfer domain.Since radiative modeling is computationally intensive, the newly developed RT-LBM provides advantages.However, many research areas, such as complex boundary specification, anisotropic scattering by large aerosols, and optical parameters specification, need to be carried out to realize the potential of this new method for specific applications.Some applications, such as for solar energy, are feasible with RT-LBM using broadband optical parameters to reduce the complexity.In this case, solar radiation can be divided into two spectral bands, shortwave and longwave.Two different sets of bulk optical parameters can be used for solar shortwave radiation and longwave radiation from the ground surface.
Figure 1 .
Figure 1.D3Q26 lattice used in RT-LBM.The numbered arrows are the lattice directions of the photon propagation to neighbor lattice nodes.
Figure 1 .
Figure 1.D3Q26 lattice used in RT-LBM.The numbered arrows are the lattice directions of the photon propagation to neighbor lattice nodes.
Atmosphere 2021 , 15 Figure 2 .
Figure 2. Three types of incoming radiation boundaries (a-c) and setups for the simulations.The red vertical planes are the Z-X cross sections at Y = 0.5, which are plotted in the Results section.
Figure 2 .
Figure 2. Three types of incoming radiation boundaries (a-c) and setups for the simulations.The red vertical planes are the Z-X cross sections at Y = 0.5, which are plotted in the Results section.
Figure 3 .
Figure 3.Comparison of the simulation results from RT-LBM (left panel) and the MC mod panel).The X-Z cross sections (Y = 0.5) are from the 3-D radiative intensity fields.The radia parameters are a = 0.9 and b = 12.
Figure 4 .
Figure 4. Comparison of the radiative intensity along the Z lines (X = 0.5, Y = 0.5) for RT-LB MC model, and the MC model from Mink et al. (2020).The radiative parameters are a = 0.9 12.
compares our RT-LBM and the MC simulations.The results between models matched reasonably well except at the area at the top of the window.Ag MC model produced slightly larger radiative intensity values near the radiation e window.The other area, away from perpendicular to the incoming window, a much smaller values due to the scattering of the direct beam area for this relativ dium optical depth and large scattering albedo.Some difference between RT-LB the MC model was observed in these low-intensity areas.The RT-LBM-simulated smaller values near the incoming radiation boundary are also reported inMink et
Figure 3 .
Figure 3.Comparison of the simulation results from RT-LBM (left panel) and the MC model (right panel).The X-Z cross sections (Y = 0.5) are from the 3-D radiative intensity fields.The radiative parameters are a = 0.9 and b = 12.
Figure 3 .
Figure 3.Comparison of the simulation results from RT-LBM (left panel) and the MC model (right panel).The X-Z cross sections (Y = 0.5) are from the 3-D radiative intensity fields.The radiative parameters are a = 0.9 and b = 12.
Figure 4 .
Figure 4. Comparison of the radiative intensity along the Z lines (X = 0.5, Y = 0.5) for RT-LBM, the MC model, and the MC model from Mink et al. (2020).The radiative parameters are a = 0.9 and b = 12.
Figure 4 .
Figure 4. Comparison of the radiative intensity along the Z lines (X = 0.5, Y = 0.5) for RT-LBM, the MC model, and the MC model from Mink et al. (2020).The radiative parameters are a = 0.9 and b = 12.
Figure 5 .
Figure 5. Windowed simulation results from RT-LBM (left panel) and the MC model (right panel).The X-Z cross sections (Y = 0.5) are from the 3-D radiative intensity fields.The radiative parameters are a = 0.9, b = 2.
Figure 5 . 15 Figure 5 .
Figure 5. Windowed simulation results from RT-LBM (left panel) and the MC model (right panel).The X-Z cross sections (Y = 0.5) are from the 3-D radiative intensity fields.The radiative parameters are a = 0.9, b = 2.
Figure 7 .
Figure 7. Different window size effects on the direct solar radiation intensity.The top row are from RT-LBM simulations.The bottom row are from MC model simulations.The radiative parameters are a = 0.5, b = 0.1.
Figure 8 .
Figure 8. Oblique incoming solar direct beam radiation simulation case.Comparison of the radiative intensity at X-Z cross section at Y = 0.5.for RT-LBM and the MC model.The radiative parameters are a = 0.5, b = 0.1.
7 . 15 Figure 7 .
Figure 7. Different window size effects on the direct solar radiation intensity.The top row are from RT-LBM simulations.The bottom row are from MC model simulations.The radiative parameters are a = 0.5, b = 0.1.
Figure 8 .
Figure 8. Oblique incoming solar direct beam radiation simulation case.Comparison of the radiative intensity at X-Z cross section at Y = 0.5.for RT-LBM and the MC model.The radiative parameters are a = 0.5, b = 0.1.
Figure 8 .
Figure 8. Oblique incoming solar direct beam radiation simulation case.Comparison of the radiative intensity at X-Z cross section at Y = 0.5.for RT-LBM and the MC model.The radiative parameters are a = 0.5, b = 0.1.
Figure 9 .
Figure 9.The plots of vertical (Y = 0.5) cross-sections of radiation intensities with different computation grid densities (left: 101 3 , center: 201 3 , and right: 301 3 ).The radiation parameters (a = 0.5, b = 0.1) and the domain sizes are the same for these three runs.
Figure 10 .
Figure 10.The convergence behaviors of the model simulations with three different grid densities.
Figure 9 .
Figure 9.The plots of vertical (Y = 0.5) cross-sections of radiation intensities with different computation grid densities (left: 101 3 , center: 201 3 , and right: 301 3 ).The radiation parameters (a = 0.5, b = 0.1) and the domain sizes are the same for these three runs.
Figure 9 .
Figure 9.The plots of vertical (Y = 0.5) cross-sections of radiation intensities with different computation grid densities (left: 101 3 , center: 201 3 , and right: 301 3 ).The radiation parameters (a = 0.5, b = 0.1) and the domain sizes are the same for these three runs.
Figure 10 .
Figure 10.The convergence behaviors of the model simulations with three different grid densities.Figure 10.The convergence behaviors of the model simulations with three different grid densities.
Figure 10 .
Figure 10.The convergence behaviors of the model simulations with three different grid densities.Figure 10.The convergence behaviors of the model simulations with three different grid densities. | 10,696 | 2021-10-09T00:00:00.000 | [
"Physics"
] |
catena-Poly[[[bis(methanol-κO)bis(selenocyanato-κN)manganese(II)]-μ-1,2-bis(pyridin-4-yl)ethane-κ2 N:N′] 1,2-bis(pyridin-4-yl)ethane monosolvate]
The reaction of manganese selenocyanate with 1,2-bis(pyridin-4-yl)ethane (bpa) leads to the title compound, {[Mn(NCSe)2(C12H12N2)(CH3OH)2]·C12H12N2}n. The MnII cation is coordinated by two N-bonded selenocyanate anions, two bpa ligands and two O-bonded methanol molecules, within a slightly distorted octahedral geometry. The MnII cations and the non-coordinating N-donor ligands are located on centers of inversion while the coordinating N-donor co-ligands are located on a twofold rotation axis. In the crystal, the MnII cations are linked into chains along the c-axis direction by the bpa ligands. The chains are further connected via a non-coordinating bpa ligand into layers parallel to (3-10) via O—H⋯N hydrogen-bonding interactions.
The reaction of manganese selenocyanate with 1,2-bis-(pyridin-4-yl)ethane (bpa) leads to the title compound, {[Mn(NCSe) 2 (C 12 H 12 N 2 )(CH 3 OH) 2 ]ÁC 12 H 12 N 2 } n . The Mn II cation is coordinated by two N-bonded selenocyanate anions, two bpa ligands and two O-bonded methanol molecules, within a slightly distorted octahedral geometry. The Mn II cations and the non-coordinating N-donor ligands are located on centers of inversion while the coordinating N-donor coligands are located on a twofold rotation axis. In the crystal, the Mn II cations are linked into chains along the c-axis direction by the bpa ligands. The chains are further connected via a non-coordinating bpa ligand into layers parallel to (310) via O-HÁ Á ÁN hydrogen-bonding interactions.
Susanne Wöhlert, Inke Jess and Christian Näther Comment
Recently, we have reported on the synthesis and characterization of thiocyanate coordination polymers with monodentate and bidentate neutral co-ligands like e.g. pyridine, pyridazine or 1,2-bis(pyridin-4-yl)ethylene (Boeckmann & Näther, 2010Wöhlert & Näther, 2012a;Wöhlert & Näther, 2012b). Within this project we investigated the influence of the neutral co-ligand on the structural, thermal and magnetic properties of such compounds. In further work we also investigated the influence of the anionic ligand. In this context we have reported a new coordination polymer based on cobalt(II) selenocyanate and 1,2-bis(pyridin-4-yl)ethylene, in which the cobalt(II) cations are connected by the selenocyanato anions into chains that are further linked into layers by the neutral N-donor co-ligand (Wöhlert et al., 2012). In the present investigation we tried to prepare similar compounds with manganese(II) selenocyanate and 1,2-bis(pyridin-4yl)ethane (bpa), which results in the formation of single-crystals of the title compound.
The asymmetric unit of the title compound [Mn(NCSe) 2 (C 12 H 12 N 2 )(CH 3 OH) 2 ] n . nC 12 H 12 N 2 solvate consists of a manganese(II) cation and one non-coordinating bpa ligand which are located on a center of inversion, one coordinating bpa ligand on a 2-fold rotation axis and one selenocyanate anion and one methanol molecule in general positions (Fig. 1).
In the crystal structure each manganese(II) cation is coordinated by two terminal N-bonded selenocyanate anions, two Obonded methanol molecules and two N-bonded bpa ligands within slightly distorted octahedra. The MnN 4 O 2 distances ranges from 2.180 (3) Å to 2.322 (2) Å with angles around the manganese(II) cation between 88.87 (9) ° to 91.13 (9) ° and of 180 ° (Tab. 1). The manganese(II) cations are linked by the bpa ligands into chains which elongate in the direction of the crystallographic c-axis (Fig. 2). These chains are further linked into layers parallel to the (3 -1 0) plane by noncoordinated bpa molecules via O-H-N hydrogen bonding (Fig. 3).
Refinement
The C-H H atoms were positions with idealized geometry (methyl H atoms allowed to rotate but not to tip) and were refined with U iso (H) = 1.2 U eq (C) (1.5 for methyl H atoms) using a riding model with C-H = 0.93 Å, C-H 2 = 0.97 Å and C-H 3 = 0.96 Å. The O-H H atom of the methanol molecule was located in difference map, its bond length was set to 0.82 Å and finally it was refined with U iso (H) = 1.2 U eq (O) using a riding model. SHELXL97 (Sheldrick, 2008); molecular graphics: XP in SHELXTL (Sheldrick, 2008) and DIAMOND (Brandenburg, 2011); software used to prepare material for publication: XCIF in SHELXTL (Sheldrick, 2008).
Figure 1
The crystal structure of the title compound with labeling and displacement ellipsoids drawn at the 50% probability level.
Special details
Geometry. All e.s.d.'s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.'s are taken into account individually in the estimation of e.s.d.'s in distances, angles and torsion angles; correlations between e.s.d.'s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.'s is used for estimating e.s.d.'s involving l.s. planes. Refinement. Refinement of F 2 against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F 2 , conventional R-factors R are based on F, with F set to zero for negative F 2 . The threshold expression of F 2 > σ(F 2 ) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F 2 are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
Fractional atomic coordinates and isotropic or equivalent isotropic displacement parameters (Å 2 )
x y z U iso */U eq | 1,239 | 2013-03-20T00:00:00.000 | [
"Chemistry",
"Materials Science"
] |
Paleostress Analysis of the Northeastern Limb of Pulkhana Anticline /NE Iraq: Implications for Arabian Plate Tectonic Evolution
Pulkhana anticline is located in Tuzhurmatu area, about 50 km SE of Kirkuk city. The study area forms a part of the Zagros Folded Zone which is situated in the unstable shelf of Iraq within the physiographic zone called Foothill Zone (in the middle of HemrinMakhul subzone). The north eastern limb of the anticline reaches to 50o and the dip of the south western limb reaches to 70o. The core of the structure comprises the rocks of Fat’ha Formation surrounded by rocks of Injana and Mukdadiya Formations, whereas Bai-Hasan Formation forms the slopes of the low hills surrounding the anticline. These Formations range in age from Middle Miocene to Pliocene. More than 761 readings of joint planes were collected from 20 stations within 5 traverses in the study area. The study of joint sets and system was within Injana and Mukhdadiya formations, along traverses with 3-5 stations for each travers track. The results showed the presence of two sets of tension joints (bc, ac) and five sets of shear joints, through defining the maximum stress axis (σ1) and acute angle dividers for these conjugate joints. It was determined that two directions of Paleostress are present in the area, which are NE-SW and NW-SE. The direction of the first major stress (NE-SW) is orthogonal with, or normal to, the fold axis in the study area, which can be considered as a horizontal component which resulted from oblique collision of Arabian and Eurasian Plates. This old compressive stress is the reason behind the formation of the tension joint (ac) and shear joints, where the sets (ac) and system are perpendicular-semi perpendicular to the bedding plane, as they were formed at an early stage of folding. Also, the ) joint was formed in five tectonic stages with different time intervals. Joints formed in different tectonic stages, in the study area, are attributed to oblique collision of Arabian and Eurasian plates and counter clockwise rotation of Arabian plate relative to Eruasian plate.
1-Introduction
The state of stress in the rocks is generally anisotropic and is defined by stress ellipsoid axes, which characterize the magnitude of the principal stresses [1]. Stress analysis is a useful and popular tool for structural and seismological elements [2]. If an ellipsoidal body undergoes positive compression, the longest axis is the ellipsoid's major stress (σ1), the intermediate axis is the intermediate stress (σ2), and the shortest axis is the minimum stress (σ3) [3]. Several paleostress inversion methods have been developed using graphical and analytical means. The orientation and shape of the stress ellipsoid, with respect to earth's surface, controls the type, orientation and slip sense of faults developed in an area [4]. Extensional structures grow perpendicular to a minimum principal stress (σ3) [5]. (σ1) is oriented perpendicular to compressional structures, and for strike-slip fault and other tectonic structures produced by shearing, the intermediate stress is vertical [6,7]. Although paleostress analysis proved to be empirically valid and successful, there are some limitation to its usage [8]. Paleostress inversion studies are used to understand the effects of past slip events along active fault by making use of deflection in the orientations of the stress axes to recognize stress perturbations near the major fault [9,10]. Standard paleostress inversion techniques are used only for determining the orientations and relative magnitude (stress ratio) of the regional principal stress axes [4]. It is assumed that slip on the activated pre-existing planes of weaknesses and newly developed faults occur in accordance with the orientation and relative magnitudes of the principal stresses [11].
When the fault cannot be observed by paleostress analysis, joints have been used instead [12]. Joints as paleostress markers provide the record of stress orientation at the time of propagation and are often extensional in nature [13,14]. Joints can be used separately or collectively with other structures, including contractional fractures such as stylolites, to constrain the stress field that led to their formation [12]. The assumption is that the fractures formed in the same homogenous stress field, i.e. related to the same deformational event that the rocks themselves are fairly homogenous, do not significantly perturb the stress field in their vicinity, and also that the structures have not rotated significantly since their initiation [15].
This study aims to rebuild the evolution of the collision between the northeastern parts of the Arabian plate with the Eurasian plate, in addition to determining the plaeostress in the northeast limb of the Pulkhana anticline. In order to achieve that, we analyzed more than 760 joint slip surfaces in the study area.
2-Geological Setting
The study area is located between latitudes (34º 39' 34.9" N _ 35º 01' 15.6") and longitudes (44º 31' 52.7" E_ 45º 00' 57.3" E). The northwestern end of the study area is located in Salah Al-Deen Governorate, while the southeastern end is located in Diyala Governorate (Figure-1). The study area forms a part of the Zagros Folded Zone which is situated within the physiographic zone called Foothill Zone (in the middle of Hemrin-Makhul subzone) in the unstable shelf of Iraq [16] (Figure-2). Pulkhana anticline (study area) is one of the important structural elements in Hemrin-Makhul subzone. It has asymmetric long anticline trends (NW_SE). The anticline is overthrust from the NE in the exposed rocks and the most of the SW limb of the anticline is absent beneath the recent deposits (Figure-3). The dip of the north eastern limb of the anticline reaches 50ᵒ, while the dip of the south western limb reaches to 70ᵒ. Accordingly, Pulkhana anticline is an upright anticline depending on its axial plane, while it is a gentle anticline depending on its interlimb angle (Table-1& Figure-4), according to the classification of a previous study [17]. This information must be moved to the first page The core of the structure comprises rocks of Fat'ha Formation surrounded by rocks of Injana and Mukdadiya formations, whereas Upper Bakhtiary Formation forms the slopes of the low hills surrounding the anticline. These Formations range in age from Middle Miocene to Pliocene [18]. The present study was conducted in the exposed Fat'ha, Injana, and Mukdadiya Formations. In the study area, the exposed Fat'ha Formation includes 641.8m thick layers of gypsum, limestone, claystone and marl. Gypsum is white, occasionally light grey and medium grey, moderately hard to hard, massive, occasionally showing nodules, bands and rosy shapes, randomly fractured, and occasionally filled by clay minerals. Limestone beds are pale yellowish brown to dark yellowish brown, moderately hard to hard, thinly to thickly bedded, fossiliferous, fractured, occasionally recrystallized, and showing karst features in some places, while anhydritic and silty in other places. Claystone is reddish brown, occasionally moderately yellowish brown, soft, and eroded. Injana Formation is comprised of 1393.6 m thick layers of sandstone and claystone. Sandstone is light brownish grey to light olive grey, occasionally reddish brown, firm to hard, fine to coarse grained, thickly bedded, poorly cemented, calcareous, occasionally ripple marked, and cross bedded. Claystone is moderately reddish brown to moderately brown, soft to firm, thinly bedded, moderately bedded in some places, silty, and fractured. Mukdadiya Formation includes 655.3m thicklayers of pebbly sandstone, sandstone and claystone. Sandstone is light olive grey to light grey, occasionally yellowish grey, soft to firm, coarse to very coarse grained, poorly cemented, cemented in some places, moderately bedded, occasionally cross bedded, silty, fractured, and pebbly. The size of the pebbles ranges 1-1.5 cm. Claystone is moderately brown to light brown, soft to firm, thinly to moderately bedded, silty, fractured, and eroded. Marl is greenish grey to very pale green, soft, thinly bedded, and eroded. Bai-Hassan Formation is the youngest Formation in the study area, as recognized on the basis of the first appearance of thick conglomeratic bed, according to previously reported criteria [19,20]. It is characterized by 741m thick layers, calcareous and poorly cemented conglomerates with intercalated light olive grey sandstone lenses and light brown to moderately yellowish brown claystone. Conglomerate consists of gravels of different sizes (1-15 cm). The shape of the gravels is spherical, rod, or bladed .
3-Methodology
The methodology of this investigation included three stages; first, the stage of collection of data from academic books, researches, papers, maps and satellite images as well as personal communications. Second, field work stage which started with 3 field exploration trips and 13 field work trips to the study area. The required equipment included a compass device (Brunton( to measure bedding planes and joint planes and GPS devices (Garmin type) to calculate the location attitude in 5 traverses (Figure- -Brief description of the exposed rocks of each formation in the study area.
Desk work stage included: -Analyzing and describing joints data by stereographic projection. -Calculating the paleostress direction of the conjugate joints by measuring their acute bisector , using Win-Tensor software.
-Preparing the topographic map of the study area by using GIS software.
Paleostress analysis
In this study, conjugate shear joints were used for analyzing paleostress.
4-1 joints
Joints are among the most common of all geological features. Hardly any outcrop of rock exist that does not have some types of joints through it. They provide the sequence of tectonic events during which the joints were formed and the physical characteristics of the rocks in which they occur [21]. Studying of joints in rocks, however, shows that the joints' geometry is self-similar, which means that the joints have the same geometric pattern and spatial distribution regardless of whether the scale at which they are viewed is a microscopic scale, an outcrop scale, or a regional scale [21]. Because the outcrop scale is easy to observe and is the basis of most field geology studies, we emphasized the descriptive characteristics of joints at this scale. Joints in the study area were classified according to their geometrical relations with the three perpendicular geometrical axes (a, b & c.), where (a) is parallel to the dip direction, (b) is parallel to the strike direction and (c) is perpendicular to a and b. This classification was used by an earlier work [21], and followed by later reports [22][23][24][25][26] (Figure-5).
2-1-1 joints analysis in the study are
Dip direction and dip angle were measured for the joint planes as well as the attitude of the bedding plane which contains the joints. The shear systems appeared either as individual or conjugate. Many collected data were neglected due to the nonexistence of the two conjugate joints of the system in the same station.
Traverse 3/ station 4
As shown in Figure-30, (hk0) acute about (b) axis and (0kl) acute about (c) axis shear systems were recognized. The paleostresses which were analyzed from the shear joint systems are described below.
Traverse 4/ station 4
As shown in Figure-37, (hk0) acute about (b) axis and (0kl) acute about (c) axis shear systems were recognized. The paleostresses which were analyzed from the shear joint systems are described below.
Extensional stress
Analyzed stress from (h0l) acute about (c) with (σ3) attitudes (04°/018°) was found in this station (Figure-40 B). Individual (hk0) acute about (b) axis shear system as well as (ac) and (bc) sets were also recorded in this station.
Traverse 5/ station 3
NE-SW compressive stress from (h0l) acute about (a) axis shear system with (σ1) attitude (02°/015°) was recognized in this station ( Figure-42 and B). Individual (hk0) acute about (b) axis system as well as (ac) and (bc) sets were also recognized. Figure-43 A shows a field photo for (h0l) acute about (a) in the study area. acute about (a) in traverse 5/ station 3.
Discussion
Pulkhana anticline is considered as of a Fault -Propagation Fold type [27][28][29][30][31] . The dip of the south western limb of the anticline was previously reported to be about 70⁰, whereas the dip of the north eastern limb reached to 50º [18]. Accordingly, Pulkhana anticline is classified as a gentle fold depending on its interlimb angle or as an upright fold depending on its dip of axial plane.
Pulkhana anticline intersects by many of joint sets, some of which are parallel to the hinge line of the fold and the others are vertical. From the study of these joints in Injana and Mukdadiya Formations across five traverses vertical to the strike of bedding, two types of orthogonal extensional joint sets (ac & bc) and five types of shear joint systems (hk0>a, hk0>b, h0l>a, h0l>c & 0kl>c) were distinguished.
Locating the maximum principal stress axis (σ1) from the acute bisectors of shear joints clarified that the most prevalent Paleostress directions are NE-SW and NW-SE. The main principle stress (NE-SW) was represented in the form of (ac) tension joints and (hk0>a & h0l>a) shear joints, whereas the secondary principal stress (NW-SE) was represented in the form of (bc) tension joints and (hk0>b) shear joints. The (h0l>c & 0kl>c) shear joints developed by the extensional phase were associated with NE-SW and NW-SE compressive stresses.
The formation of joints in multi-tectonic stages (i.e. different directions of maximum principal stress σ1) in the study area could be attributed to the oblique collision between the Arabian plate and Eurasian plate, as well as the counter clockwise rotation of the former relative to the latter. Conclusions 1-Joints formed in different tectonic stages, in the study area are attributed to oblique collision of the Arabian and Eurasian plates, in addition to the counter clockwise rotation of the former relative to the latter plates. 2-Most probably, the growth of Pulkhana anticline started in the Middle Miocene as fold-related fault and developed in late Pliocene with influences by the conjugated strike slip fault. 3-Paleostress analysis for fracture structures indicated that the studied area was subjected throughout it geological history to the compression stresses (NE-SW trend) which was perpendicular to Pulkhana anticline axis. 4-Pulkhana anticline undergoes more than one tectonic stress regime, and this can be noticed from the different values of stress directions in the study area. the best evidence enhancing this scenario is the existence of (hkl) joints in the study area. 5-The study of joints in the NE limb of Pulkhana anticline clarified that the most prevalent paleostress directions are NE-SW and NW-SE. 6-The NE-SW stress direction is considered as a primary compressive stress resulted from the oblique collision between the Arabian plate and Eurasian plate, which seems responsible for the initial folding in the study area. Whereas the NW-SE stress direction is considered as a secondary compressive stress developed during the relaxation event succeeding the primary compressive pulse. 7-The NE-SW extensional stress is considered as a releasing phase that is associated with the final uplift of the fold. 8-The NW-SE extensional face is considered as an extensional stress related to the primary NE-SW compressive stress. 9-The hkl joints could be resulted from the local stresses. Finally, a detailed seismic reflection section is recommended to achieve a better view of the stratigraphy and to indicate a conclusive evidence of the occurrence of the strike slip movement. | 3,470.6 | 2020-11-28T00:00:00.000 | [
"Geology"
] |
Energy dependence of J /ψ production in Au + Au collisions at √ s NN = 39 , 62 . 4 and 200 GeV
The inclusive J /ψ transverse momentum spectra and nuclear modification factors are reported at mid- rapidity ( | y | < 1 . 0) in Au + Au collisions at √ s NN = 39, 62.4 and 200 GeV taken by the STAR experiment. A suppression of J /ψ production, with respect to the production in p + p scaled by the number of binary nucleon–nucleon collisions, is observed in central Au + Au collisions at these three energies. No significant energy dependence of nuclear modification factors is found within uncertainties. The measured nuclear modification factors can be described by model calculations that take into account both suppression of direct J /ψ production due to the color screening effect and J /ψ regeneration from recombination of uncorrelated charm–anticharm quark pairs.
The inclusive J/ψ transverse momentum (pT ) spectra and nuclear modification factors are reported at midrapidity (|y| < 1.0) in Au+Au collisions at √ sNN = 39, 62.4 and 200 GeV taken by the STAR experiment.A suppression of J/ψ production, with respect to the production in p + p scaled by the number of binary nucleon-nucleon collisions, is observed in central Au+Au collisions at these three energies.No significant energy dependence of nuclear modification factors is found within uncertainties.The measured nuclear modification factors can be described by model calculations that take into account both suppression of direct J/ψ production due to the color screening effect and J/ψ regeneration from recombination of uncorrelated charm-anticharm quark pairs.
I. INTRODUCTION
The Relativistic Heavy Ion Collider (RHIC) was built to investigate strongly interacting matter at high tem-perature and energy density in the laboratory through high-energy heavy-ion collisions.At extremely high temperatures and baryon densities, a transition from the hadronic phase of matter to a new deconfined partonic phase, the Quark-Gluon Plasma (QGP), is predicted by Quantum Chromodynamics (QCD) [1].It has been proposed that the color potential in quarkonia could be screened by quarks and gluons in the QGP [2].Quarkonia are bound states of charm-anticharm (cc) or bottomantibottom (b b) quark pairs.As a consequence, quarkonium production cross sections in heavy-ion collisions divided by the corresponding number of binary nucleonnucleon collisions, N coll , are expected to be suppressed compared to those in p + p collisions if QGP is formed in heavy-ion collisions.
The J/ψ is the most abundantly produced quarkonium state accessible to experiments.Over the past twenty years, J/ψ suppression in hot and dense media has been a topic of growing interest.Various measurements of J/ψ have been performed in different collision systems and at different energies, and indeed a suppression of J/ψ production has been observed [3][4][5][6].A similar centrality dependent suppression was found at SPS (S+U √ s N N = 19.4GeV [7], Pb+Pb √ s N N = 17.2 GeV [8] and In+In √ s N N = 17.2 GeV [5]) and at RHIC (Au+Au √ s N N = 200 GeV [9,10]) for midrapidity, even though the temperature and energy density reached in these studies are significantly different [11].Furthermore, a stronger suppression at forward rapidity (1.2 < |y| < 2.2) compared to midrapidity (|y| < 0.35) was observed at RHIC [9].
These observations indicate that effects other than color screening are important for J/ψ production.Among these effects, J/ψ production from the recombination of cc [12] was suggested to explain the suppressions at SPS and RHIC [13].With the higher temperature and density at RHIC, the increased contribution due to regeneration from the larger charm quark density could compensate for the enhanced suppression.This could also explain a stronger suppression at forward rapidity at RHIC where the charm quark density is lower compared to midrapidity [12][13][14][15].In addition to the color screening and regeneration effects, there are also modifications from cold nuclear matter (CNM) effects and other final state effects, such as nuclear parton distribution function modification [16], initial energy loss [17], Cronin effect [18], nuclear absorption [19] and dissociation by co-movers [20].The suppression due to these effects has been systematically studied experimentally via p+A collisions [21][22][23][24][25][26][27][28][29].However, the extrapolation from p+A to A+A is still model dependent.
The nuclear modification factor of J/ψ production in Pb-Pb collisions at √ s N N = 2.76 TeV has been measured at the LHC [30][31][32].In comparison with results from RHIC in Au+Au collisions at √ s N N = 200 GeV, the J/ψ production is significantly less suppressed, which suggests significantly more recombination contribution at LHC energies.The measurement of J/ψ production at forward rapidity (1.2 < |y| < 2.2) in Au+Au collisions by the PHENIX experiment at √ s N N = 39 and 62.4 GeV indicates a similar suppression level as that at √ s N N = 200 GeV [33].Measurements of J/ψ invariant yields at different collision energies at RHIC in different centralities at mid-rapidity can shed new light on the interplay of these mechanisms for J/ψ production and properties of the medium.
In this letter, we further study the collision energy dependence of J/ψ production and test the hypothesis of these two competing mechanisms of color screening and regeneration in the hot medium.We present measurements of the J/ψ production at midrapidity (|y| < 1) with the STAR experiment in Au+Au collisions at √ s N N = 39, 62.4 and 200 GeV using data collected during 2010 and 2011 running at RHIC and study the nuclear modification factors at these energies.The data sample used in this analysis (RHIC Run 2011) is different from the previous published results [10] (RHIC Run 2010) for Au+Au collisions at √ s N N = 200 GeV.
II. EXPERIMENT AND ANALYSIS
The STAR experiment is a large-acceptance multipurpose detector which covers full azimuth with pseudorapidity of |η| < 1 [34].The Vertex Position Detector (VPD) was used to select Au+Au collisions that were within ±15 cm of the center of the STAR detector [35].The total numbers of 0-60% central minimum-bias events that are used in this analysis are 182 million, 94 million, and 360 million for 39, 62.4 and 200 GeV, respectively.The J/ψ is reconstructed through its decay into electron-positron pairs, J/ψ → e + e − (branching ratio Br(J/ψ → e + e − )= 5.97± 0.03% [36]).The primary detectors used in this analysis are the Time Projection Chamber (TPC) [37], the Time-of-Flight (TOF) detector [38], and the Barrel Electromagnetic Calorimeter (BEMC) [39].The TPC provides tracking and particle identification via the ionization energy loss ( dE/dx ) of charge particles.The TOF [38] measures the velocity of particles, which greatly improved electron identification at low p T .This detector, combined with the TPC [37], clearly identifies electrons by rejecting hadrons in the low and intermediate p T range (p T < 1.5 GeV/c).The BEMC [39], a lead-scintillator calorimeter, is used to improve the electron identification at high p T (p T > 1.5 GeV/c).The electron identification method is similar to Ref. [10,40].
Collision centrality was determined from the uncorrected charged particle multiplicity dN/dη within |η| < 0.5 using a Monte Carlo (MC) Glauber model [41].The dependence of dN/dη on the collision vertex position V z and the beam luminosity has been included to take acceptance and efficiency changes on the measured dN/dη into account.For each collision centrality, an average impact parameter, b , average number of participants, N part , and average number of binary collisions, N coll , were related to an observed multiplicity range.Centrality definitions in Au+Au collisions for √ s N N = 39, 62.4 and 200 GeV are summarized in Table I.
The daughter tracks of the J/ψ candidates are required to have at least 25 out of the 45 possible TPC hits, and a distance of closest approach (DCA) from The nσ e cut for electron identification is −1.5 < nσ e < 2. The combination of these cuts enables the identification of electrons and positrons over a wide momentum range [10].The electron sample purity integrated over the measured p T region is over 90%.Our measurement of J/ψ covers the rapidity range |y| < 1 due to the STAR acceptance and decay kinematics.The J/ψ signal is extracted by subtracting combinatorial background reconstructed from the unlike-sign mixed-events spectrum.The like-sign and mixed-events distributions are obtained as follows: 1) Like-sign: Electrons (or positrons) of the same charge sign are paired within the same event.
2) Mixed-events: Events are categorized according to the position along the beam line of the primary vertex and centrality of the event.Electrons from one event are paired with positrons from other random events from an event pool with similar global features such as collision centrality and vertex position.The vertex position is divided into 20 bins and the event centrality into 10 bins to ensure that the mixing is done using tracks from similar conditions.
The invariant mass distribution of e + e − pairs before and after the combinatorial background subtraction in 0 -60 % central Au+Au collisions are shown in Fig. 1 for √ s N N = 39, 62.4, and 200 GeV.The mixed-event background is normalized to the like-sign distribution in a mass range of 2.0 -4.0 GeV/c 2 and the normalized shapes show close agreement.For the results reported in this paper, we use the mixed-event method for the combinatorial background subtraction.The mass distribution of e + e − is fitted by J/ψ signal shape obtained from MC simulation, which includes the resolution of the TPC and bremsstrahlung of the daughter electrons in the detector, combined with a straight line for residual background.
The residual background mainly comes from the correlated open charm decays and Drell-Yan processes.The raw J/ψ signal is obtained from bin counting in the mass range 2.7 -3.2 GeV/c 2 after combinatorial and residual background subtraction.The fraction of J/ψ counts outside of the mass window was determined from the J/ψ MC simulated signal shape and was found to be ∼ 5%.This was used to correct the number of J/ψ counts.The modified J/ψ signal shape due to internal radiation was also considered and has been treated as a source of systematic uncertainties (∼ 5%) in yield extraction. Signalto-background ratios for these three energies are observed to be 0.62, 0.39, and 0.04, respectively for 0 < p T < 3 GeV/c (39 and 62 GeV) and 0 < p T < 5 GeV/c (200 GeV).The J/ψ invariant yield is defined as (2) where N J/ψ is the uncorrected number of reconstructed J/ψ, N EV T is the number of events in the relevant Au+Au centrality selection, A is the detector's geometric acceptance times its efficiency (about 0.05 ∼ 0.12 for 0 < p T < 5 GeV/c), and ∆p T and ∆y are the bin width in p T and y, respectively.Acceptance and efficiency corrections (TPC and BEMC related) are estimated by MC simulations with GEANT3 package [42].Some of the efficiency corrections such as TOF and dE/dx related cuts are extracted directly from data [43].
The systematic uncertainty on the efficiency correction obtained from MC simulations is estimated by comparing the difference for the particle identification cut distributions between simulation and data.In order to account for the contributions from radiation losses and correlated background in yield extraction procedure, the mass window and methods for signal counting have also been varied to evaluate the uncertainties.The total systematic uncertainties in the integrated p T range are 20%, 11%, and 10% at √ s N N = 39, 62.4, and 200 GeV, respectively.
Table II contains a summary of the contributions from
III. RESULTS
The J/ψ invariant yields as a function of p T in Au+Au collisions at √ s N N = 39, 62.4, and 200 GeV for different centrality bins are shown in Fig. 2. As expected, the J/ψ invariant yields are larger in Au+Au collisions at larger center-of-mass energies.Results from the current measurements (year 2011) are compared with the published results from data taken in 2010.Nuclear modification factors (R CP , R AA ) are used to quantify the suppression of J/ψ production.R CP is a ratio of the J/ψ yield in central collisions to peripheral collisions (centrality: 40-60%) and defined as follows: where N coll and dN/dy N coll are the average number of nucleon-nucleon collisions and J/ψ yield per nucleon- nucleon collision in a given centrality, respectively.dN/dy is obtained from the integration of the J/ψ p T spectrum.Due to the limited p T coverage of the measurements, the extrapolation of the p T spectrum is done by the two following functions: where a, b, n, h and l are free parameters.The difference between these two functional fits has been taken as a source of systematic uncertainty.Note that R CP reflects only relative suppression -if the modification of J/ψ yield in central and peripheral bins is the same, R CP is equal to 1.The R CP , as a function of the average number of participant nucleons ( N part ), for Au+Au collisions at √ s N N = 39, 62.4 and 200 GeV, are shown in Fig. 3.
Note that the peripheral bin selection is 40 -60% central Au+Au collisions for these three energies.The systematic uncertainties for R CP are mainly from TPC tracking cuts.Systematic uncertainties originating from yield extraction, BEMC and TOF related cuts, and nσ e cuts, are negligible or mostly cancel.Significant suppression is observed in central Au+Au collisions at 62.4 GeV, which is similar to 200 GeV.
R AA is obtained from comparing J/ψ production in A+A collisions to p + p collisions, defined as follows: where d 2 N AA /dp T dy is the J/ψ yield in A+A collisions and d 2 σ pp /dp T dy is the J/ψ cross section in p + p collisions.The nuclear overlap function is defined as T AA = N coll /σ pp inel , where σ pp inel is the inelastic cross section in p + p collisions and is equal to 34 ± 3, 36 ± 3 and 42 ± 3 mb for 39, 62.4 and 200 GeV [44], respectively.If there are no hot or cold nuclear matter effects, the value of R AA should be unity.
To obtain R AA at √ s N N = 39 and 62.4 GeV, we have to derive the J/ψ cross section in p + p collisions because there are no measurements available for the p + p references at STAR for these two energies.There are several p + p measurements from fixed target p+A experiments [45][46][47] and from Intersecting Storage Ring (ISR) collider experiments [48,49] near these two energies.However, the p T shapes from Ref. [48] and Ref. [49] at 63 GeV are inconsistent with each other and the cross section measurements at 39 GeV are comparable to (or even larger than) that at 63 GeV.Therefore, we use the cross section derived in Ref. [50] as our p + p reference baselines for √ s N N = 39 and 62.4 GeV.In Ref. [50], the world-wide experimental data on J/ψ cross sections and kinematic distributions in p + p and p+A collisions at √ s = 6.8 -7000 GeV are examined in a systematic way.The authors explore the √ s dependence of the inclusive cross section, rapidity and transverse momentum distributions phenomenologically and develop a strategy for the interpolation of the J/ψ cross section and kinematics at RHIC energies.This approach is found to describe the world-wide J/ψ data reasonably well.With this strategy, the predicted J/ψ cross section times branching ratio at √ s = 39 and 62.4 GeV in mid-rapidity are Br(J/ψ → e + e − )dσ/dy| |y|<1.0= 9.0 ± 0.6 and 17.6 ± 2.1 nb, respectively.
With the derived p + p references for 39 and 62.4 GeV, and the measured p + p baseline at 200 GeV [40,51], we obtain the R AA of J/ψ for p T > 0 as a function of N part in Au+Au collisions at √ s N N = 39, 62.4, and 200 GeV, as shown in Fig. 4 (a).The differential R AA in J/ψ p T is shown in Fig. 4 (b).The measurements from SPS [5,7,8] and LHC [52] and the expected R AA with complete ψ(2S) and χ c melting and no modification of the J/ψ yield [53] are also included for comparison.Suppression of J/ψ production is observed in Au+Au collisions from 39 to 200 GeV with respect to the production in p + p scaled by N coll .For R AA as a function of N part , no significant energy dependence is observed within uncertainties from 17.2 to 200 GeV.For the J/ψ R AA as a function of p T , significant suppression is observed at low p T (p T < 2 GeV/c) from 39 to 200 GeV.The modification of J/ψ production is consistent within the systematic uncertainties for these collision energies.The ALICE [52] points are also shown for comparison.In comparison with PHENIX results at forward rapidity [33], the suppression of J/ψ shows no rapidity dependence at √ s N N = 39 nor 62.4 GeV within uncertainties.
As shown in Fig. 5, theoretical calculations [13] with initial suppression and J/ψ regeneration describe the data within 1.6 standard deviation.The R AA results as a function of collision energy for 0-20 % centrality are also shown in Fig. 6.Since ALICE data show no significant centrality dependence, we think it is appropriate to use the available 0-10% data at 2.76 TeV [52].Theoretical calculations are also included for comparison.The calculations include two components: direct suppression and regeneration.The direct suppression represents the "anomalous" suppression of primordial J/ψs due to CNM and color screening effects.According to the model calculations, the R AA is about 0.6 for central collisions with only CNM effects.The regeneration component is responsible for the contribution from the recombination of correlated or uncorrelated cc pairs.The feed-down to J/ψ from χ c and ψ has been taken into account in the calculations.No significant energy dependence of R AA for 0-20 % centrality is observed at √ s N N < 200 GeV.
As the collision energy increases the QGP temperature increases, thus the J/ψ color screening becomes more significant.However, in the theoretical calculation [13], the regeneration contribution increases with collision energy due to the increase in the charm pair production, and nearly compensates the enhanced suppression arising from the higher temperature.The higher R AA at ALICE may indicate that the surviving J/ψs are mainly coming from the recombination contribution.The model calculation describes the energy dependence of J/ψ production from SPS to LHC.
IV. SUMMARY
In summary, we report on recent STAR measurements of J/ψ production at midrapidity in Au+Au collisions at √ s N N = 39, 62.4 and 200 GeV.Suppression of J/ψ production, with respect to the production in p + p scaled by the number of binary nucleon-nucleon collisions, is observed at these three energies.The observed suppression is consistent with the suppression of directly produced J/ψ mesons.No significant energy dependence of the nuclear modification factor (either R AA or R CP ) is found within uncertainties.Model calculations, which include direct suppression and regeneration, reasonably describe the centrality and energy dependence of J/ψ production in high-energy heavy ion collisions.The error bars represent the statistical uncertainties.The boxes represent the systematic uncertainties.The shaded bands indicate the uncertainties from N coll and the uncertainties for the derived baselines for 39 and 62.4 GeV [50].The ALICE points are from [52].The ratio of feed-down J/ψ from higher chamonium states to inclusive J/ψ is from [53].The results of "RHIC run 10" are from [40] and [10].
FIG. 1 .
FIG.1.The e + e − invariant mass distribution of J/ψ candidates (black open circles), like-sign combinatorial background (blue dashed line), mixed event combinatorial background (red solid line), and J/ψ candidates with mixed event background subtracted (black solid circles) in Au+Au collisions at √ sNN = 39 (a), 62.4 (b), and 200 GeV (c) for centrality 0 -60 %.The J/ψ signal shape from a MC simulation is combined with a linear residual background and is fitted to the combinatorial background subtracted data (black solid line).
FIG. 2 .FIG. 3 .
FIG. 2. J/ψ invariant yields in Au+Au collisions at√ sNN = 39, 62.4 and 200 GeV as a function of pT for different centralities.The error bars represent the statistical uncertainties.The boxes represent the systematic uncertainties.The STAR published results are from Refs.[40] and[10].
FIG. 5 .
FIG. 5.The results of J/ψ RAA as a function of Npart, in comparison with model calculations [13], for Au+Au collisions at √ sNN = 200 (a), 62.4 (b) and 39 GeV (c), respectively.The error bars represent the statistical uncertainties.The boxes represent the systematic uncertainties.The shaded bands indicate the uncertainties from N coll and the uncertainties in the derived baselines for 39 and 62.4 GeV [50].Solid lines are J/ψ modification factors from model; dash-dotted line are suppressed primordial production; dashed line are regeneration component.
TABLE I .
Summary of centrality bins, average number of participants Npart , number of binary collisions N coll , and impact parameter b from MC Glauber simulation of Au+Au at √ sNN = 39, 62 and 200 GeV.The errors indicate uncertainties from the MC Glauber calculations.
15e primary vertex of less than 3 cm.Low momentum (p < 1.5 GeV/c) electron and positron candidates are separated from hadrons by selecting on the inverse velocity, |1/β − 1| < 0.03, where β is the velocity measured in the TOF normalized by the speed of light.The cut value is determined using a three standard deviation window.At high momentum (p > 1.5 GeV/c), a cut on the ratio of momentum to energy deposited in towers from BEMC (0.3 < pc/E < 1.5) is used to suppress hadrons.The electron and positron candidates are then identified by their specific energy loss ( dE/dx ) in the TPC.More than15TPC hits are required to calculate dE/dx .The normalized dE/dx is defined as follows: nσ e = ln( dE/dx m / dE/dx th e ) R dE/dx (1) where dE/dx m and dE/dx th represent measured and theoretical values, respectively, and R dE/dx is the experimental ln(dE/dx) resolution.
TABLE II
. The contributions of systematic uncertainty sources for 39, 62.4 and 200 GeV. | 5,139.4 | 2016-07-26T00:00:00.000 | [
"Physics"
] |
PROCESSING TREE POINT CLOUDS USING GAUSSIAN MIXTURE MODELS
While traditionally used for surveying and photogrammetric fields, laser scanning is increasingly being used for a wider range of more general applications. In addition to the issues typically associated with processing point data, such applications raise a number of new complications, such as the complexity of the scenes scanned, along with the sheer volume of data. Consequently, automated procedures are required for processing, and analysing such data. This paper introduces a method for modelling multi-modal, geometrically complex objects in terrestrial laser scanning point data; specifically, the modelling of trees. The model method comprises a number of geometric features in conjunction with a multi-modal machine learning technique. The model can then be used for contextually dependent region growing through separating the tree into its component part at the point level. Subsequently object analysis can be performed, for example, performing volumetric analysis of a tree by removing points associated with leaves. The workflow for this process is as follows: isolate individual trees within the scanned scene, train a Gaussian mixture model (GMM), separate clusters within the mixture model according to exemplar points determined by the GMM, grow the structure of the tree, and then perform volumetric analysis on the structure.
INTRODUCTION
Laser scanning has been adopted for many traditional surveying and photogrammetric applications.Examples of these applications include: modelling for industrial and engineering applications, deformation monitoring, volumetric analysis, topographic surveys.However, the use of laser scanning is moving beyond such traditional applications to be adopted for an increasing number, and variety of applications, including cultural and heritage recording, archaeology, asset management, and modelling vegetation.Subsequently, in addition to the pre-processing issues typically related to laser scanning, such as registration and calibration, the complexity of the scenes being scanned, the volume of data, and the potentially complex and heterogeneous nature of the objects in the scene all need to be accounted for.This has lead to the need for semi-automated, and automated tools to aid in analysis undertaken in these new applications.To achieve a greater level of automation, a number of techniques used in fields such as signal processing, machine learning and computer vision can be adopted to point cloud data.This paper details an example of one such technique, combining feature extraction and machine learning for the modelling of trees from point clouds captured in a forestry landscape.To determine the model for a tree, a number of feature sets based on Principal Component Analysis (PCA) performed on local neighbourhoods are considered, combined, and used to train a multi-modal modelling technique, a Gaussian Mixture Model (GMM).The use of a mixture model is appropriate for two reasons: first, a multimodal approach is required due to the differing properties of the components of a tree (leaf, branches, and trunk), and second, the model can be used to cluster the tree into its component parts.This second property can be leveraged to cluster the tree data at a point level, which can then be applied to context based region growing, thus enabling the structural analysis of the tree by removing points corresponding to leaves.From this clustering, the points belonging to the main tree structure can be identified.These points can then be further processed to model the structure of a tree.Such a case is presented based on examining horizontal cross sections of the tree and extracting a graphical representation in the form of a tree graph.This can then be used as context for further analysis.An example provided is the examination of the volume of the main tree structure, where the graphical representation is used to segment the tree into parts to allow for simple fitting of cylinders through RANSAC to approximate the carbon content of a tree.
Tree modelling
There is much interest in modelling the properties of trees and forest through the use of laser scanning (van Leeuwen et al., 2011).A large portion of previous work has concentrated on airborne laser scanning (ALS).This is because large areas of forest and vegetation can be captured quickly.Information extracted includes the canopy layers, type and number of trees, and heights (Hyyppä et al., 2006).As resolution has increased and waveform modelling became more common, methods were developed for extracting individual trees.However, this still lacks in point density compared to terrestrial systems.While it could Terrestrial laser scanning, and mobile laser scanning offered an increase resolution in point density.It has been use in conjunction with ALS to model the properties of the trees and match them with those observed from the ALS point data to infer more precise attributes of an entire forrest (Lindberg et al., 2012).The modelling of trees from TLS data can be categorized as either extracting the skeleton of the tree, or partitioning the tree into simple shapes.These shapes normally consist of cylinders and conic sections (Pfeifer et al., 2004;Chaperon and Goulette, 2001).Skeletonisation methods have been predominately based on extracting the tree graph representation from oct-trees (Bucksch et al., 2009), from regularly gridded voxels and rasters (Gorte, 2006), and from extracting the medial axis representation (Su et al., 2011).Meshing techniques can also be applied to to represent the tree and for calculation of the volume and surface area, but benefit from the use of context such as the skeleton of the tree (Xu et al., 2007).
A majority of existing techniques are designed to work on single trees.In forestry applications the data is comprised of multiple trees.Often each tree has to be isolated before applying the various modelling techniques.In airborne applications trees can be isolated through examination distribution of the points in terms of density and height.This delineates the crowns of the trees, and allows individual trees to be isolated (Hyyppä et al., 2006).While the density of the TLS is not consistent, a similar technique can be applied.This is performed by extracting the points with the highest distance from the extracted digital terrain model, and performing the crown segmentation on these points if there is sufficient density (Brolly et al., 2009).Alternatively, a slice can be taken through the data for a given interval above the ground and below the canopy.This is taken high enough above the ground so as not to include shrubs and low level vegetation, and low enough so as to not include the upper level of the canopy.The majority of the remaining points are sampled from the tree trunks.A circle or cylinder detection algorithm can the be applied to find the tree trunks, and isolated the component trees (Brolly et al., 2009).
Feature selection
In most cases, automated processing methods used features that can be categorised as being based on either geometric or spectral information.Geometric information is classed as information derive from the point position and measurement in the 3D coordinate space.Spectral information is primarily derived from the the intensity return of the laser, or from co-registered imagery for either an external or integrated camera.In the case of this paper, the features used for classification will be derived from the geometric information.
With the intensity return, it is based on factors such as signal strength, wavelength, surface reflectance and the range and incident angle.It has been shown that it can be combined with RGB channels to perform spectral classification, similarly to vegetation index used in remote sensing (Lichti, 2005), especially for near-infrared lasers.However, because the return value is dependent on the range and surface incident angle, it is not consistent for surfaces over a scene or when data is collected from multiple setups unless it can be corrected (Kukko et al., 2008).
For colour information, there is often a temporal difference between the capture of the points and the imagery.If the structure being sampled is static, then this often has little effect.For trees however, factors such as wind often changes the position of the upper canopy, making it much more dynamic in nature.As such, there is often discrepancies in the matching of the pixels to the points for the upper layers of the trees, especially for leaves.In addition, for forest scenes, the high presence of shadows (and their movement) can also effect the colour information, making it inconsistent between multiple setups.Finally, the small size of the leaves and details in the upper canopy, and the difference in resolution of the laser beam and the camera pixels can lead to over saturation of the colour information for such points.Consequently, this paper focuses on examining classification using simple derived geometric information on the point neighbourhood.
Geometric information is usually extracted from a local neighbourhood of points to approximate the true geometric properties for a point.This can then be used to derive higher levels of information.This is usually done in three ways; local surface fitting, geometric primitive or prototype fitting, or principal component analysis (PCA).Each method produces similar or complimentary results.
Local surface fitting involves fitting a surface to the point neighbourhood, normally a polynomial and generally based on the local surface orientation at the neighbourhood.The surface is restricted to 2 nd order surface on the assumption that the neighbourhood is small enough to be modelled by a simple surface, thus preventing over fitting due to errors and noise.The surface parameters are then examined to extract measures of curvature, directions of change, and surface types such as inflections, and local maxima and minima (Crosilla et al., 2008).
Fitting local geometric primitives, using methods such as Random Sample Consensus (RANSAC), is most applicable to the analysis of structured information, for instance industrial scenes.The process involves fitting geometric primitive surface types, including planes, cylinders, cones, tori and spheres, to seed neighbourhoods to determine the best descriptor class for the surface.This works best for man-made, regular scenes compared to naturally occurring free-form surfaces.
The third method for extracting geometric information is to use PCA on the local neighbourhood of points.This involves examining the eigenvalue decomposition of the the covariance matrix.The eigenvalues and eigenvectors define the linear combinations of elements that describe ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume II-5/W2, 2013 ISPRS Workshop Laser Scanning 2013, 11 -13 November 2013, Antalya, Turkey the maximum variation of the points distributed throughout a neighbourhood.The underlying surface and structure highly effects the sampling and distribution of points, and this information is represented through the eigenvalues and eigenvectors.Consequently, PCA is suitable for describing and classifying points based on the surrounding local neighbourhood.Information that can be approximated comprises: the surface normal, curvature, texture, and shape, if the point is close to a boundary, edge or corner point, and the local surface orientation and coordinate system.PCA has also been applied to the surface normals to examine underling shape in the form of tensor voting, and the determination of the principal curvature directions.
CLASSIFICATION
The scanner used to capture the information was a Leica C10 scan station.This is a time of flight instrument, with a quoted point precision of 6mm and a range of 135m (Leica Geosystems HDS, 2008).The region of capture was in a national park close to Walpole in Western Australia during spring, containing old growth forests, primarily consisting in this case of giant Red Tingle trees.These trees can range in height to approximately 40m, as is the case in the example presented.Multiple setups were used around the area to capture the point coordinate data.This was to ensure adequate coverage and density.The data was downsampled to a maximum point spacing of 0.02m to reduce the amount of redundant data to process.The goal of classification in this paper is to class the points as either leaves, trunk and branches, or unknown.The type of trees in this region maintain their leaves all year round.As such, to get an accurate representation of the main tree structure, data captured from leaves need to be classified and removed.
The next section outlines the geometric features examined, and how they were applied to the classification process.A Gaussian mixture model (GMM) was used to cluster the points into different models based on features.Tree model were then examined and combined to generate the different classes.
Features from Principal Component Analysis
A variety of different geometric features and their combinations were explored.These geometric features were derived from performing PCA over a local neighbourhood of points within a set radius.The first step is to calculate the covariance matrix of the local neighbourhood of coordinate points, described as: p i is the i th point in the neighbourhood of k points, with p denoting the centroid of the neighbourhood, or the mean coordinate value.The eigenvalues are found by decomposition into the form given by: The simplest information that will be used are the eigenvalues themselves.As mentioned, they represent the distribution of the points in the local neighbourhood, and the variance in the direction of the associated eigenvectors.
The smallest eigenvalue, λ 0 , represents the variance of the points to a planar surface, with the surface normal direction represented by e 0 .For branches, trunks, ground points and other smooth surfaces, it will be small.For regions of high variations, such as neighbourhoods containing leaves, shrubs, it will be high.For non-planar surfaces, this value is affected by the neighbourhood size.As the neighbourhood size increases, so will the variation in the normal direction, and hence the value for λ 0 will also increase.This can be compensated for by dividing the value by the total population variation, to get the curvature approximation κ (Pauly et al., 2002) as denoted by: The other eigenvalues λ 1 and λ 2 denote the variance in the tangential direction of the best fit planar surface.If both are large, then the points are distributed across the neighbourhood in both directions.This is common for surfaces such as the ground and trunk, where the surface is large enough.When the structure is narrower, it causes the distribution to be elongated, resulting in a smaller λ 1 value.Such occupancies include branches and narrow trunks where the neighbourhood radius is less than the radius of the trunk or branch.Twigs and sticks will have small λ 0 and λ 1 values, and most of the neighbourhood will be distributed in the direction of e 2 , denoted by a large λ 2 value.
In addition, ratios and differences between the eigenvalues can also be used (Gumhold et al., 2001).These describe the neighbourhood distribution in terms of the relation between the eigenvalues.Locally planar smooth surfaces such as the tree trunk will have a low difference λ 2 − λ 1 and a high difference in λ 1 − λ 0 .A highly linearly distributed surface/structure such as a branch will have a high difference λ 2 − λ 1 and a low difference in λ 1 − λ 0 .A highly curved or randomly distributed surface such as a neighbourhood containing leaves will have a low difference λ 2 − λ 1 and a low difference between λ 1 − λ 0 .
PCA can also be performed on the normal directions n of the points in a local neighbourhood (Jiang et al., 2005).The covariance matrix is formed as: with the eigenvalue decomposition performed as in equation 2. The eigenvalues are examined in a similar fashion, except instead of encapsulating the distribution of points, it denotes the change in surface.A high λ flat surface or wide trunk).
Neighbourhood size
The size of the neighbourhood will determine the resolution of the features extracted.Smaller neighbourhoods enable smaller structures to be detected, but will suffer from more noise due to the lack of redundancy.For larger neighbourhood sizes, the increase redundancy reduces the effects of noise, but at the cost of losing finer resolution detail.Because of this, the feature values at different radii are used.This allows clustering to take into account different resolutions of the underlying structure.In this case, the neighbourhood size were used for radii of 0.1m, 0.2m, 0.3m, 0.4m and 0.5m.Large surfaces such as trunks exhibit more consistent feature values.Smaller structures such as branches exhibit changes in the feature values, and become somewhat more consistent for higher radius values (assuming a single structure within the neighbourhood).
Where the structure undergoes change or is not consistent, there is little consistency.
Classification and clustering using Gaussian Mixture Modelling
Based on the features described in the previous section, the points were clustered using a Gaussian mixture model (GMM) (Reynolds, 2008), an unsupervised parametric clustering method.A GMM provides a representation of the distribution of points in a given feature space.This is done by assuming that the overall distribution is formed by a mixture of n Gaussian distributions in the feature space.The attributes of these n Gaussian distributions are then determined such that their convolution of the n Gaussian distributions best models the overall distribution of the points in the feature space.Each of these Gaussian models can then represent a local cluster of points in the feature space.
Different feature values were used as an input for the GMM.This included utilising only the eigenvalues at fixed neighbourhood sizes, utilising all the features described in section 3.1 at fixed neighbourhood sizes, and finally combining the features over varying neighbourhood sizes.This is briefly outlined in table 1. Six clusters were chosen to best model the combined distribution of the points and to encapsulate the different properties between the tree trunk, branches, small twigs and sticks, leaves and random noise.
The number of clusters that contain points from the main tree structure, points sampled from leaves, and a mixture is provided in table 1. Example of these clusters using all the features and multiple neighbourhood radii are presented in figure 1 From examination of the resulting clusters, performing GMM on the features over multiple neighbourhood radii appeared to differentiated the different attributes of the tree better than a fixed radius.This is because the fixed radius value heavily influences the feature values, as outlined in section 3.2.However, when examining multiple radius values the feature space comprises of fifteen dimensions, which is a high dimensionality feature space.To reduce this, PCA was applied to the feature space to reduce its dimensionality.Only those combinations that were deemed to contribute significantly were kept, which in this case comprised of six linear combinations of the original feature dimensions.The results from the GMM on the reduced feature space is presented in figure 1.
The clusters were then examined to see which class they contained based on classes for leaves, tree trunk and branches, and unknown.This was done through manual examination, and the examination of the mean value of each cluster in the feature domain and whether they were close together and described the aspects of each class.Where multiple clusters contained the same class, the Gaussian model for
TREE STRUCTURE MODELLING
From the classification of the points into leaves and tree structure, the process of modelling the tree and the various attributes such as volume, surface area, and the structure and shape of the tree is simplified.Techniques such as skeletonisation and surface meshing generally perform better on a tree structure when leaves are removed.In this section, the extraction of two attributes from the classified data is examined; the tree structure and the approximate volume of wood.
Tree Structure generation
The method used to generate the skeleton in this case was based on examining horizontal slices of the tree from ground A similar approach of fitting ellipses has been applied for identify pipes and their paths in industrial settings (Mapurisa and Sithole, 2012).The ellipse method works well for the main trunk and the initial branching sections.This is because there is few occlusions and the points in the cluster represent a good sampling of the elliptical cross section.For higher up in the canopy, the ellipse fitting exhibits poor performance.This is due to the organic nature of the structure, high incidents of occlusion, limited surface sampling and the radii of the branch approaching the noise level in point sampling.The last makes it difficult to differentiate between the local neighbourhood variance being the result of sampling noise or the radius of the branch.Since the path of the structure is of importance, the smallest ellipse that encapsules the points is fitted, and its centre used to predict the path of the tree structure.
The centre of the ellipse/cluster was used as a node in the graph.An edge joining the nodes between consecutive layers was created if the ellipses for the two nodes overlapped.From this graph representation, a cyclic detection algorithm was applied to identify cycles in the graph, and remove them by merging common nodes in the cycle that were on the same layer.The results are presented in figure 3, with figure 3(a) showing the found ellipses, and figure 3(b) the final tree skeleton representation.This skeleton representation provides context for the volume analysis.
The slicing horizontally through the data will not produce the must optimal cross section of the branches.This could be modified that the cross section at a give point is nominally parallel to the surface of the canopy, or it could be defined locally for a cluster based on the direction of the axes of the tree branch for lower levels in the graph.
Volume measurement for carbon capture
The volume of the tree is an important attribute.It is often used for studies on carbon capture and to predict the effect Using this method resulted in an approximate volume of 34.3m 3 .This value can be seen as conservative when compares to the volume calculated for the modelling of the tree using more complex representations such as meshes or cylindrically objects.This can be demonstrated as in figure 4(a), where the tree is represented by manually fitting cylinders and pipe elements to the data, and results in a volume of 67.174m 3 .The previous allometric method only encapsulated approximately 49.1 percent of this volume.However, this manual procedure is quite labour intensive.
Consequently, a more automated approach was applied using the extracted skeleton.The points were segmented into individual components based on the paths from the tree graph between branching nodes.For each component a cylinder fitting routine was applied to model that section of the tree by cylinders.The results of this approach is illustrated in figure 4(b).The volume of the combined automatically fitted cylinders was determined to be 74.457m 3 .
ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume II-5/W2, 2013 ISPRS Workshop Laser Scanning 2013, 11 -13 November 2013, Antalya, Turkey Care must be taken when taken into account the cylinders fitted to segments in the upper layers.In these cases the radii of these smaller cylinders are close to the point spacing and the error in their sampling.Because of this, any instance where the cylinder radius was under 5cm were not included.
CONCLUSION AND FUTURE WORK
This paper contains a methodology for classifying and processing point cloud data from trees.The techniques were applied to data captured from a forest of Red Tingle trees in the Southwestern part of Australia using multiple setups of a Leica C10 scan station.The classification was performed using Gaussian mixture model as an unsupervised clustering method to separate the leaves from the rest of the tree structure.Several features were examined, with the eigenvalues calculated from multi-resolution local neighbourhoods producing the best results.A skeletonisation method based on examining the data at different vertical intervals was applied to the points classified as belonging to the tree structure.This was to provide context for fitting cylinder sections to the point cloud to calculate the volume.It was demonstrated that a significant amount of the volume was not being capture using traditional allometric approaches.Future work will concentrate on extending the modelling of multiple trees in a forest scene consisting of varying types.This will also include the examination of changes in volume and structure over time.
change in curvature in two directions (such as a branch intersection or regions containing leaves), a high λ (n) 2 and a low λ(n) 1 denotes a change in only one direction (such as a branch or trunk), and a low λ no change in surface curvature locally (such as for a ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume II-5/W2, 2013 ISPRS Workshop Laser Scanning 2013, 11 -13 November 2013, Antalya, Turkey
Figure 2 :
Figure 2: merged classes representing (a) leaves and (b) the trunk and branches from the Gaussian mixture model
Figure 1 :
Figure 1: Merged clusters from GMM representing (a) main trunk, (b) branches, (c) leaves and (d) mixed points from trunk and brancheslevel up through the canopy iteratively.At each stage, the points in each horizontal slice were clustered and an ellipse was found for each cluster.The ellipse was based on either best fit ellipse if the cluster was significant, or on the minimally bounding ellipse if the fitting routine failed or the cluster was too small.
Figure 3 :
Figure 3: (a) tree represented by ellipses at vertical intervals, and (b) the tree represented by the extracted skeleton of ecosystems on climate change.A simple allometric for calculating the volume of trees is described in Dean and Wardell-Johnson (2010) and approximates the volume of the tree based on the diameter of the tree at breast height and the height.
Figure 4: (a) manual cylinder fitting and (b) cylinder fitting by region growing on the segments ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume II-5/W2, 2013 ISPRS Workshop Laser Scanning 2013, 11 -13 November 2013, Antalya, Turkey identify and separate out different trees, it was difficult to adequately model various attributes such as structure, surface area and volume.
Table 1 :
Different combinations of features and neighbourhood sizes tested for with GMM. | 5,895.4 | 2013-10-16T00:00:00.000 | [
"Computer Science",
"Engineering",
"Environmental Science"
] |
Current Insights and Latest Updates in Sperm Motility and Associated Applications in Assisted Reproduction
Spermatozoon is a motile cell with a special ability to travel through the woman’s reproductive tract and fertilize an oocyte. To reach and penetrate the oocyte, spermatozoa should possess progressive motility. Therefore, motility is an important parameter during both natural and assisted conception. The global trend of progressive reduction in the number and motility of healthy spermatozoa in the ejaculate is associated with increased risk of infertility. Therefore, developing approaches for maintaining or enhancing human sperm motility has been an important area of investigation. In this review we discuss the physiology of sperm, molecular pathways regulating sperm motility, risk factors affecting sperm motility, and the role of sperm motility in fertility outcomes. In addition, we discuss various pharmacological agents and biomolecules that can enhance sperm motility in vitro and in vivo conditions to improve assisted reproductive technology (ART) outcomes. This article opens dialogs to help toxicologists, clinicians, andrologists, and embryologists in understanding the mechanism of factors influencing sperm motility and various management strategies to improve treatment outcomes.
Introduction
The human spermatozoon is an extremely specialized motile cell with a highly condensed nucleus and scanty cytoplasm. Even though transcriptionally and translationally inactive, it has precise metabolic pathways which are fundamental for fertilization to take place. After its production in the seminiferous tubules, the sperm undergoes maturation in the epididymis and then travels through the female reproductive tract to facilitate the transfer of paternal genome into the oocyte. Spermatozoa, which are deposited in the vagina during coitus, must reach the site of fertilization-namely ampullary site of the uterine tube (also known as fallopian tube). To reach the ampulla, in addition to the self-propelling properties of spermatozoa (forward progressive motility), the female reproductive tract assists in this process, which is regulated by the female reproductive hormones. Therefore, among all the semen parameters, sperm motility is considered to be a strong predictive marker of male fertility potential [1]. Based on studies with excised human uterus and tubes, it is estimated that human spermatozoa travel an average distance of approximately 19 cm and undergoes several physiological and biochemical changes before meeting an oocyte [2].
Based on the pattern of movement and velocity, spermatozoa can be graded as progressively motile, non-progressive (exhibiting only lateral head displacement), and immotile. As per the recent guidelines of the World Health Organization [3], the reference values for human semen characteristics are specified in Table 1. Men with ejaculates showing less than 40% total motile or 32% progressively motile spermatozoa are considered to be asthenozoospermic, a condition characterized by disorders in sperm motility [4]. Asthenozoospermia is considered as one of the predominant contributing factors for male infertility [5]. The purpose of this review is to recapitulate the physiology and signaling cascade of sperm motility, common disorders and factors which affect motility, importance of motility in assisted reproductive technology (ART), and possible approaches to improve motility in spermatozoa.
Structure of Human Spermatozoa
Sperm motility is controlled by its complex, structural and molecular signaling mechanisms. Broadly, spermatozoa are divided into three main parts-the head, which contains nuclear material; the tail or flagellum, which contains the machinery needed to propel the spermatozoa forward; and neck or connecting piece, which connects the head to flagellum (Fig. 1). The flagellum can be further segregated into the midpiece (containing cellular organelles like mitochondria), principal piece, and end piece [6].
The pivotal part of the sperm flagellum is the axoneme, which originates at the connecting piece and terminates at the end piece. It is composed of 9 microtubule doublets and a central pair, commonly termed the 9+2 arrangement (Fig. 1). These 9 microtubules are controlled by nexin links that connect to the central pair by radial spokes. Inner and outer axonemal dynein arms, which are key to acquiring motility in sperm, project from the microtubule doublets. The dynein arms help in sliding the microtubule doublets by consuming adenosine triphosphate (ATP) [7].
In mammalian sperm, the axoneme is covered by accessory structures, such as the outer dense fibers (ODFs), fibrous sheath (FS), and mitochondrial sheath (MS). In the midpiece, the axoneme is surrounded by ODFs and MS. In humans, the MS is spirally wound around the axoneme, which provides energy in the form of ATP required for sperm motility. In the principal piece, the axoneme is surrounded by ODFs and FS. The ODFs are petal-shaped structures that lie directly above the axoneme microtubule doublets which progressively decrease in diameter from base to tip of the principal piece. It is the principal piece that renders shape and flexibility to the tail. In addition to this, the principal piece provides room for signaling proteins that regulate motility and those involved in capacitation and hyperactivation. No accessory structures between the axoneme and plasma membrane are present in the end piece [8].
Risk Factors Affecting Sperm Motility
Human ejaculate is highly heterogenous with respect to motility, morphology, and other functional characteristics of spermatozoa. Globally, about 20 to 30% of infertility cases are due to sperm-related problems in men of reproductive age [9]. There is strong evidence to suggest that lifestyle and other environmental factors contribute considerably to semen disorders leading to male infertility (Fig. 2). Even though these factors affect different semen parameters, in the context of this article, we focus mainly on the important factors that are known to affect sperm motility.
Varicocele
Varicocele is a common chronic pathology in men, caused by abnormal dilatation of veins in the scrotum that leads to impairment of regular semen parameters. A systematic review and meta-analysis revealed that varicocele is strongly correlated with poor semen profile [10]. A compromised testicular microenvironment due to elevated levels of highly reactive oxidants and reduced levels of antioxidant is commonly observed in this condition [11]. Plenty of evidence in the literature suggests that varicocele is associated with poor sperm motility [10][11][12]. A high percentage of inactive mitochondria [13], abnormal expression of mitochondrial proteins [14], decrease in ATP levels [13], and altered calcium signaling cascade [15] in spermatozoa of men with varicocele has been reported in the literature. Significant decrease in kinematic parameters, such as curvilinear velocity (VCL), straight line velocity (STR), and amplitude of lateral head displacement (ALH), was observed in men with varicocele [16]. However, it remains inconclusive whether the surgical procedures like varicocelectomy improves the sperm motility [17][18][19]. Further, efforts to improve the semen characteristics using [20,21].
Genetic Abnormalities Associated with Sperm Motility Disorders
Sperm motility depends upon the flagellar structure and function. Several reports indicate the association of poor motility with genetic defects [22][23][24]. The most common conditions are primary ciliary dyskinesia (PCD) and Kartagener syndrome. These are autosomal recessive disorders with an incidence of 1 in 20,000 and 1 in 30,000, respectively [24,25]. In these conditions, spermatozoa lack motility due to defective dynein arms, with half of the cases having defects in the formation of the central pair complex and radial spokes. Lack of sperm motility is observed in 90% of PCD disease conditions and involves the outer and inner dynein arms or both of the PCD-associated genetic mutations of the dynein genes [23]. Dysplasia of FS is one of the structural flagellar abnormalities observed in spermatozoa, characterized by hyperplasia and hypertrophy of the FS. In this condition, the midpiece is invaded by the hypertrophied FS, and the annulus is predominantly not formed [22]. Dysplasia of FS was shown to have a familial predisposition in 20% of cases. However, to date, there is no consensus about the genetic background of dysplasia of FS [26].
Mitochondrial DNA Mutations and Sperm Motility
Mitochondria are an important source of energy required for sperm motility. Usually, abnormalities of the MS or mitochondrial membrane integrity are associated with sperm motility disorders [27]. Deletions or mutations in mitochondrial DNA are correlated with elevated oxidative stress, sperm immotility, and male infertility [28,29]. In addition, researchers have identified polymorphic mutations in genes encoding the oxidative phosphorylation complexes and transfer RNA of mitochondrial DNA associated with low sperm motility [30]. A missense mutation (C119941) in the mitochondrial ND4 (NADH dehydrogenase 4) gene has also been reported as the reason for low sperm motility [31]. Further studies are necessary to unravel the genetic association between sperm motility using advanced techniques like whole exome sequencing or appropriate animal models.
Antisperm Antibodies
It has been postulated that antisperm antibodies (ASAs), an autoimmune condition, can significantly affect male fertility Fig. 1 Structure of mature human spermatozoon and cross section of flagella at various segments of the tail due to poor sperm motility [32]. Roughly, 6 to 11% of male patients with infertility are known to have ASAs in their seminal plasma [33]. ASAs are known to hinder progressive motility and block sperm-egg interaction. This autoimmune condition can either be spontaneous or idiopathic and is mostly found in homosexual men and patients with varicocele, testicular trauma, mumps, orchitis, congenital absence of the vas, and spinal cord injury, as well as in those who have undergone vasectomy [34]. The mechanism by which ASAs cause reduced sperm motility is mainly due to the entangling of the spermatozoa at specific regions (head to head, head to midpiece, head to tail, or non-specific binding), due to the binding of immunoglobulins to sperm surfaces. Several methods to reduce ASA in the semen have been explored. Corticosteroid treatment [35], proteolytic enzyme treatment [36], use of immunobeads [37], and immunomagnetic sperm separation methods [38] have shown significant beneficial role. However, in assisted conception and assisted reproduction, the sperm-washing process may be sufficient to get rid of ASAs [39,40].
Sexual Abstinence
Human spermatozoa gain motility potential during their epididymal transit, which is around 2-11 days [41]. During their transport and storage in epididymis, spermatozoa undergo series of physiological and biochemical changes to acquire fertilizing ability. Considering these facts, WHO has recommended 2-7 days as an ideal abstinence time for assessing the semen parameters [3]. However, it is important to note that prolonged sexual abstinence can lead to accumulation of spermatozoa in epididymis, which has limited ability to provide a conducive environment to spermatozoa for long time [42]. Elevated oxidative stress and poor antioxidant defense in the epididymal microenvironment may compromise the sperm parameters under such circumstances. Studies suggest that spermatozoa are highly vulnerable to oxidative stress due to the elevated level of polyunsaturated fatty acids (PUFA) in the spermatozoa membrane [43,44]. A study undertaken by Comar et al. [42] in 2458 men reported a significant negative effect of abstinence on sperm viability, motility, and mitochondrial membrane potential. Considering the poor sperm Fig. 2 Factors which affect human sperm motility quality with prolonged abstinence, discrepancies between researchers over the ideal abstinence for therapeutic insemination procedures continue. Shen et al. [45] reported that ejaculates collected from men with short abstinence (1-3 h) period compared to 3-7 days of abstinence showed increased sperm concentration and higher percentage of motile spermatozoa. Better sperm velocity, progressiveness, and hyperactivation were observed when the abstinence period was 2 h compared to 4-7 days [46]. This was true for oligozoospermic men as well. Dupesh et al. [47] reported that < 24 h abstinence in oligozoopermic men had highest percentage of progressively motile sperm and normal morphology. However, these studies reported lower sperm count and volume. Contrary to this, Elzanaty et al. [48] reported higher percentage of motile spermatozoa at 4-5 days as compared to 2-3 and 6-7 days of abstinence.
Lifestyle and Demographic Factors Related to Sperm Motility
Environmental and lifestyle factors have shown to affect semen quality. Most of the studies have indicated strong correlation between alcohol intake and decreased sperm motility [49][50][51], whereas few studies have shown no effect [52,53]. Studies have shown that abstinence from alcohol consumption can reverse the adverse effect of alcohol on motility [54,55]. Tobacco inhalation is another common lifestyle factor known to contribute to compromised semen parameters. Inhalation of large number of toxins from tobacco smoking can affect spermatogenesis and semen quality, including motility. Even moderate smoking was shown to have significant adverse effects on progressive motility [56]. Tobacco smoke contains nicotine as the main hazardous chemical along with traces of tar, carbon monoxide, polycyclic aromatic hydrocarbons, and heavy metals [57]. However, an in vitro experiment demonstrated that nicotine and cotinine are not responsible for the decrease in motility. Hence, other components, such as carbon monoxide, hydrogen cyanide, alcohols, ammonia, volatile hydrocarbons, aldehydes, and ketones may result in decreased sperm motility [58]. The decrease in motility could also be due to the epididymal dysfunction in smokers or elevated oxidative stress in the testicular environment [57]. High malondialdehyde (MDA) and protein carbonyl levels and low levels of glutathione S-transferase (GST) and reduced glutathione (GSH) were reported in seminal plasma and spermatozoa of smokers [59]. Other factors such as high body mass index [60], meat intake frequency [61], intense physical activity [62], prolonged cell phone [63] and laptop usage [64], and lack of sleep [65] are considered as potential risk factors for decrease in sperm motility. Therefore, lifestyle modification such as consuming nutritious diet, regular exercise, and withdrawal from substance abuse, smoking, and alcohol consumption can improve semen parameters considerably [54,66].
Drugs Affecting Sperm Motility
There is sufficient evidence in the literature indicating the deleterious effect of chemotherapeutic drugs on spermatogenesis in cancer patients [67][68][69]. While the negative effect of chemotherapeutic drugs on sperm production is well documented, their effect on sperm motility remains unclear. Animal studies have demonstrated that anticancer drugs such as vincristine, cisplatin, and cyclophosphamide impair epididymal function, thereby affecting sperm motility [70]. Literature indicates that other commonly used medications have considerable adverse effects on sperm motility. In vitro studies have shown that psychotropic drugs (imipramine hydrochloride, desmethylimipramine, chlorpromazine, trifluoperazine, and nortriptyline hydrochloride) act as potent inhibitors of sperm motility [71]. Antiepileptic drugs (phenytoin, carbamazepine, and valproate) had adverse effects on motility both in vivo and in vitro [72]. Consumption of high amounts of acetaminophen (commonly known as paracetamol), an antipyretic, has also shown to decrease sperm motility [73]. Lansoprazole, a proton pump inhibitor used to treat gastric illness, has shown to reduce the motility due to its calcium quenching effect or decreased Na + -K + -ATPase activity [74]. Moderate consumption of aspirin, a non-steroidal anti-inflammatory drug (NSAID), is known to demonstrate similar effects in young men [75]. In addition, regular consumption of recreational drugs, such as marijuana, is shown to affect spermatogenesis as well as sperm motility [76]. However, there are no clear reports in the literature to suggest whether the effects of these drugs on motility are reversible or irreversible.
Radiation
Radioactivity (natural or by human activity) is an inevitable element surrounding humans. Exposure to radioactivity may be primarily due to occupational environments (mine fields, medical setups, flights at altitude of above 10,000 m) or patients who receive radiation as a part of diagnostic or therapeutic procedure. Some geographic locations may naturally have high radioactivity in their surroundings in the form of gases, such as radon or radionuclides in rocks [77]. Testes are considered extremely sensitive to radiation-induced damage. Earlier studies have shown that exposure to radiation can drastically affect motility and morphology and cause intense vacuolization in human spermatozoa [77,78]. Studies conducted in people exposed to radiations from the atomic bombings of Hiroshima and Nagasaki [79] and the Chernobyl incident [80] have revealed poor motility in the spermatozoa of ejaculates from these men. Even though the mechanism behind the radiation-induced defective sperm motility is not clearly elucidated yet, significant reduction in the expression of cation channel of sperm associated1 (CatSper1) and cation channel of sperm associated2 (CatSper2) genes [81], and other sperm motility-associated proteins were observed in mice [82]. Kesari et al. [83] reported that as low as 850 MHz of non-ionizing radiation impaired sperm motility in human. The decrease in motility and its recovery from radiation-induced assault is dose-dependent. Exposure to a threshold of 0.1 Gy of ionizing radiation caused significant decrease in sperm parameters, which was reversible after 9-18 months. However, exposure to > 3 Gy caused permanent infertility [84]. Considering the extreme sensitivity of the testicular tissue to radiation-induced damages, it is a common practice to use lead shields to minimize the exposure to testes during radiotherapy.
Heat Exposure
Scrotal temperature is 2-5°C lower than the core body temperature in mammals, which is essential for normal spermatogenesis to take place. It is suggested that high heat exposure may perturb regulation of intrascrotal temperature and increase intratesticular temperature, both of which have drastic effects on semen quality [85]. A study performed on mice demonstrated that heat exposure deleteriously affected sperm motility and morphology and resulted in delayed conception [86]. Gong et al. [87] demonstrated that heat stress decreases sperm motility by downregulating mitochondrial activity and decreasing ATP levels. Transient scrotal hyperthermia was shown to cause reversible reduction in proteins required for spermatogenesis, gamete interaction, and motility [88]. Decreased antioxidant level, mitochondrial degeneration, and alteration in protein expression pattern have shown to be associated with poor motility [89]. Heat stress is also known to cause dephosphorylation of glycogen synthase kinase-3α (GSK), a negative regulator of sperm motility and interference in mitochondrial remodeling. Therefore, men exposed to higher temperatures due to their occupation (bakers, foundry workers, welders) [90] and other factors which increase the intratesticular temperature such as sedentary work habits [91], wearing tight under garments [92], and frequent sauna use [93] may have an increased risk of defective sperm motility.
Environmental Factors and Sperm Motility
Due to a rapid increase in industrialization and urbanization, our environment is highly polluted by various natural and synthetic chemical agents generated by industrial or agricultural activities. Environmental contaminants, especially those with endocrine-disrupting function, are suspected to interfere with normal spermatogenesis and decrease the semen quality and human fertility. The published data available in the literature show that various environmental chemicals, such as pesticides, polychlorinated biphenyls [94], bisphenol A [95], glycol ethers [96], perfluoronated compounds [97], dioxins and dioxin-like compounds [98], phthalates [94], heavy metals [99], dichloro-diphenyl-trichloroethane [100], and plasticizers [101] have adverse effects on sperm motility.
Psychological Stress
Psychological stress is an "emotional experience" accompanied by several biochemical, physiological, and behavioral changes or responses. During the events of stress, corticosterone elevation suppresses testosterone and inhibin levels, thereby causing alteration in testicular microenvironment [102]. Studies have shown a negative effect of psychological stress with sperm progressive motility [103]. Stress can affect male fertility through different mechanisms, mostly through altering testosterone secretion and through disruption of the blood-testis barrier [104]. Inhibition of the hypothalamicpituitary-gonadal axis via the inhibitory effect of gonadotropin-inhibitory hormone [105] and activation of the hypothalamic-pituitary-adrenal axis by producing an inhibitory effect on hypothalamic-pituitary-gonadal and Leydig cells, consequently impairs spermatogenesis [106]. The effect of psychological stress on reduced sperm motility could also be due to increased nitric oxide (NO) level. Excessive NO generated during psychological stress can produce peroxinitrite radicals (ONOO − ) that causes oxidative damage and mitochondrial dysfunction, thereby causing reduced motility [107]. Nevertheless, it is encouraging to note that the impact of psychological stress on sperm motility or quality seems to be modifiable and reversible [104].
Infections and Sperm Motility
Microbial infection is also known to affect reproductive outcome. Sexually transmitted diseases caused by bacterial, fungal, and viral pathogens can significantly decrease semen quality and can be a contributing factor for male infertility [108]. Presence of small amount of these pathogens is shown to decrease sperm motility. Experimental evidence suggest that bacteriospermia decreases sperm motility significantly due to bacterial infections, leucocyte accumulation (leukocytospermia), antibody buildup, inflammation and oxidative stress [109]. Chlamydia trachomatis [110] and Ureaplasma sp. [111] infections are reported to affect sperm motility. Similarly, Burrello et al. [112] reported that infections caused by Candida albicans, a pathogenic yeast, decreased sperm motility significantly by reducing mitochondrial membrane potential and increasing apoptosis of human spermatozoa in vitro. Pathogens like hepatitis B virus [113], human papillomaviruses [114], herpes simplex viruses [115], and adeno-associated virus [116] were associated with significant reduction in sperm parameters, especially progressive motility. A recent report suggests that infection with SARS-Cov-2 coronavirus in men can lead to low sperm count and poor motility for 90 days following infection [117]. It is not explicit if sperm motility improves following any specific (antibacterial/antiviral/ antifungal) therapy in men with these infections. However, Garolla et al. [118] observed improvement in progressive motility in HPV-infected men after HPV adjuvant vaccination.
Signaling Mechanisms Involved in Sperm Motility
Motility is a complex physiological property of spermatozoa, which is dependent upon many extrinsic and intrinsic factors (Fig. 3). Several complex signaling pathways contribute to sperm motility such as cyclic adenosine monophosphate (cAMP)/protein kinase A and phosphoinositide 3-kinase signaling, which are mediated through calcium ion (Ca 2+ ), bicarbonate ion (HCO 3 − ), or both [8]. The less investigated DAG-MAPK (ERK1/2) [Diacylglycerol-mitogen activated protein kinase (extracellular signal regulated kinase 1/ 2)] pathway is also involved in sperm motility signaling. This pathway is regulated at the membrane level by ion channels, such as CatSper and voltage-dependent calcium channel, and inhibited by Ca 2+ -ATPase, which promotes the Ca 2+ influx process. Moreover, HCO 3 − , through the sodium (Na + )-bicarbonate (Na + -HCO 3 − ) co-transporters, enhances the activation of downstream soluble adenylate cyclase (sAC) along with calcium, which promotes motility through elevation of cAMP (Fig. 3). Intracellular sperm pH regulation is also governed by hydrogen ion (H + ) efflux and other ions, thereby activating the opening of CatSper and increasing the intracellular Ca 2+ reservoir [119]. Hence, sperm motility is interconnected and associated with different physiological changes that are solely dependent upon the intracellular signaling pathways and post-translational modifications.
Ca 2+ as a First Messenger in Achieving Sperm Motility
Ca 2+ is a fundamental messenger, which regulates capacitation, acrosome reaction, and hyperactivated motility. In human sperm, calcium influx is modulated by various mechanisms, such as increase in membrane permeability by loss of cholesterol from the sperm membrane, depolarization, inhibition of Ca 2+ -ATPase pump, activation of voltage-dependent calcium channels, and CatSper [8]. Compared to 100 to 200 nM resting calcium concentration that is needed for normal motility, an increase in the intracellular calcium level is needed for the spermatozoa to attain hyperactivated motility in the female reproductive tract [120]. The primary role of calcium is to activate sAC, which in turn further activates downstream signaling molecules. Inhibition of calcium influx by blocking Ca 2+ channels has been demonstrated to cause male subfertility by preventing acrosomal exocytosis in humans [121].
Role of cAMP
Cellular level of cAMP, a second messenger, is controlled by adenylate cyclases, which catalyze the conversion of ATP to cAMP with the release of inorganic phosphate [122]. Reduced level of cAMP is associated with reduced motility and infertility. G-protein-activated transmembrane adenylate cyclase and sAC are the two types of mammalian adenylate cyclases. Even though both types of adenyl cyclases are present in spermatozoa, motility appears to be solely regulated by sAC. sAC is activated by Ca 2+ and bicarbonate directly and acts as a sensor for ATP, Ca 2+ , and HCO 3 − /carbon dioxide/pH at different intracellular sites. Importantly, sAC undertakes the task of converting ATP to cAMP, a secondary messenger that activates the protein kinase A pathway.
Role of HCO 3 − in Regulation of Sperm Motility
During the journey of sperm in the female reproductive tract, it is the bicarbonate ion that creates an alkaline environment for spermatozoa to achieve hyperactivated motility. Bicarbonate is transported into sperm through the sodium-bicarbonate cotransporters which is essential for capacitation and also a direct activator of sAC [123]. Upon its entry into the cell, it increases the intracellular pH and causes hyperpolarization of the membrane. Apart from the voltage-gated proton channel and Na + /H + exchanger, transport of bicarbonate into the sperm contributes significantly to the regulation of pH [119]. Hence, Ca 2+ and HCO 3 − concentrations act through the sAC/cAMP/ protein kinase A pathway to achieve hyperactivated motility. Levels of bicarbonate lower than the physiologic level in the ejaculate have also shown to cause reduction in sperm motility [124]. Sperm functional changes, such as capacitation and acrosome reaction, are imperative for successful fertilization, which is also regulated by HCO 3 − . As early as 1 min after bicarbonate exposure to spermatozoa, a peak in cAMP level can be observed, which rapidly evokes frequent flagellar beats and decreases beat asymmetry [125].
Protein Tyrosine Phosphorylation
An increase in protein tyrosine phosphorylation is a hallmark of capacitation and hyperactivated motility in human spermatozoa. Most pathways studied in sperm motility belong to a family of protein tyrosine's that get inevitably phosphorylated during the event of hyperactivation. Phosphorylation of both serine/threonine and tyrosine proteins in human spermatozoa has been reported during capacitation [126]. Among these, the tyrosine kinases Src fibrous growth factor receptor 1 (FGFR) and Abelson murine leukemia (ABL1) are known to be well associated with tyrosine phosphorylation in mammalian sperm. A-kinase-anchoring proteins (AKAP4), calcium binding tyrosine phosphorylation regulated proteins (CABYR), heat shock protein 90 (HSP90), and 95 kDa FS proteins that are present in the sperm flagellum have been defined as targets of tyrosine kinases [8].
Sperm Motility and Infertility
Human ejaculate is highly heterogenous with respect to the types of cells present, motility pattern, and the quality of spermatozoa. Presence of immotile sperm in the ejaculate is not unprecedented and can arise because of testicular and/or epididymal dysfunction due to various risk factors, as discussed earlier. Clinically, presence of motile and morphologically normal sperm provides evidence for fertility potential among infertile patients. Based on the results from a study conducted on 4500 normozoospermic men from 14 different countries, the baseline for normal sperm characteristics was established by World Health Organization [127]. Like oligozoospermia [128], asthenozoospermia [29,126] is also strongly correlated with infertility which suggests that motility is an equally important semen parameter to achieve pregnancy.
Importance of Motility in Planning Therapeutic Insemination Procedures
To decide upon effective treatment for correcting infertility, infertility specialists depend upon semen parameters of male partner. Motility is one such parameter which plays an important role in deciding the appropriate therapeutic insemination option for the infertile couple. In general, to recommend intrauterine insemination (IUI), one should be able to extract at least 5 million motile sperm from the ejaculate; in vitro fertilization (IVF) is recommended when 2 to 5 million motile sperm can be extracted; and intracytoplasmic sperm injection (ICSI) is recommended when samples yield less than 2 million motile spermatozoa. Kinematic parameters, such as straight line velocity (VSL) and curvilinear velocity (VCL), have prognostic value in predicting the fertilization potential of spermatozoa [129,130]. If the spermatozoa have a VCL greater than 65 μm/s and straight line velocity (STR) greater than 40 μm/s, IVF should be considered. If the velocities are lower than these values, to improve the fertilization rate ICSI is recommended, even if there is adequate percentage of motile spermatozoa to perform IVF [131].
Motility and IUI Pregnancy
Sperm motility as a predictor of pregnancy in patients undergoing IUI has been a topic of discussion. Several studies have confirmed that the total progressively motile sperm count in fresh ejaculate does not have any prognostic value in predicting pregnancy outcome in IUI cycles [132][133][134]. However, the number of inseminated progressively motile spermatozoa (NIPMS) was considered a better predictive marker [135]. To achieve the best pregnancy rate in IUI, at least 5 million motile spermatozoa are thought to be essential [136]. A systematic review conducted by Ombelet et al. [137] proposed that IUI can still be tried with an NIPMS of more than 1 million before directing the patient to IVF. However, the pregnancy rate in such circumstances is expected to be low. In a retrospective study comprised of 1166 couples undergoing IUI cycles, Lemmens et al. [138] found that pregnancy probability significantly decreased when the NIPMS was less than 1 million. In the case of insemination with cryopreserved semen samples, total number of motile sperm less than 20 million significantly decreases pregnancy rate [139], possibly due to the poor functional competence of frozenthawed spermatozoa.
Motility and IVF Pregnancy
It has been established that spermatozoa having at least 30% motility and 15% progressive motility are required to perform IVF [140]. Sperm motility is known to have a strong correlation with IVF success and pregnancy outcome [141]. Superior sperm kinematic parameters are also considered to improve IVF outcome. The percentage of motile spermatozoa with an average path velocity (VAP) between 10 and 20 m/s were known to significantly increase success rates during IVF [142]. Donnelly et al. [141] reported that values for VAP, VSL, and VCL were significantly higher in samples that produced > 50% fertilization, indicating positive correlation between progressive motility and fertilization outcome. Contrary to these reports, Moghadam et al. [143] reported that motility did not enhance fertilization rate or improve pregnancy outcome through IVF. Further, with the advent of ICSI, the use of conventional IVF practice has drastically reduced [144].
Motility and ICSI
Motility is an important parameter in ICSI, as it helps the embryologist in picking a viable spermatozoon for microinjection, especially in case of absolute asthenozoospermia or if the spermatozoa are retrieved by testicular sperm aspiration. Apart from poor fertilization due to injection of non-viable spermatozoa into the oocyte, lack of motility in the sample may have an indirect negative effect on the fertilization outcome due to the delay in completion of microinjection procedure. Identifying a suitable viable spermatozoon is challenging which may potentially cause delay in completion of the ICSI procedure. Bartolacci et al. [145] in a recent retrospective study of 1266 ICSI cycles reported that low sperm motility and concentration compromise fertilization and blastocyst rates but have no impact on the implantation potential of the obtained blastocysts or rate of top quality blastocyst formation. These results are consistent with a study conducted by Mazzilli et al. [146] that included 1219 couples undergoing ICSI cycles with preimplantation aneuploidy tests. It was proposed that poor sperm motility could lower fertilization rates and impair the developmental competence of early embryos but had no effect on pregnancy rate or euploidy of the obtained blastocysts, whereas Miller and Smith [147] reported that defective motility is not linked to poor fertilizing ability in ICSI. It is instead related to developmental arrest at the cleavage stage (day 3 embryos) or decreased rate of blastocyst formation. Sperm motility was also shown to be positively associated with the quality of the sperm nucleus [148], thus showing an added benefit to selecting the most motile sperm.
Improvement in Sperm Motility In Vivo
Efforts to ameliorate testicular sperm output or semen quality have been explored in the past with various approaches, however, with minimum success. Oral supplementation of synthetic drugs, vitamins, trace elements, and other natural compounds have been used historically for enhancing sperm motility in men (Table 2). Among these, the most widely used approaches are based on mitigating the oxidative stress in the testicular microenvironment using antioxidants. Few studies have shown the beneficial effects of oral supplementation of antioxidants or trace elements in boosting sperm motility in infertile men. Antioxidants such as vitamin E [149], coenzyme Q 10 [150], L-carnitine [151], vitamin C [152], and lycopene [153], alone or in combination with trace elements like selenium [154] or zinc [155], have demonstrated improvement in sperm motility after oral administration. However, there are contradictory reports as well [156,157]. In a recent article, Tsounapi et al. [158] reported significant improvement in sperm motility by using avanafil or combination of avanafil plus Profetil (mixture of micronutrients-L-carnitine, L-arginine, coenzyme Q 10 , vitamin E, zinc, folic acid, glutathione, and selenium). Pharmacological agents such as pentoxifylline [159] and avanafil [158], which are inhibitors of phosphodiesterase (PDE), and clomiphene citrate [160], an antiestrogenic molecule that increases endogenous serum follicle-stimulating hormone (FSH), luteinizing hormone (LH), and testosterone, are proven to enhance sperm motility in vivo.
Natural compounds and crude plant extracts (individually or as multiherbal formulations) have also been tried extensively with impressive improvement in motility. In a triple blinded randomized clinical trial conducted on 100 idiopathic infertile men, Azgomi et al. [161] reported that extracts from Withania somnifera root improved sperm motility by 57%, similar to that of pentoxifylline. Even though there are not many studies on motility enhancement in human with plant extracts, several animal studies suggest the potential use of extracts in improving sperm motility. Nayak et al. [162,163] have shown that ethanolic extract of Moringa oleifera leaves improves sperm motility in mice treated with cyclophosphamide. Similarly, other plant extracts like Ruta chalepensis, Croton zambesicus, Shengjing (a Chinese formula of plant extracts), Panax ginseng, Nigella sativa oil, Phoenix dactylifera, Punica granatum juice, Asparagus recemosus, Tribulus terrestris, Mucuna pruriens, and Lepidium meyenii are reported to increase sperm motility in animal models and humans [164]. Agrawal et al. [165] reported the use of Speman (The Himalaya Drug Company), a multiherbal formulation, which increased sperm motility in men with oiligozoospermia.
Improvement in Sperm Motility In Vitro
Unlike other semen parameters, sperm motility is accessible to modulation under in vitro conditions, which serves as an advantage, especially for ART. A wide variety of compounds have been screened for motility enhancement in vitro (Table 3), among which the most popular agents are PDE inhibitors. Compounds like 8-methoxy isobutyl methyl xanthine (8-MeO-IBMX), rolipram, RS-25344, sildenafil, tadalafil, dipyridamole, isobutyl methyl xanthine (IBMX), ibudilast, tofisopam, etazolate hydrochloride, and papaverine were shown to increase sperm motility [166,167]. Among all PDE inhibitors tested so far, caffeine and pentoxifylline are the two nonspecific PDE inhibitors that have been used most frequently as motility stimulants for human spermatozoa [166]. But, since caffeine and pentoxifylline are known to induce premature acrosome reaction, their clinical use as sperm motility enhancers has been limited [168]. Tardif et al. [167] screened 43 commercially available compounds with reported PDE inhibitor activity, among which 6 compounds (dipyridamole, ibudilast, tofisopam, etazolate hydrochloride, papaverine, and 8-MeO-IBMX) were able to significantly increase the percentage of total and progressive motility in human spermatozoa.
Apart from PDE inhibitors, treatment of human sperm with cAMP analogues, such as dibutyryl cAMP [169], adenosine, 2-deoxyadenosine [170], or activator of adenylate cyclase enzyme, such as forskolin [171], have shown a significant increase in total motility for a short duration. However, no significant difference was observed when spermatozoa were incubated over longer periods in vitro with dibutyryl cAMP or forskolin. Aitken et al. [171] reported that exposure of cryopreserved human spermatozoa to 2-deoxyadenosine resulted in significant increases in percentage of motility. However, there is limited information available in the literature on the potential application of these compounds in ART setup. Considering the role of protein kinases in the sperm motility pathway, LY294002, an inhibitor of phosphoinositide 3-kinase, was screened for its motility enhancement property. Several studies have shown potential stimulating effect of this Pentoxifylline PDE inhibitor, increased cAMP, decreased ROS [185,186,159] Clomiphene citrate Binding to estrogen receptor in hypothalamus; increased follicle-stimulating hormone and luteinizing hormone levels [187,188] Drugs along with bioactive compounds Clomiphene citrate + vitamin E Not known [189,190] Pentoxifylline + zinc + folic acid PDE inhibitor and antioxidant [191] Pentoxifylline + L-carnitine PDE inhibitor and decreased ROS [192,187] Bioactive compounds alone or in combination Vitamin C Decreased ROS [193] Zinc Increased metallothioneins and decreased oxidative stress [194,155] Selenium Not known [156] Coenzyme Q 10 Decreased ROS [150,195] L-Carnitine Increased GPX4 expression [196] Zinc + folate Not known [197] Selenium + vitamin E Increased GPX4 expression, decreased oxidative stress [154,198] Selenium N-Acetyl-cysteine Not known [199] Fertilovit Decreased ROS [200] Herbal extracts Withania somnifera Enhanced enzymatic activity in seminal plasma, decreased oxidative stress [161,201] Tribulus terrestris Not known [202] Mucuna pruriens Activated antioxidant defense system and physiologic stress [203] Lepidium meyenii (maca) Not known [204] Speman (multiherbal formulation) Not known [205] cAMP, cyclic adenosine monophosphate; PDE, phosphodiesterase; ROS, reactive oxygen species compound on motility in humans [172]. However, a contradictory report showed notable differences in their potency [168].
Various physiologic agents, such as progesterone, thyroxin, and Müllerian inhibiting substance, were also tried and shown to improve sperm motility in vitro [173]. However, incubation of spermatozoa with Müllerian inhibiting substance led to inhibition of protein tyrosine phosphorylation, capacitation, and acrosome membrane exocytosis. Similarly, Moosavi et al. [174] reported an increase in sperm motility after incubation of rat spermatozoa with human chorionic gonadotropin, but the study lacks detailed investigation to understand the mechanism of action.
Conclusion
Last several decades have seen a steady decline in sperm output and their functional properties such as motility in human mainly due to change in environmental and lifestyle factors. Therefore, adapting to a healthy lifestyle pattern may help in minimizing the loss of fecundity in men. Motility is a major determining factor for the successful pregnancy outcome, emphasizing the importance of research in the field of motility enhancement. Efforts to improve the sperm motility in ejaculated spermatozoa by empirical treatments with hormones, antioxidant supplements, and natural products have not shown consistent results. Considering the advantage of ex vivo manipulation of motility using pharmacological agents, specifically phosphodiesterase inhibitors, further extensive research in this aspect may prove beneficial to medically assisted or artificial insemination procedures. High-throughput screening approaches can accelerate identification of novel sperm motility enhancing agents. Further, it is essential to confirm that these motility enhancers do not exert any adverse effects on the developing embryo.
Acknowledgements Open access funding provided by Manipal Academy of Higher Education, Manipal.
Authors' contributions RD, RSH, HA, and SK wrote the manuscript and drafted the studies. RD, SK, and RSH performed the database search. HA helped with the references. GK and YZ contributed significantly to the conception and design of the study. GK and YZ supervised the research and finding of the work. SKA and NK reviewed and critically evaluated the manuscript. All authors read and approved the final draft for submission.
Compliance with Ethical Standards
Conflict of Interest The authors declare that they have no conflict of interest.
Code Availability Not applicable
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 8,962.8 | 2020-12-07T00:00:00.000 | [
"Medicine",
"Biology"
] |
Bioethical topics in the works of Kvirin Vasilj (1917 – 2006)
Kvirin Vasilj (1917 – 2006), an authentic thinker of the 20th century, in his philosophical deliberation, he touches on various aspects of human existence, including those that are today identified as bioethical challenges. Thus, bioethics is present in his deliberations, although the term bioethics as such is not found in any of his six hundred works, and they often relate to the meaning, quality, the beginning and the end of human life. Between these two endpoints of an individual’s existence, Vasilj places a considerable emphasis on the very practical dimensions of duration, nature protection, quality of life and more. It should also be noted that Vasilj often uses these themes as a basis on which to present or explain some anthropological, even ontological, issues.
Introduction
is a 20 th -century thinker whose philosophy touches between Kant's cognitive and ethical system on the one hand and the neo-scholastic system on the other. He published most of his books and articles abroad, to also become more present in domestic philosophical circles with several books at the end of the last century. In his works, his intellectual continuity is evident 1 , and in the centre of his interest is a man. 2 Man is, as defined by Kvirin Vasilj 3 in one of his definitions, a physical and reasonable (i.e. cognitive and intellectual) free and speaking being. It is exactly philosophy that helps man to live in a human way, which should be its definition, the highest and final goal. 4 Therefore, this philosopher does not seek to establish a certain philosophia perennis. However, he seeks to develop a philosophy of life, that is, philosophia vitae 5 , with the aim of affirming man's original and inimitable (rational) life principle. However, consistent thinking from his own thinking positions that he claims to be based on experience, Vasilj, as a philosopher, was not prevented from perceiving the nature as an inexhaustible source of questions (it is inherent to man to ask questions -animals and God do not ask questions) and, as a theologian, of great secrets. He is convinced that it will remain so forever because man finds (and does not create) life in the world and in himself. 6 Although Kvirin Vasilj in his articles and books does not mention by name the discipline that studies human action in relation to the living things, and even a man himself, bioethics, the content of this term can be found scattered throughout Vasilj's inscriptions. About ethics itself has been written elsewhere 7 , and the research has begun here on how Vasilj approaches the realisation of life in animals and plants (the second part of the article). However, it appeared that for the understanding of his approach to what was said, it was necessary to expose how Vasilj understands and analyses the human being, as well as the philosophical structures that precede him or those that result from that analysis (the first part of the article).
Life origin as a special essence in a man
Kvirin Vasilj accepts the results of biology in describing experienced facts, studying the internal structure of living beings, reproduction, ambience and more. He points out that nature rules as if it did not care about individuals but about the preservation of the species, so in this context, he talks about mutations in existing living beings that are within certain limits actually possible, both the positive ones (such as different adjustments) but also the negative ones (such as cancerous changes) and as such, they are often a consequence of external influences. He says it is not possible to know where the historical boundaries are of changing and developing of one living being into the other, but also that in nature, there are some maximums beyond which it does not exceed, so living beings can evolve to a certain degree that the species can no longer surpass. Hence, he writes about "the fourth dimension of living beings", i.e. the internal time of species development. When he looks at the man in this context, he feels that the man is at the top of developing his physical capabilities. 9 On the other hand, when Kvirin Vasilj discusses living beings as a philosopher, which is much more often than his above observations, he then claims that human beings have two constitutive but different origins of the essence: one that represents an already completed essence and the other essence in becoming, i.e. the one that is created by the assimilation of the matter. In support to the claim, he states the apparent multiplicity of action of living beings by which physical and chemical elements connect into a single whole, so that the living organism causes (produces) itself in its own physical component. As it is logically impossible for the whole in the same living being to produce parts and the parts to produce the whole, that is, that something is its own cause and effect, Vasilj concludes that there is a special life origin in the living organism. That origin is a new and original essence in relation to the matter, which according to the author, escapes the knowledge of physics precisely because it is inherently different from other physical cause, but not necessarily unrealistic simply because man has no direct mental maturity about it. The compatibility of this life origin and the material component is that it enables new chemical syntheses in the material component. It elevates it to a higher level of existence and action. In contrast, the material component, in addition to carrying it in itself, the ability to connect with the life origin into a living being, it also enables it to interact actively with other beings. 10 Certainly, the question of how and why the life origin is differently realised in the order of plants, animals and humans, but also in these orders in uncountable varieties, should be addressed. Vasilj might also be aware that this question should be answered: "The life force, as an inexhaustible artist, gives shape to all living beings, colour to all flowers, design to all leaves. Life is a divine chemist that gives taste to fruits and spices, fragrance to flowers. It changes water and carbonic acid into sugar and wood, thus releasing oxygen, needed for animals to breathe. The life force is an excellent mathematician who, with the help of the minimum of materials achieves the maximum." 11 The author's second proof of the existence of a special intangible origin in living beings comes from his statement that it is in principle not possible to produce living beings from inanimate matter in scientific laboratories. It would be easier to produce animals from ants, this author claims, than to produce a living cell from dead matter, or to produce gold from other materials than to produce life without the "germ of life" in inanimate matter. Living beings, therefore, have more essence in them than the sum of elements of which they are built, Vasilj deliberates further. Therefore, he cannot justify in any way the claim that life comes from inanimate matter by some (accidental) synthesis. 12 From the listed points of view, Vasilj also analyses a man and warns that his complexity, like the complexity of other living organisms, is not only reflected in the macromolecular structure, but also in the special way in which molecules are hierarchically integrated into individual organs and finally in a whole. A certain life origin is responsible for that. At the level of this life origin, according to our philosopher, the destiny of a man is decided, not at the level of the structure of his genes. In the book The Philosophy of Humanization and Dehumanization, he refers to Dawkins's book The Selfish Gene: "According to Dawkins, there are no living molecules because of man, but man exists because of living molecules, which have no awareness of their own reality. That would be the highest purpose of man's existence and the greatest reason for his being. That would be man's worldview and outlook. If a lot of absurdities were to follow from this negative biology about a man's life and his actions, maybe Dawkins did not even think about it: he does not have to pay attention to philosophical theories when he knows it all by virtue of his facts. It is then that Nietzche is right, that only great people have the right to life, as in them these molecules came to their highest expression and to their highest perfection." 13 Furthermore, from the postulate on the exclusive life origin in the matter, Vasilj deduces that the appearance of a man does not necessarily presuppose the existence of some transitional form, hominid animals. Moreover, in principle, it is not possible to interpret the origin and development of human life from some transitional form due to 1) human origin constituting essence in itself and not a real change of the existing essence, 2) human origin in its essence cannot originate from actions of natural causes and 3) human origin as the origin of man necessarily participates in the formation of his own physical components. Well, by bringing his thought to the extreme limits, Vasilj may claim that no living being, who is not already human, can develop a body that could be equally the same to the human body -from the beginning, it has to be shaped from the man himself and in the manner of man in order to have a human life origin. Thus, Vasilj rejects theories about the development of men from hominid animals. 14 Vasilj refers to, as he calls it, »biological philosophy«. It offers answers to the question of what life is, and how it came to be, what the difference between living and nonliving beings is. Their thinking and real mistake are that they claim that what is not happening now, surely happened in the time of which man has no immediate knowledge. Thus, according to this author, the premise of the general evolution of life from some simple, primal cell is not some established scientific fact, but a hypothesis about the appearance of life on Earth in general which does not represent a set of proven facts but a set of assumptions which are valid just as much as the principles underlying them. 15 And continues: "Furthermore, these very same people, who do not recognise the causal relations between natural beings, as we have no direct intellectual intuitions about them, find such causal relations between historical beings only on the basis of temporal relations, although the analogy of experience and first principles of cognition with real value preclude us from also interpreting those temporal relations as causal relations." 16 Further to the claims on man as the unity of life and material origin, Vasilj wonders when this union of higher and lower origin occurs and answers that this union should be placed to the very beginning. Namely, only from a human fertilised cell, a human being can develop, and to Vasilj this is a signal that in it, from the very beginning, there is some origin which seems different, e.g. from a fertilised cell of a monkey. Therefore, Vasilj rejects the view the life origin of a human would merge at some point with some developed human organism in which all preparations for such act were completed. Life origin of a human, from the very beginning, is involved in the production of its own body. Therefore, a fertilised cell contains in itself the origin of biological life in the manner of a whole, so it should be observed and respected as a complete human being, and the principle that it is not allowed to directly kill an innocent man should be applied to it. Vasilj expresses disagreement with abortion in several of his works, despite the immediate benefit of the procedure itself. The aforementioned thesis that he began to discuss, brings Vasilj to the insistence on responsible parenthood. It consists of the fact that partners should not give birth to as many children as they can conceive, as children need to be raised, educated, directed in life. The author does not dispute the artificial control of marital acts when partners have a reason for that as, he continues, if the marital act is to be open to the transfer of life, it should not always be open to conception. 17 It should also be noted that Vasilj criticised the encyclical Humane vitae on several occasions at the time when it was published. 18 Vasilj, without any exception, insists on the need to respect the "innocent human life which is in harmony with other people" at a very stage of its development. If this principle is denied, the absolute value, human life is exposed to destruction. In the modern civilisation, to his great regret, Vasilj recognises only the principle of selfishness, which is strong enough that every moral evil is transformed into physical gain or at least is presented weak and without lasting consequences, an attitude that quickly "animalises" the man who is otherwise in danger of becoming the worst animal of all animals. He concludes, with regret, that human society has so far not found a way to defend human life effectively. 19
Life origin as a special essence in plants and animals
Kvirin Vasilj admires the phenomena in nature: the organisation of inanimate matter, the diversity of plants and animals. It is from botany and zoology (e.g. spawning of eels, migration of birds and similar). Vasilj gives numerous examples to illustrate some of the most complicated philosophical settings. All of these many different connections and factors put a man into a state of annoyance and confusion. 20 That is why Vasilj says this: "Namely, however deeply our mind penetrates the secrets of reality, however seriously he becomes aware of the existence of physical agents and their laws of action, he will never have immediate intuitions about the metaphysical finality in the world, but will always remain subject to the riddle of admiration and wonder, that nature achieves by mechanical action -from the point of physical finality it signifies the sequence of a mere chance -the same results, which would, according to the understanding of [meta]physical finality, could be only achieved by the harmonious participation of an extraordinarily intelligent and powerful being with their agents in the physical order." 21 Advocating human rights in which the life origin is realised, Vasilj is aware of environmental issues and also advocates plants and animals whose very existence is called into questions by the very man's insatiable greed. He resolutely condemns any torture of animals, although, for the sole purpose of a higher good, he agrees in principle with experiments on animals, as he elaborates further, what disorder and suffering are from the view of the animal also has some order and meaning from the point of a man. 22 He also casually presents some of his own assumptions about the value of animals, which was why they were sacrificed to deities in primitive cultures. 23 Noticing how man also applies on himself experiences acquired in interactions with flora and fauna to find out something about his own origin and his cognition, Vasilj suggests that man relies on his own rational experience about himself as man deduces the life and habits of animals and plants, from his own perspective and analogy pursuant to the habits of his own species. 24 "It is both pleasant and fun to fish, to socialise with animals and plants in nature, to climb mountains as a vacation. But to turn it into a purely human vocation 20 Vasilj, K. (1966), 331-333. would mean to make life boring and painful and deprive it of the greatest spiritual assumptions and pleasures, that the development of his own spirit prepares for and gives him. Digging through earth and space allows man his physical life. But measuring in the depths of a human being, in accordance with logical and ethical laws, enables a man to live his spiritual life. Even the deepest and most fertile furrows are not in the earth but in the human spirit." 25 Our author reminds man that he is one with nature: he is, like any other thing, subject to the law of gravity; as an animal he feeds on the fruits of the earth, grows and multiplies; like the animal itself and has the sense of comfort, observes the same objects. That sensory origin of which a man shares with all nature, in man it further demonstrates an extraordinary creative power: animals are not able to prepare food artificially, to protect themselves from natural disasters artificially or to be treated in this way; animals may, however, endure hunger or thirst, but for the most part, they cannot consciously give up food or water; they permanently live in the same way, while man is able to act not only in a new but also an unexpected and opposite way; an animal expresses its inner mood through movements, sounds and similar, while man has a developed, articulate language, through which he can communicate his thoughts; animals do not have any religion, they are not able to philosophise, to experience something in the form of being, and therefore cannot consciously rise above physical things; although noticing objects, they probably have no experience of beautifying; ultimately, the man remembers all the dead. 26 Here listed differences between man and animals are important and far-fetched differences that are not reflected only in a particular organisation of the matter they absorb, in the sensory apparatus itself, but rather in the specific principle involved in each act of cognition -reason. (Wanting to point out this difference, Vasilj uses expressions depending on the context and complexity of the text: mental maturity, intellectual intuition, the special principle of life, spiritual origin, human spirit, and sometimes a person. Understandably, he is also aware that these are not synonyms for the term reason, although it is obvious that he uses it in that sense. The term reason is predominantly used here, even when it is obviously a matter of cognition as the marks of reason are not questioned as much as the very fact of its existence and exclusivity.) It is precisely the reason that is a great treasure that makes man unique in the universe. The reason is the concentration of essence different from all matter, 25 Vasilj, Kvirin (1984), Politika. Politička teorija obzirom na osobnu i društvenu odgovornost, Chicago: ZIRAL, 150.
26 Vasilj, K. (1966), 524;Vasilj, K. (1968), 28, 86, 136;Vasilj, Kvirin (1972), Sloboda i odgovornost. Ćudoredni zakon, Rim: Izdanja Ranjeni labud, 93, 118, 200, 300;Vasilj, K. (1996), 99-100;Vasilj, Kvirin (1976), Marksizam i kršćanstvo. Razgovor s drom Brankom Bošnjakom, piscem knjige "Filozofija i kršćanstvo", München -Barcelona: Knjižnica Hrvatske revije, 120; Vasilj, Kvirin (1979), Ljepota i umjetnost, Chicago: ZIRAL, 47, 116, 140;Vasilj, K. (1978) constitutive origin in the very concept of man. Thanks to reason, when a man and an animal look at the same thing and produce some sensory image in their imagination, man, in the very act of seeing, in principle sees more than an animal, as in addition to the sensory image, he also forms the notion of that thing, intuits the reality of being. Therefore, Vasilj states the principle that every act of a man's sensory observation is connected with the mental maturity into an indivisible whole. Therefore, the principal distinction between man and animal is manifested immediately in the first beginnings of his cognition. It is this huge distinction, a huge leap that goes beyond all space capabilities and dimensions. Whoever claims otherwise should be consistent, so if man's cognition is initially equated with the one of animal, that is, with sensory observation, then it must remain at the level of animal capabilities at its end, and this is what nobody wishes nor can accept for themselves. 27 Among other examples illustrating this, we highlight this sentence: "Monkeys, because of their greater resemblance to man, are a living mirror, in which a human person can see its insignificance and minuteness as an animal being." 28 Animals cannot think as they are unable to form concepts and ideas, Vasilj claims contrary to those who are convinced that at least higher animal species are capable of thinking. If an animal were able to think, then it would probably be able to know through reasoning, and therefore to raise its culture and civilisation. Moreover, animals show no such progress in their way of existence. Animals are neither selfish nor selfless because they have no reason to limit their selflessness or multiply their selfishness, they do not care about others when they are hungry and thirsty, they do not show any special respect for anyone's life, but they live for themselves and others only insofar as the life of the other living being is inextricably linked to their life (pack, cubs). Animals cannot experience the magnificence of great things as they have no sense of comparison. With this, the vocalisation of some animals cannot be called a conversation with a greater or smaller number of established signs that animals express their condition. Furthermore, the fact that animals are able to imitate one another, including a man, is not proof of their ability to think, but merely an expression of events at the level of sensory action. 29 Therefore: "A man may think that he is equal to animals only because he is fundamentally different from animals." 30 In the same sense, he continues elsewhere: "The animal is not a threatened and restricted man from the outside. Therefore, it is completely inappropriate and unrealistic effort of some biologists to interpret the formation of concepts and ideas to the way of reproduction of human beings through sexual or asexual conception, because we form our concepts based on the mental maturity -every man for himself -while the reproduction of human beings at their biological level takes place completely unconsciously: we do not understand the laws of cognition by means of biological laws, but seek to understand biological laws in accordance with some logical and cognitive laws of human thought and maturation. A human person with inherited genes does not inherit the knowledge of his parents, but rather needs to acquire it from the beginning, to deepen and expand it." 31 Vasilj notes with regret every time that many people reduce their knowledge more and more to just what their sensory eyes see and notice. Tragically, they reduce themselves to the level of lower living beings who only have sensory powers. "However, the more a human mind progresses in its knowledge of natural beings, the less it held to its own being and therefore to the intrinsic value of a human person. In his thoughts, he reduced the human person more and more to the level of unreasonable animals and was almost proud of that and finally interpreted his origin from a simple matter." 32 The reason for degradation of the human being is pleasure, as man, thanks to his mind, is able to develop and perfect his animal element far more than the animal itself: for example, enjoying food, reproduction, and similar. On the other hand, a man, with his unreasonable use of the environment, has created an unhealthy environment for the existence of plants, animals and himself. In enormous greed for getting rich, he did not know or did not want to establish a balance between his serving to nature and respecting the nature as a whole of natural beings, who, ultimately, enable the realisation of his own being, Vasilj finds. Moreover, greed is the result of the condition in which man, his spiritual pursuit for infinity, moved to the sensory need for space. Hence, he cannot, like an animal, be content with the most necessary part of the earth, but to strive to possess the whole land, and here, at least according to Vasilj, he cannot satisfy nor fulfil the meaning of his existence. 33 Our author recounts an event from a zoo where they advertised that the most bloodthirsty animal could be seen in a special cage behind bars. When visitors would walk in, they would see their own reflection in the mirror behind bars. Vasilj also wonders if a man is the most bloodthirsty animal. His answer is affirmative for all the cases when man ceases to live as a spiritual being in the broadest sense of the word: 31 Vasilj, K. (1978), 39-40. 32 Vasilj, Kvirin (1988, Ljudski razum protiv sama sebe, Hrvatski katolički glasnik, 47 (1), 2-3, p. 3. 33 Vasilj, K. (1984), 159-160; Vasilj, K. (1984a), 403;1986, 195. he opens wide the door to the slavery of man to man, things and himself, turning into the most bloodthirsty animal. He finds vivid examples in the history of the 20th century when, in the name of national and ideological selfishness, more than a hundred million innocent people were killed, by the very people who believed that a man has developed from animals and dies like all animals. 34 It is a "physical man" who, according to the principles of utilitarianism, finds no reason for the existence of old and sick people. 35 "Hence the ever-diminishing price of human life; we are already reading suits in worldwide newspapers, that some individuals are forced to spend money to support old people, who no longer contribute to the human society: these people do not see the meaning of love for their fellow human nor its value, so they think, if money is invested for the elderly, terminally ill, mentally and physically impaired, to invest their property into nothing. These people do not realise that precisely the attitude towards the most powerful, most neglected persons of the human society is the proof of the culture or non-culture of a man and his community. But it is not possible to see the exaltation of love and sacrifice within pure physical knowledge." 36
Conclusion
This presentation has shown that Kvirin Vasilj, a kind of a 20 th -century thinker, will not and cannot deny the results of contemporary research in the field of biology and related sciences. It has been shown that in many of his works, he focuses his attention on living beings, plants, and animals, both generally and individually. He is aware that a man cannot escape the influence of nature, that a man by his physical component is completely dependent on nature -without nature, there is no man. Moreover, this philosopher is ready to claim that a man, as we know him, is at the height of his development.
However, as much as he is willing to say that nature and man are subject to change, he is willing to state that there are some "immovable" elements which, -although active in the constitution and action of the living beings -are not "moved" by nature itself, and therefore not of material origin, and therefore unavailable to natural and scientific research methods. 34 Vasilj, K. (1999), 19, 30;Vasilj, K. (1987), 79; Vasilj, K. (1984), 157;Vasilj, Kvirin (1956) No matter how much Vasilj mentioned and respected living beings for the very fact that they are "alive", he, it seems, does that by emphasising the distinction between a man and an animal, to protect the exclusivity of a special origin within the man himself -the reason, that condensation of the essence of the different from other matter, constitutive origin in the very concept of a man. Thus perceived, bioethical topics of Kvirin Vasilj are in their essence anthropological topics. Moreover, the author of this article is free to state, what is today called bioethics, can be reduced to ontology in Vasilj's system of thought without many leftovers. Thus, bioethical themes become the backdrop under which, who knows how many times, there is the reprise of the ontological question about the world and the man. 37 Vasilj's consistency in insisting on the exclusive principle of life in man forced him to deny the existence of transient forms, hominid animals (because the life origin is a constituent component in the making of a physical component), and to also express their disagreement with the hypothesis of the emergence of living and inanimate matter, justifying that it cannot be concluded from causal relationships on the basis of temporal relations. From the aforementioned constitutionality of a life origin from the very conception (as it is already present recording by which a man will be formed and who will synthesise a substance in their own special way) Vasilj advocates the effective protection of human life from conception to natural death.
The fact that the true reality (especially the life origin), which Vasilj sees through the power of reason in nature is not reachable by senses, causes in this philosopher a fear for people not to reduce their knowledge only to that directly recognisable and thus tragically reduce themselves to the level of everyday living beings who only have sensory powers. If, however, a man moves his evident spiritual pursuit of infinity to material, to space, there is greed thriving in him, which makes the environment unhealthy for the existence of plants, animals, and the man himself (environmental issues), excruciating situations of wars and disagreements between people and nations. 38 Hence, Vasilj reminds that the excavations of the latitudes of the earth and universe should not be a substitute for the excavation of depths of the human being in accordance with logical and ethical laws. 39 In conclusion, it may be said what has already been said: some systems of thinking, lack of human knowledge on the physical processes in organisms by the introduction of intangible origins of existence, and consequently, dogmatise human ignorance of the same processes. On these petrified foundations, very solid structures are being built not only of religion but also of philosophy. One who moves into them really has the impression of being protected from the meaninglessness of existence, even from the relentless disintegration of the physical component of one's own being. However, if such a sequence of thought is rejected in its newly painted foundations, its results need not necessarily be rejected: in the man himself, it would be useful to have a somewhat different origin than the rest of the living/animal world and to direct the view to that firm point in nature and the universe -to himself, the man who is conscious of himself and others. With such, at least part displacement of man's existence from nature to non-nature, unlocked are unprecedented possibilities of development of his spirit. Experience shows that this is not necessarily bad for the man, but if it not necessarily bad, then it is not necessarily untrue. Man trapped in nature emerges and disappears together with it, confirms the thought of Kvirin Vasilj, and from the consciousness of the dependency on nature, man necessarily tends to conquer it, and in this relentless struggle can become an animal worse than all other animals. | 7,219.8 | 2020-12-22T00:00:00.000 | [
"Philosophy"
] |
Invading grass-like alga transforms rippled sand bars into bumpy muddy flats: arrival of a game changer in the Wadden Sea?
In the wake of biological globalization, translocated species of high bio-engineering capacity increasingly change bottom topography of sedimentary coasts. A Vaucheria taxon (Xanthophyceae) of unknown origin is spreading at the transition between intertidal and subtidal zones, while resident Vaucheria -species are confined to the upper shore in the Wadden Sea (European Atlantic). Near the island of Sylt, dense turfs of green filaments rapidly expanded over an area of 180 ha within 3 years. The unicellular filaments reach about 5 cm out of and 5 cm into the sediment. Felted rhizoids provide firm anchorage. Dry phytomass (up to 208 g m -2 ) was similar to that of intertidal seagrass beds. Residual filaments overwinter in the sediment and give rise to renewed growth in late spring. In addition, oospores germinate. Fine particles are trapped by the turf during summer, generating laminated cohesive mud. Muddy hummocks arise up to 20 cm above ambient sand flats, alternating with troughs but gradually merge into coherent and pertinacious plateaus of mud. This shift in bottom topography and sediment composition may potentially change the mud balance of tidal basins, and the capacity of tidal flats in catching up with accelerating sea-level rise.
Although smaller and not closely related to Caulerpa (Chlorophyceae), the genus Vaucheria (Xanthophyceae) shows some striking similarities. Species identity within both genera needs clarification, both have siphonous green thalli, form extensive mats, which may accumulate sediment. In coastal areas, species of Vaucheria tend to grow in dense turfs, and may give rise to conspicuous hummocks (Krieg et al. 1988;van de Vijsel et al. 2020), superficially resembling cushions of filamentous green algae or mosses. In the North Sea, Vaucheria-species are common but usually inconspicuous, occur at upper shores in muddy estuaries or in salt marshes, and generally have received little attention (but see Simons 1975;Polderman 1979a, b;Christensen 1987;Krieg et al. 1988;van de Vijsel et al. 2020).
In 2020, we discovered two taxa of Vaucheria thriving at and below low tide level where none were found before in the well-studied Wadden Sea (eastern North Sea, European Atlantic). Rybalka et al. (in press) identified one as V. cf. velutina, genetically distinct from V. velutina C. Agardh, 1824(syn. V. thuretii Woronin, 1869 growing in nearby salt marshes at the upper shore. No niche extension from the upper shore had taken place. Instead, we assume an introduction from distant coasts where populations of V. velutina also have been reported from lower shores, i.e., Florida (Gallagher and Humm 1981) or the southern Pacific (Womersley 1987;Wilcox 2012). Presumably, the cosmopolitan morphotaxon V. velutina comprises a complex of hidden species awaiting revision by employing chloroplast DNA-sequencing (Andersen and Bailey 2002). The same applies to V. longicaulis Hoppaugh, 1930, also discovered in the Wadden Sea for the first time, spreading at the lower shore next to V. cf. velutina. In NW-Europe it has been found since the 1990s at the British Channel (Christensen 1996) and in the Rhine delta (Stegenga et al. 2006). The latter authors regard V. longicaulis as introduced to The Netherlands. At Sylt in the northern Wadden Sea, V. cf. velutina is growing in summer and V. longicaulis in autumn and early winter (Rybalka et al. in press). The latter species remained confined to rather small patches in 2020, and we focus on V. cf. velutina in this study because it spread over vast areas, displaced lugworms and provided habitat to small fauna , and generated conspicuous muddy hummocks, which is the focus of this study.
In the Wadden Sea, a large proportion of tidal flats consists of rippled sand, maintained by the permanent reworking of lugworms, Arenicola marina . Lugworms dwell in U-shaped burrows and recycle the upper layer of sediment 10-20 times per year through their guts (Cadée 1976), prevent clogging of the interstices of sand with fine particles and organic material, and increase sediment permeability (Volkenborn et al. 2007). Around low tide level, large individuals of relatively low abundance prevail (Reise et al. 2001). At this "senior home" of lugworms, we encountered the invading V. cf. velutina, generating muddy hummocks.
With its arrival, a completely new mode of life is spreading in a habitat where other macroalgae tend to be rare because solid substrate for attachment is scarce. V. cf. velutina overcomes this obstacle by rhizoids penetrating deep into the sediment, providing firm anchorage. The alga monopolizes space, traps and fixes suspended matter, which may initiate a cascade of ecological change. We hypothesize permanent changes in bottom topography and sediment composition where V. cf. velutina took over. We specifically ask, (1) How did the initial spread proceed? (2) What algal properties could modify sedimentation processes? (3) Which sediment properties changed where V. cf. velutina had established? (4) How are arising bedforms changing over time? Finally, we speculate what may be the overall effect of this invading alga on the ability of the sedimentary bottom to catch up with accelerating sea-level rise in the wake of global warming. In a companion paper we deal with cascading effects on resident benthos.
Study site
The Wadden Sea comprises the largest coherent belt of sedimentary tidal flats globally, ranging from meso-to macrotidal and from estuarine to fully marine conditions . Tidal basins are sheltered by a chain of barrier islands. In the northern Wadden Sea, the List tidal basin at the Danish-German boundary comprises an area of 400 km² ( Figure 1). Of these, 33% are tidal flats, 57% are shallow subtidal flats down to 5 m, intersected by deep channels and an inlet with 10% of the area (Gätje and Reise 1998). Mean tidal range of semi-diurnal tides is almost 2 m, and salinity is between 26 and 32 PSU. Water temperatures fluctuate between 0 and 20 °C with an increase of 1 °C since the 1980s (Johannes J. Rick pers. comm.). Parameters of eutrophication (N, P) increased until the late 1990s and since then have declined again.
Sediments are primarily sandy with a few mud flats at southern and northern fringes of the basin where causeways connect the two barrier islands with the mainland (Pejrup et al. 1997). Intertidal seagrass beds declined since the 1930s (Dolch and Reise 2010) while former mussel beds became invaded by introduced Pacific oysters ( Reise et al. 2017), followed by several exotic macroalgae of which Sargassum muticum has formed extensive kelp beds in the shallow subtidal zone (Schories et al. 1997;Lang and Buschbaum 2010). The mainland is under intensive agricultural use and islands are crowded with tourism. The tidal basin is part of National Parks (Germany: Nationalpark Schleswig-Holsteinisches Wattenmeer; Denmark: Nationalpark Vadehavet) and is included in the UNESCO Wadden Sea World Heritage Site.
How did the initial spread proceed?
Observations on Vaucheria at the lower shore commenced early in June 2020 when it was first discovered. We measured areal spread of Vaucheriabeds using a GPS 72H (Garmin) while walking along the circumference of individual beds. This was only possible when spring low tides and offshore winds coincided. Otherwise, lower boundaries of beds remained submerged. The local tide gauge at List harbor provided tidal elevations, and we used daily water level forecasts given by the Federal Maritime and Hydrographic Agency (BSH). At the site Blidsel Bay, we circumnavigated the bed on June 17, June 26 and September 19, 2020. Other beds were located and circumnavigated during exceptional low tides in September 2020.
To reconstruct when these Vaucheria-beds commenced developing we used two approaches. The occasionally taken aerial shots of the site at Blidsel Bay were inspected for absence or presence of dark shading of the exposed sand bar. This served as a hint for the first appearance of the formation of a Vaucheria-bed. As a second approach, we took 24 sediment cores with a cross-section of 10 cm² to a depth of 20 to 25 cm at the central part of the bed at Blidsel Bay, and visually inspected those for the occurrence and depth of layers composed of dead Vaucheria-filaments. Additional cores taken outside this established Vaucheria-bed never revealed any such dead filaments.
What algal properties could modify sedimentation processes?
We quantified growth of V. cf. velutina by counting density of green filaments, by measuring green filament lengths, and by estimating dry weight in August 2020 at Blidsel Bay. We distinguished between growth at the higher edge and lower central part of the bed already present at June 17, and in the area that became vegetated until June 26, named pioneer belt. For counting density, we gently drilled a corer with sharpened lower edge and a cross section of 2 cm² down to 5 cm depth. Obtained sediment cores were washed through a 500-μm mesh, retaining the intertwined filaments. Of these, green siphons of > 10 mm length were counted under 10x magnification. Six cores were from vegetated patches in the pioneer belt (August 12, 2020), six from vegetated hummocks at the edge of the old bed and four from the continuously vegetated central part of the bed (August 15, 2020).
In addition to filament density, length of filaments above sediment surface may have effects on sedimentation processes. On September 02, 2020, we clipped siphons at sediment-water interface, and determined lengths to nearest mm for 30 filaments at the pioneer belt, vegetated hummocks and the continuously vegetated central part of the bed (same as for density of filaments).
In an attempt to estimate phytomass at Blidsel Bay on August 08, 2020, we took 4 sediment cores of 50 cm² to a depth of 5 cm from the pioneer belt, vegetated hummocks and the continuously vegetated central part of the bed, then washed with tap water through a 500-μm mesh. We separated the retained algal tufts of Vaucheria from other algae (mainly Rhizoclonium riparium) and removed entangled worm tubes manually. Four samples from each of the three habitats were dried for 3 d at 80 °C, and then were weighed.
Which sediment properties changed where V. cf. velutina had established?
At Bildsel Bay in June 2020, we took cores with a cross section of 10 cm² from high hummocks and from an ambient lugworm sand flat 50 m north of the Vaucheria-bed. Water and organic content for the upper 6 cm of sediment was measured by weight loss after drying at 80 °C and ignition at 500 °C, respectively. Relative penetration depth was estimated by dropping a weight of 500 g (40 mm in diameter) from 1 m height. Salinity was measured with an ATC refractometer.
For sediment analyses, cores with a cross section of 10 cm² were taken to a depth of 36 cm from a Vaucheria-hummock and from ambient lugworm sand flat 50 m north from the Vaucheria-bed at Blidsel Bay in June 2020. Cores were sliced into an upper 1 cm layer and from thereon into intervals of 5 cm each. Subsamples were chemically treated following the procedure used in Hass et al. (2010) in order to remove carbonate and organic matter prior the grain-size analyses, performed using a CILAS 1180L (3P Instruments GmbH & Co. KG, Odelzhausen, Germany) laser-diffraction particle sizer (range: 0.04-2500 µm). Statistics of grain-size results (based on volume percentage data) were calculated using the program GRADISTAT (version 8.0) (Blott and Pye 2001) and are based on Folk and Ward (1957).
How are arising bedforms changing over time?
Height of muddy hummocks and plateaus vegetated by Vaucheria was estimated relative to ambient sediment surface using a folding rule bent at right angle. We measured in the pioneer belt, and at the high edge and lower central part of the bed already established when first encountered in early June. At Blidsel Bay and Kampen, most measurements are from August/September 2020. The latter was the only site accessible for further measurements in winter (February/March) and early spring (May 2021). We base most of our interpretation of bedform succession on qualitative notes and on photographs taken when walking around and across all beds we encountered. Concomitantly, we took qualitative notes on the state of above ground algal filaments. In parallel, we kept samples of V. cf. velutina in covered petri dishes at a windowsill under room temperature (usually 16 to 18 °C), exchanging seawater weekly. The same we did with vegetated sediment cores kept under seawater. This was an attempt to learn about seasonal growth, reproduction, overwintering and regrowth in spring. This is essential for understanding the persistence of the new bedforms.
Initial spread of Vaucheria at the lower shore of Sylt
Abundant filaments of V. cf. velutina at the lower shore were first encountered in the List tidal basin east of the island of Sylt early in June 2020, at the site Blidsel Bay (Figures 1, 2). The population was spreading until autumn 2020. In September the total area with dense growth in coherent beds was estimated to at least 180 ha, primarily occurring at four sites, 1 to 4 km seaward from shorelines ( Figure 1). These beds of Vaucheria were positioned + 0.2 m to −0.5 m relative to mean low tide level. This corresponds to water depths of 2 to 3 m at high tides. At tidal channels, some growth was even observed down to 1 m below mean low tide level. Spring tide exposure lasted for up to 2 h, while during neap tides and onshore winds these Vaucheria-beds remained submerged.
A major new spread of young growth into ambient sand flats appeared in June, then continued more slowly until September 2020 and more than doubled the entire area of the Vaucheria-bed at Blidsel Bay ( Figure 2). Beds of Vaucheria at the lower shore were visible from the air as dark shading on (red) and until September 2020 (yellow), measured by walking along boundaries between bed and ambient bare sand flat with GPS. Underlying aerial image is from August 31, 2008. Black patches near the Vaucheria-bed are from beds with mixed mussels and oysters, while other dark patches further away mostly stem from Sargassum-kelp anchored by clumps of oysters. Photo by AWI. exposed sand flats. However, shading was too similar to that of other algae or seagrass for detection or mapping purposes. Nevertheless, comparisons between images from 2020 and the years before suggest that at Blidsel Bay the bed of Vaucheria was already present in 2019 and that initial growth commenced in 2018, while images from 2017 and earlier showed bright, bare sand at the same site.
At this bed, one or two layers of dead, horizontally deposited Vaucheriafilaments occurred in sediment cores (see Figure 5, right core). Mean sediment depth of such layers was 83 mm (22 out of 24 cores) and 125 mm depth (16 cores). On average, thickness of the upper layer was 26 mm and of the lower layer 13 mm. The filament fragments were rather short (32 ± 10 mm, n = 10), and included occasionally rhizoids and oospores. At Blidsel Bay, buried layers of dead siphons occurred not in bare ambient sand and not in the pioneer belt (see Figure 2). At Kampen, one such layer occurred but none at the two beds in the southern part of the tidal basin (see Figure 1).
Properties of V. cf. velutina, which could modify sedimentation processes
A characteristic feature of beds of V. cf. velutina are pointed tufts formed by adjacent filaments bending with their tips towards each other during low tide exposure (Figure 3). Upright filaments enveloped by a film of water join each other like poles of a tipi tent. This easily distinguishes V. cf. velutina from filamentous green algae in the field. In August 2020, density of these green filaments increased from the pioneer belt towards the central bed at Blidsel Bay (Figure 4). Density reached up to two siphons per mm² or more than 1 million per m².
While rhizoid penetration depth in the sediment was generally between 30 to 50 mm in established beds (Figure 3), length of green portions was more variable with season and site. In early September 2020, mean length of green siphons clipped at sediment-water interface was 20 ± 3 mm [range 13-25] at the pioneer belt and increased towards the central bed at Blidsel Bay (Figure 4). There mean length of green siphons was 67 ± 8 [range 47-80] mm. Adding below ground portions, maximum length may reach up to 130 mm. Diameter of green siphons was 123 ± 14 μm [range 100-140] (n = 18) and tapered at colorless to slightly pink rhizoids to 40 μm or less. Single branching occurred in 28% of green siphons (usually about 40% of total length), while multiple branching into thin rhizoids was the rule.
Dry phytomass of Vaucheria at Blidsel Bay, calculated per m² for August 2020 was 74 ± 16 g in the pioneer belt, 116 ± 26 g on high hummocks, and 208 ± 34 g on plateaus of the central bed.
Sediment characteristics
Sediment underneath dense turfs of Vaucheria was soft and muddy, while it consisted of rippled sand, reworked by lugworms Arenicola marina in ambient areas ( Figure 5). Underneath hummocks with Vaucheria, sediment was laminated, visible by layers of varying blackness below a thin brownish surface layer of a few millimeter. This contrasted with ambient sand flats where the brownish surface layer extended between 10 to 30 mm depth, followed by a rather homogenous light to dark grey sediment ( Figure 5 left core). Digging within Vaucheria-beds elicited a strong smell of sulfide, while there was only a faint smell in the bioturbated bare sand. In the upper 60 mm of sediment cores, water content at low tide exposure was higher at hummocks covered by Vaucheria (29%) than at adjacent lugworm sand flat (21%). Organic content was 4-fold higher (1.6% weight loss after ignition) at hummocks compared to ambient lugworm flat (0.4%). Penetration depth of a dropped weight was 4-times deeper at Vaucheria-hummocks (38 ± 3 mm, n = 12) than at ambient lugworm sand flat (10 ± 1 mm, n = 6). Salinity above sediment and at depth of > 20 cm was the same, both, within the Vaucheria-bed and ambient sand flat, indicating absence of freshwater seepage from below.
Vaucheria-hummocks were composed of cohesive mud while bare ambient areas consisted of loose sand. Mean grain size was homogenously distributed over depth intervals at the lugworm sand flat (146 ± 5 μm), while at Vaucheria-hummocks means were smaller and more heterogeneous (114 ± 30 μm). Accordingly, sediment was well sorted at the lugworm flat and poorly sorted at hummocks. Mean grain size variation between layers was most pronounced in the upper 16 cm of cores, with 149 ± 4 μm and 104 ± 36 μm at lugworm sand flat and Vaucheria-hummock, respectively. Mud (silt) content was twice as high and highly variable for vertical intervals at hummocks (16 ± 11%, n = 8) compared to lugworm sand flat (8 ± 1%, n = 8).
Bedforms changing over time
Bedforms of V. cf. velutina undergo succession ( Figure 6). We infer this from observations at the bed in Blidsel Bay. Here new growth commenced in June 2020 beyond edges that had formed in 2019, and where sediment cores with two layers of dead filaments suggest that the innermost part of this bed became established already in 2018. This amounts to a 3-year time span.
Faint growth on bare sand flat started synchronously over a wide belt in June 2020. At the outer edge growth of small filaments preferentially occurred on sand waves with parallel longitudinal axes all across the sand flat (0.5 to 1.5 m wide and several m in length). By growing on these barely discernible ridges, V. cf. velutina enhanced deposition and stabilized the sand. As side effect, these developing shallow ridges deflected water flow, Figure 7. Sandy lugworm flat partly covered by bumpy bed of Vaucheria cf. velutina in June, and after a mosaic of young growth and bare patches had spread until September 2020. Below: Sandy lugworm flat and Vaucheria-bed with sharp boundaries facing north and east before and after winter 2020/21. Although raised mud had turned bare in winter, boundaries with ambient sand remained distinct. Photos by K. Reise. causing enhanced scouring at interspaces between ridges. This pattern developed into dome-shaped ridges covered by a dense turf, alternating with bare troughs. In the shelter of such elongated hummocks modest plateaus with turfs alternated with bare patches (Figures 6, 7). These were of irregular size and shape. Gradually and with delay, bare patches became vegetated too. Differences in elevation had increased to 93 ± 25 mm [range 50-130] (n = 42). Such bumpy bottom topography was still apparent in the old bed, particularly at the former edge (Figures 6, 7). There, mean height of hummocks above water level during low tide was 81 ± 13 mm [range 60-110] (n = 30). Troughs in between were of variable depth: 118 ± 46 mm [range 50-260] (n = 60) below low tide water level. These heights and depths added up to a mean difference of about 20 cm with a range of 11 to 37 cm.
Edges of Vaucheria-beds often formed a distinct boundary to bare ambient sand flat, particularly where facing north or east (Figure 7), often accompanied by scouring along edges. From the high hummocks along edges, surface topography smoothed towards the interior bed. Dome-shaped hummocks flattened into plateaus. The mosaic of plateaus and troughs also became more spacious. Differences in height decreased to about 5 cm. There, V. cf. velutina colonized troughs until coverage by filaments became even.
In winter 2020/21, elevations and depressions in the Vaucheria-bed were almost bare of filaments but persisted (Figure 7). Two storms with wind speeds up to 11 Bft, and a long fetch from southeast, hit the muddy elevations. This caused erosion at edges of hummocks and the plateaus became rippled. At erosion scarps, buried filaments resurfaced, usually mixed with other algae, which deposited in the Vaucheria-bed during summer. In February 2021, 12 d with drifting ice shoals of up to 1.5 m in thickness also hit the mud deposits generated by V. cf. velutina. Ice scouring left traces as if off-road vehicles had crisscrossed Vaucheria-beds. Storm and ice events decreased differences in elevation between hummocks/plateaus and troughs to 97 ± 15 mm [range 70-120] (n = 42) by the end of February 2021 where in summer 2020 hummocks were twice as high. Nevertheless, the basic bumpy pattern persisted.
In August 2020, plenty of spherical oogonia (280 to 320 μm in diameter), sessile on old siphons, were observed. Antheridia did not develop. In October 2020, green siphons above sediment surface turned brownish, mainly due to attached diatoms. Siphons above sediment surface broke off and mud remained bare during winter. However, when bare sediment from beds was brought to the lab, covered with seawater, positioned on window sill, and under room temperatures, new green thalli emerged within days. These grew out of pale or brownish, buried old siphons with small green portions. From bare mud collected in early December 2020, siphons were 71 ± 21 mm [range 36-102] (n = 10) in length with a green portion of 48% after 48 d under lab conditions. Assuming linear growth over this interval, daily length increments of green siphons were 0.7 mm. Under similar conditions, green growth from hibernating siphons collected March 31 in 2021 amounted to 0.5 mm d -1 after 14 d (6.9 ± 1.5 mm [range 4-11] (n = 60) in length).
Spring 2021 was unusually cold. We encountered first green filaments on muddy hummocks not before May 11. At May 24, mean length of green filaments was 14 ± 3 mm [range 7-17] (n = 25) with a mean filament density of 11 ± 5 cm² [range 5-17] (n = 4) on hummocks at Kampen. This is about ten times less than in August 2020 at Blidsel Bay. Water temperature was 10 to 11 °C. In June, brownish oospores found in the sediment at both sites, germinated.
Discussion
For the first time, growth of introduced Vaucheria cf. velutina at the lower shore of the Wadden Sea is described. This Vaucheria persists in large muddy beds, and a rapid spread occurred in 2020 on sandy flats. Turfs of Vaucheria trap silt upon sand and raise the bottom by up to 20 cm in 3 years. This may change balance and distribution of mud in tidal basins, and may have implications for adaptations to accelerating global sea level rise.
Layers of dead Vaucheria-filaments in the sediment as well as aerial images indicate that spread in the form of coherent beds commenced in 2018. Although at rather inaccessible sites, their conspicuousness as raised beds precludes that these went unnoticed for many years if being present. We assume that V. cf. velutina constitutes a recent invader to the lower shores at the island of Sylt and to the entire Wadden Sea (Rybalka et al. in press). Why and how could this alga establish and spread at the lower shore where none have been before? What is the effect on the sedimentary environment and bottom topography?
Establishment
For macroalgae, rippled sand reworked by lugworms constitutes an unfavorable and unreliable substrate to live upon. Occasionally green algae occur. This may happen when drifting algal strings became stranded in feeding pits of lugworm burrows (Reise 1983). In summer 2020, we also observed green algae of the genus Ulva growing attached to abundant cockles Cerastoderma edule or their empty shells. Also tube caps of Lanice conchilega on low intertidal sand flats offered suitable substrate. The phenomenon of filamentous green algae covering otherwise unsuitable sandy flats in the Wadden Sea was common when the level of eutrophication was high but then declined again since the late 1990s (van Beusekom et al. 2009).
In contrast to these green algae, V. cf. velutina does not require help of other species for anchorage in loose sand and is thus more likely to persist once established. Biomass of Vaucheria at maximum growth in August was in the same magnitude as that of ephemeral green algal mats (i.e., Fletcher 1996;Pihl et al. 1999; or seasonal maxima for intertidal seagrass beds (i.e., Philippart 1995;Auby and Labourg 1996;Nacken and Reise 2000). Young pioneering growth of V. cf. velutina occurred primarily on slightly elevated, elongated sand ridges and was rare at vales in between. This indicates that weather dependent, temporary sediment stability or net deposition facilitate establishment while erosional conditions are inhibitory. Erosion may also explain that often in front of high bed boundaries new growth remained absent. These elevated edges of Vaucheria-beds may channel tidal flow in parallel to their front, and this entails scouring which results in unfavorable conditions for the establishment of new growth in Vaucheria (see Figure 7 for erosion pits along edges).
At ridges where deposition dominates over erosion, young growth appeared first and this also might had been the window of chance for the very first propagules of V. cf. velutina that had arrived in List tidal basin. Such conditions at the lower shore could have been provided in the shelter of beds with mixed mussels and oysters. These occur in the vicinity of the Vaucheria-bed with traces of a dense turf already in 2018 (see Figure 2). Before the formation of a first coherent bed, there may have been several years of marginal existence at the brim of extinction, until particularly benign consecutive conditions facilitated development of a large population. Only after that, the probability of getting extinct again decreased towards zero. Although winter conditions were harsh, spring 2021 unusually cold, and re-growth on the existing Vaucheria-beds retarded, all beds persisted.
Vaucheria cf. velutina as ecosystem engineer
The stabilization of loose sand is an obvious property of densely growing filaments of V. cf. velutina. With 1 to 2 green siphons per mm², reaching down to 5 cm into the sand with branching and felted rhizoids, and with green filaments up to 8 cm above sediment surface, sand grains cannot be moved freely anymore by waves or currents, and hydrodynamics are diverted away from the sediment surface. Fonseca et al. (1982), Koch et al. (2006) and Marin-Diaz et al. (2020) described this for seagrass. This vascular plant is larger than Vaucheria but otherwise of similar growth with roots and slender leaves. Also similar to rhizomes of seagrasses such as Zostera noltei in the Wadden Sea, filaments of V. cf. velutina overwinter in the raised mud beds and give rise to re-growth in spring. Although no antheridia occurred, oospores germinate next summer, suggesting parthenogenesis. The relative importance of re-growth from overwintering thalli and germination from oospores needs further research.
As a corollary of dense turfs reducing and deflecting tidal flow, V. cf. velutina changed the entire sediment characteristics from loose sand to cohesive mud. Flocks of fine particles with organic material are trapped from turbid tidal waters by a dense canopy of filaments which also prevents resuspension and thus accumulates otherwise transient suspended matter as known from seagrass beds (Chen et al. 2007;Wilkie et al. 2012). The algal turf enriches sand flats with fine and organic particles which on bare sand flats with bioturbating lugworms would not be retained. Besides changing grain size distribution and sediment biogeochemistry, the increased net deposition raises the sediment surface. This is particularly evident at the edge, and may be reinforced by lugworm bioturbation on the adjacent sand flat . Wendelboe et al. (2013) measured high export rates of fine particles when sediment reworking of lugworms combined with high current velocities. For Vaucheria, mud deposition is not detrimental. As observed on windowsill, green filaments increase their length by almost one mm per day. This would be sufficient to overcome suffocation by accumulating sediment. The gradual change from loose sand to cohesive mud may reinforce the durability of the rooted turf at the lower shore where currents and waves can be strong.
Comparison with other intertidal Vaucheria-beds
The ridge-and-runnel pattern which emerged along outer fringes resembles bedforms described for mats of V. compacta on upper mud flats in the Westerschelde estuary in the Netherlands (van de Vijsel et al. 2020;van de Vijsel 2021). Further inside, an irregular mosaic of plane plateaus covered by algal turf alternate with bare pits of bumpy topography due to feeding funnels and fecal mounds of lugworms (Arenicola marina) (for further details see Reise et al. 2022). The elevational differences between plateaus and pits increased over the course of summer. However, lugworms did not manage to keep pits bare but merely slowed down the succession. Nevertheless, the contest between sediment stabilizing Vaucheria and sediment destabilizing Arenicola seems to be the underlying process of the mosaic pattern inside the new belt. Such a biogenic pattern was not observed by van de Vijsel et al. (2020) at upper muddy shore. While the outer fringe with elongated hummocks and runnels was initiated by hydrodynamics and then evolved further by scale-dependent positive feedbacks as described by van de Vijsel et al. (2020), the mosaic of turfs and bare pits at the inner belt was caused by a contest between bioengineering species .
The interior part of the bed remained completely submerged even during spring low tides because the outer edges were higher ( Figure 6). However, in autumn 2020 a runnel became incised, dewatering towards a deep channel at the eastern boundary of this Vaucheria-bed at Blidsel Bay. At extreme low tides, the ebbing water in that runnel even formed small cascades where breaching the elevated edge. One may expect that with increasing areal size and age of Vaucheria-beds, such runnels develop further and become more. Similar self-organized patterns were observed and modelled by van de Vijsel (2021) for Vaucheria-beds on upper shore mud flats.
Vaucheria compacta-mats as described from mud flats in the Elbe estuary (Schulz-Steinert and Kies 1996) and in the Westerschelde estuary by van de Vijsel et al. (2020) lack a deep anchorage by long rhizoids. These mats may even be rolled off like a carpet by an occasional rough sea (Hartog 1959;Frank Perk pers. comm.). Spread on loose sand at the lower shore with stronger hydrodynamics than at upper shore mud flats would thus be impossible for this species. We conclude that the extensive beds of V. cf. velutina at the lower shore of Sylt could only establish and expand there because of its deep anchorage. Another precondition is a sufficient supply of mud particles which can be trapped and accumulated by the dense turf to form elevated beds.
Balance of mud
In the Wadden Sea, the main supply of mud originates from the North Sea and accumulates on high intertidal flats and supra-tidal salt marshes (Oost et al. 2021). Rectangular systems of brushwood groins combined with ditching have enhanced mud deposition, gradually creating marshland. This has been consecutively embanked in the course of centuries. Thus, progressive land claim has detached mud deposits from the sediment dynamics in the Wadden Sea. Land claim combined with estuarine channel deepening for navigation, has led to an amplification and asymmetry of tides, increasing turbidity and further promoting the landward transport of silt. Mud supply, transport and deposition have turned problematic for ecosystem function and adaptation to sea level rise.
The appearance of mud accumulating Vaucheria-beds at the lower shore intercepts landward deposition of mud. This new sink for mud may be similar to that of seagrass, mussel and oyster beds. Compared to other tidal basins in the Wadden Sea, trapping efficiency of the List tidal basin for mud is extremely low due to physiographic and hydrodynamic properties (Pejrup et al. 1997;Pedersen and Bartholdy 2006). Adding to the recent spread of American razor clams and Pacific oysters (Reise and van Beusekom 2008), Vaucheria may reduce turbidity to the advantage of pelagic primary production and unhampered food uptake by filter feeders. Permanent stabilization of mud at the lower shore by Vaucheria could expand tidal flats and confine channels, adding to the effects of estuarine biofilms (Brückner et al. 2021).
Another aspect is the distribution of mud deposition in tidal basins which occurs predominantly at the upper landward fringe and sometimes along tidal divides (Friedrichs 2011;Oost et al. 2021) and this may be crucial for keeping pace with sea level rise (Madsen et al. 2010;Braat et al. 2017;Becherer et al. 2018;Benninghoff and Winter 2019). Vaucheria cf. velutina spreading at the lower shore has the potential to alter this pattern. Depending on the net mud supply to tidal basins, Vaucheria may either increase the net accumulation rate of mud or initiate an internal redistribution of mud from upper to lower shores. Which one of these processes would prevail has implications on the capacity of the tidal zone to keep up with an accelerating sea level rise in the wake of global warming.
Conclusions
A novel habitat of bumpy mud deposits emerged at the intertidal-subtidal transition zone, when an introduced Vaucheria-alga established near the island of Sylt in the Wadden Sea in 2018. Green filaments are thin but dense, trapping fine sediments in summer, and the felted mesh of pink rhizoids consolidates the trapped deposits. This raises the sediment surface up to 20 cm above ambient flats. Although smoothed somewhat by rough winter conditions, the mud deposits persist where once sandy flats prevailed. Colorless to brownish filaments hibernate below the mud surface and give rise to new growth in spring, while germinating oospores may accomplish extensions of algal beds. This new mode of biogenic mud accumulation at the lower shore may contribute to water clarity, and facilitate depositional processes in response to sea-level rise. If algal spread continues, this may become a game changer for the Wadden Sea ecosystem. | 8,289.4 | 2022-01-01T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
Scintillation light detection in the long-drift ProtoDUNE-DP liquid argon TPC
ProtoDUNE-DP is a 6x6x6 m3 liquid argon time-projection-chamber (LArTPC) operated at the Neutrino Platform at CERN in 2019-2020 as a prototype of the DUNE Far Detector. DUNE is a dual-site experiment for long-baseline neutrino oscillation studies, neutrino astrophysics and nucleon decay searches. The light signal in these detectors is crucial to provide precise timing capabilities. In ProtoDUNE-DP, scintillation light produced by cosmic muons inthe LArTPC is collected by the photomultiplier tubes (PMTs) placed up to 7 m away from the point of interaction. The scintillation light production and propagation processes are analyzed and compared to simulations, improving the understanding of some liquid argon properties.
ProtoDUNE Dual Phase detector
The Deep Underground Neutrino Experiment (DUNE) aims to address key questions in neutrino physics and astroparticle physics [1,2,3]. DUNE will consist of a near detector placed at Fermilab close to the production point of the muon neutrino beam of the Long-Baseline Neutrino Facility (LBNF), and four 17 kt liquid argon time-projection chambers (LArTPCs) as the far detector in the Sanford Underground Research Facility (SURF) at 1300 km from Fermilab [4].
The ProtoDUNE Dual Phase (DP) detector [5,6] operated at the CERN Neutrino Platform in 2019-2020 to demonstrate the LArTPC DP technology at large scale as a possibility for the DUNE far detector. ProtoDUNE-DP has an active volume of 6×6×6 m 3 corresponding to a total LAr mass of 750 t, being the largest DP LArTPC ever operated. In ProtoDUNE-DP the argon ionization creates electrons which drift vertically thanks to an electric field. The ionization charge is then extracted, amplified, and detected in gaseous argon above the liquid surface, allowing a good signal to noise ratio and a fine spatial resolution. The scintillation light signal is collected by a photon detection system (PDS) to provide a trigger, and to determine precisely the event time, with possibility to perform calorimetric measurements and particle identification. Two Cosmic Ray Tagger (CRT) planes were added to opposite walls of the ProtoDUNE-DP cryostat to trigger on muon-tracks passing through both CRTs.
The PDS of ProtoDUNE-DP [7] is formed of 36 8-inch cryogenic PMTs, R5912-02MOD from Hamamatsu [8,9], placed below the cathode grid. As the PMTs are not sensitive to 127-nm light, a wavelength shifter is placed to convert this light to the PMT wavelength detection range. A light calibration system (LCS) is installed to obtain an equalized PMT response [10]. A dedicated light acquisition and calibration software was developed for ProtoDUNE-DP [11].
2. ProtoDUNE-DP photon detection system performance ProtoDUNE-DP collected cosmic-ray data for 18 months, from June 2019 until November 2020 in different conditions of electric fields. A total of 130.7 million events were acquired with a duration of 675 hours.
All 36 PMTs were operational since the beginning of the data taking and the basic performance of the PDS system is validated. A time accuracy among the channels better than 16 ns is measured. The low noise in the baseline of the signals, 0.6 ± 0.1 ADC, is remarkable as the baseline presents very small fluctuation and was stable with time. At a gain of 10 7 , the single photo-electron (SPE) amplitude is 7 ± 2 ADC counts, implying a signal-to-noise ratio greater than 11.
The main goal of the LCS is to calibrate the PMT response by determining the PMT gain during the operation of the detector to measure the collected light charge in photo-electron (PE) units. The gain calibration method, based on measuring the SPE charge at a given voltage, is described in [10]. During operation, PMTs are biased at the HV required to achieve the target gain according to the calibration results. It has to be noted that the PMTs are switched on and off every day (sometimes several times on the same day). Despite this, PMT gains were quite stable with time, being 9% the standard deviation of the gain at 1500 V for all the data taken during operation for the 36 PMTs, .
The LAr scintillation light is produced at 127 nm, a wavelength for which most photosensors are not sensitive, and fluorescent materials are introduced to shift the photon wavelength towards the visible range. ProtoDUNE-DP performs this task through a mixed system of 30 PMTs covered with polyethylene naphthalate (PEN) foils, and 6 PMTs directly coated with tetraphenyl butadiene (TPB). While TPB is broadly used, PEN is a novel material, never used before in such a large scale experiment and which efficiency is not well known. The PEN sample used in ProtoDUNE-DP is transparent and biaxially oriented. It has been installed as round foils of 240 mm diameter and 0.125 mm thickness placed tangent to the PMT glass surface. TPB was deposited over the PMT polished surface, using a dedicated evaporation system developed by the ICARUS experiment [12].
The relative photon detection efficiency of the PEN PMTs versus the TPB PMTs is experimentally determined by the light charge detected (S1 signal). The average charge collected on the PMTs for these events is around 200 PEs on TPB PMTs, and 50 PEs on PEN PMTs. This means that on average, TPB PMTs detect four times more photons than PEN PMTs. Additionally, a simple model is proposed to compute the relative WLS efficiency of both materials taking into account the geometrical differences between both systems. As a result, it is estimated that TPB will produce three times more visible photons than PEN, for the same amount of incident VUV photons.
The scintillation light emission in LAr, called S1 signal, has a characteristic time dependence. Waveforms are well described as a sum of three exponential functions convoluted with a Gaussian function to represent the detector response. The scintillation time profile should have two components, from the decay to ground state of singlet (τ f ast ) and triplet (τ slow ) argon excimers, but an intermediate component (τ int ) is added in order to improve the convergence of the fit as reported also by other LAr experiments [13]. Figure 1 shows the average waveform of muon-like events in the absence of drift field for one PMT and Fig. 2 the evolution of the average τ slow during the operation of ProtoDUNE-DP. The value of the τ slow = 1.46 ± 0.02 µs has a small dispersion over time and indicates a high LAr purity at the ppb level.
Light production and propagation in LAr
The study of the scintillation light production, propagation and collection in a LArTPC is performed with data acquired with the CRT-trigger system to profit from the off-line reconstruction of the track trajectory. Analyses are mainly based on the correlation between The suppression of the electron-ion recombination process due to the drift field implies the reduction of the primary scintillation light production. The ProtoDUNE-DP operation faced issues that impacted the TPC field conditions, as a short circuit limited the cathode HV to 50 kV and the drift field was reduced to the top part of the drift (along ∼1 m). Even so, the light levels detected with the PDS at null drift field and at 50 kV are compared to roughly quantify such a light yield decrease in Fig. 3. It is obtained that, at least, 17% of the scintillation light detected in absence of drift field comes from electron-ion recombination.
An approximate average field is associated to the light-yield ratio to verify the empirical Birks' law, see Fig. 4. Despite the relatively large uncertainty of the ProtoDUNE-DP result, a fair agreement is found with previous works [13,14,15].
The size of ProtoDUNE-DP, the longest drift-distance LArTPC ever operated, allows for an unprecedented study of the light propagation. The Rayleigh scattering length (RSL) has a quantitative impact on the amount of light collected, so an evaluation of the RSL value is carried out by comparing the measured light signals with the light predicted by the MC simulation testing two lengths (61.0 cm and 99.9 cm). The light attenuation modeling with an exponential function in the distance parameter allows the measurement of the overall attenuation in data and MC, and so the evaluation of the agreement for the different simulated configurations.
In Fig. 5, the light charge as a function of the distance from the muon track to the PMT is fitted to an exponential decay and the data-MC ratios for the two simulations are also presented. Looking at the distribution shape, the agreement between data and the 99.9-cm MC sample is better than with the 61.0-cm value. The attenuation length values obtained are presented in Table 1, and it is observed that the data sample also agrees better with the 99.9-cm MC sample. The measured attenuation length is higher than the RSL, so the light is expected to undergo Rayleigh scattering before being deeply attenuated due to, for example, absorption by LAr impurities or detector elements. This relatively long light path before a significant light quenching or absorption is achieved in ProtoDUNE-DP thanks to the excellent LAr purity and the absence of material inside the LAr active volume.
Conlcusions
ProtoDUNE-DP is a 6×6×6 m 3 LArTPC, operated at CERN in 2019-2020 to fully demonstrate the dual-phase technology for DUNE, the next generation long-baseline neutrino experiment. The photon detection system formed of 8-inch cryogenic PMTs collected cosmic-ray data in stable conditions. The slow scintillation constant, and therefore the LAr purity, was monitored during the whole data taking. ProtoDUNE-DP used PEN as wavelength shifter for the first time in a large scale experiment and a comparison with the widely used TPB is carried out. In ProtoDUNE-DP, it is obtained that, at least, 17% of the scintillation light detected in absence of drift field comes from electron-ion recombination verifying the expected trend of the Birks' law. The size of ProtoDUNE-DP, the longest drift-distance LArTPC ever operated, allows for an unprecedented study of the light propagation. The data are in agreement with a 99.9-cm Rayleigh scattering length. Cosmic muons were detected at a distance from the PMTs up to 6 m. Table 1) and plotted as a solid line. Bottom panel: Data-MC ratio for the two previous MC samples. | 2,346.4 | 2021-06-29T00:00:00.000 | [
"Physics",
"Engineering"
] |
Envelope Glycoprotein Internalization Protects Human and Simian Immunodeficiency Virus-Infected Cells from Antibody-Dependent Cell-Mediated Cytotoxicity
ABSTRACT The cytoplasmic tails of human and simian immunodeficiency virus (HIV and SIV, respectively) envelope glycoproteins contain a highly conserved, membrane-proximal endocytosis motif that prevents the accumulation of Env on the surface of infected cells prior to virus assembly. Using an assay designed to measure the killing of virus-infected cells by antibody-dependent cell-mediated cytotoxicity (ADCC), we show that substitutions in this motif increase the susceptibility of HIV-1- and SIV-infected cells to ADCC in a manner that directly correlates with elevated Env levels on the surface of virus-infected cells. In the case of HIV-1, this effect is additive with a deletion in vpu recently shown to enhance the susceptibility of HIV-1-infected cells to ADCC as a result of tetherin-mediated retention of budding virions on the cell surface. These results reveal a previously unappreciated role for the membrane-proximal endocytosis motif of gp41 in protecting HIV-1- and SIV-infected cells from antibody responses by regulating the amount of Env present on the cell surface. IMPORTANCE This study reveals an unappreciated role for the membrane-proximal endocytosis motif of gp41 in protecting HIV-1- and SIV-infected cells from elimination by Env-specific antibodies. Thus, strategies designed to interfere with this mechanism of Env internalization may improve the efficacy of antibody-based vaccines and antiretroviral therapies designed to enhance the immunological control of HIV-1 replication in chronically infected individuals.
L entiviral envelope glycoproteins, including those of the human and simian immunodeficiency viruses (HIV and SIV, respectively), have unusually long cytoplasmic domains compared to those of other retroviruses. Although the function of this domain is not fully understood, it is known to contain sequences important for regulating Env trafficking in HIV-1-and SIV-infected cells (1)(2)(3)(4)(5). Perhaps the best characterized of these is a highly conserved binding site for the clathrin adapter protein 2 (AP-2) in the membrane-proximal region of the gp41 cytoplasmic domain (CD) (6,7). Amino acid substitutions in this tyrosine-based motif (YXX⌽, where ⌽ represents any hydrophobic residue and X represents any residue) increase Env expression on the surface of infected cells and Env incorporation into virions (1,(7)(8)(9). This motif is also required for optimal HIV-1 infectivity (10) and for SIV pathogenesis in macaques (11).
We hypothesized that by regulating steady-state Env levels on the cell surface prior to the assembly and release of infectious virus, gp41 CD-dependent endocytosis may reduce the susceptibility of infected cells to Env-specific antibodies. Previous studies have shown that Vpu-mediated downregulation of tetherin and Nef-mediated downregulation of CD4 protect HIV-1-infected cells from antibodydependent cell-mediated cytotoxicity (ADCC) by limiting Env exposure on the cell surface (12)(13)(14)(15). Here, we show increased susceptibility to ADCC in cells infected with HIV-1 and SIV mutants carrying substitutions that disrupt the membrane-proximal AP-2 binding site in the gp41 tail. Greater susceptibility to ADCC correlates with higher levels of Env on the cell surface, indicating that endocytosis of Env may be another mechanism by which virus-infected cells evade the antibody responses of their hosts.
MATERIALS AND METHODS
Production of mutant viruses. Amino acid substitutions were introduced at key positions of possible trafficking motifs in the gp41 CDs of SIV mac 239 (Fig. 1A) as well as of HIV-1 NL4-3 , HIV-1 NL4-3 ⌬vpu, and HIV-1 JR-CSF (Fig. 1B). Changes were introduced into infectious molecular clones by site-directed mutagenesis, while maintaining the sequences of overlapping open reading frames. As a safety precaution for producing vesicular stomatitis virus G protein (VSV-G) pseudotyped virus, a 2-bp deletion in vif was introduced in HIV-1 JR-CSF , resulting in a premature stop codon followed by a frameshift. After sequence confirmation, plasmids were transfected into HEK293T cells, and virus stocks were produced by harvesting cell culture supernatant at 48 and 72 h posttransfection. Since HIV-1 JR-CSF showed low infectivity, this virus was pseudotyped with VSV-G. Virus concentrations were determined by anti-p24 or anti-p27 enzyme-linked immunosorbent assay (ELISA). Molecular clones were obtained through the NIH AIDS Reagent Program, Division of AIDS, NIAID, NIH, as follows: SIV mac 239 SpX from Ronald C. Desrosiers, pNL4-3 from Malcolm Martin, and pYK-JRCSF from Irvin Chen and Yoshio Koyanagi. The construction of pNL4-3 ⌬vpu was previously described (16).
ADCC assay. ADCC activity was measured as previously described (17,18). CEM.NKR-CCR5 -sLTR-Luc cells, which express luciferase (Luc) under the control of a Tat-inducible promoter, were infected by spinoculation in the presence of 40 g/ml Polybrene. At 4 days postinfection, target cells were incubated with an NK cell line stably expressing either human or rhesus macaque CD16 in the presence of purified IgG from HIV-positive donors (HIVIG), plasma from an SIV-infected rhesus macaque, or eCD4-Ig mim2 , a CD4-Ig fusion with a CCR5-mimetic sulfopeptide (19,20). After an 8-h incubation, luciferase activity was measured. NK cells cultured with either uninfected or infected target cells in the absence of antibody or plasma were used to determine maximal and background luciferase activity, respectively. Antibody concentrations for halfmaximal killing (50% ADCC) and values for the area under the ADCC curve (AUC) were calculated from percent relative light units (RLU), as previously described (17).
Statistical analysis. Fifty percent ADCC titers, AUC values, and gMFIs for Env staining were compared by one-way analysis of variance (ANOVA) with a Holm-Sidak correction for multiple comparisons. For HIV-1 JR-CSF , a one-tailed unpaired t test with Welch's correction was used. Correlations between ADCC measurements and Env levels were assessed by calculating two-tailed Pearson product-moment correlation coefficients.
A mutation in the membrane-proximal endocytosis motif of SIV gp41 increases the susceptibility of infected cells to ADCC.
To assess the influence of potential trafficking motifs on the susceptibility of SIV mac 239 to ADCC, CEM.NKR-CCR5 -sLTR-Luc cells were infected with either wild-type SIV mac 239 or viral mutants carrying a mutation in one of three YXX⌽ motifs conserved among the envelope glycoproteins of SIV isolates (Fig. 1A). The tyrosine residue at position 721 of SIV Env was changed to glycine (Y721G), and tyrosine residues at positions 768 and 795 were changed to phenylalanine (Y768F and Y795F) to maintain the amino acid coding sequence of the overlapping rev open reading frame. Infected target cells were incubated with an NK cell line that constitutively expresses rhesus macaque CD16 in the presence of serial dilutions of plasma from an SIV-infected rhesus macaque, and ADCC activity was measured after an 8-h incubation, as previously described (17). Whereas there was no change in susceptibility to ADCC when tyrosine residue 768 or 795 was changed to phenylalanine, a tyrosine-to-glycine change in the membrane-proximal endocytosis motif (Y721G) significantly increased susceptibility to ADCC ( Fig. 2A), reducing the antibody titer for half-maximal killing by more than 2 orders of magnitude ( In accordance with the ADCC measurements, Env levels on the surface of virus-infected cells were higher for the Y721G mutant (Fig. 2D). Whereas the geometric mean fluorescence intensity (gMFI) of Env staining was 3.9-fold higher for the Y721G mutant ( Fig. 2E) (P ϭ 0.0006), there was no detectable difference in Env staining levels for the Y768F and Y795F mutants (Fig. 2E). Surface expression of Env also correlated with susceptibility to ADCC as measured by the 50% ADCC titer ( Disruption of the HIV-1 gp41 AP-2 binding site enhances the susceptibility of infected cells to ADCC. A corresponding tyrosine-to-glycine substitution at amino acid position 710 of HIV-1 Env (Y710G) was introduced into HIV-1 NL4-3 (Fig. 1B) to determine if disruption of this endocytosis motif also increases susceptibility of HIV-1-infected cells to ADCC. Two additional substitutions were also tested in HIV-1 gp41, including a tyrosineto-phenylalanine change (Y710F), previously shown to retain partial endocytic activity (21), and a leucine-to-alanine change in the C-terminal dileucine motif (L853A), also implicated in the endocytosis of Env (Fig. 1B) (22). ADCC assays were performed on HIV-infected target cells using immunoglobulin purified from the plasma of HIV-infected donors (HIVIG) and an NK cell line expressing human CD16. The Y710G and Y710F substitutions both resulted in significant increases in susceptibility to ADCC compared to that of cells infected with wild-type HIV-1 (Fig. 3A). HIVIG concentrations for half-maximal ADCC activity were 23fold and 9-fold lower for Y710G and Y710F, respectively (both, P Ͻ 0.0001) (Fig. 3B). Likewise, significant differences were detected in AUC values for Y710G (P Ͻ 0.0001) and Y710F (P ϭ 0.0294) (Fig. 3C). The L853A mutant showed an unexpected modest increase in resistance to ADCC, which was significant by comparison of 50% ADCC concentrations (P ϭ 0.0406) (Fig. 3B) but not by comparison of AUC values (Fig. 3C).
As observed for mutations in the SIV gp41 tail, Env levels on the surface of cells infected with the HIV-1 gp41 CD mutants correspond to differences in susceptibility to ADCC (Fig. 3D). Surface expression of Env was 2-fold higher for Y710G (P ϭ 0.0003) and 1.6-fold higher for Y710F (P ϭ 0.0312) than that in cells infected with wild-type HIV-1 NL4-3 (Fig. 3E). However, no difference in Env expression was detected for the L853A mutant (P ϭ 0.9175). Env levels were also strongly correlated with sus- CEM.NKR-CCR5 -sLTR-Luc cells were infected with either wild-type SIV mac 239 or a viral mutant containing the indicated substitution in the gp41 tail. Cells infected with SHIV SF162P3 were used to control for nonspecific killing. (A) Infected target cells were incubated with a CD16 ϩ NK cell line at a 10:1 effector/target ratio in the presence of serial dilutions of plasma from an SIV-infected rhesus macaque. The dose-dependent loss of luciferase activity in percent RLU was used as a measure of ADCC activity, as previously described (17). Error bars represent the standard deviation of triplicate wells, and the dotted line represents 50% ADCC activity. Differences in susceptibilities to ADCC as measured by 50% ADCC titers (B) and AUC values (C) calculated from three independent experiments were compared by one-way ANOVA, corrected for multiple comparisons according to Holm-Sidak. (D) Env levels on the surface of SIV-infected target cells were measured by flow cytometry. The histograms show the fluorescence intensity of Env staining using plasma from an SIV-infected rhesus macaque followed by an anti-human IgG antibody after cells were gated on viable CD45 ϩ CD4 low Gag ϩ cells in comparison to that of nonspecific staining in the absence of SIV ϩ plasma. Differences in gMFIs of Env staining for three separate experiments were compared by one-way ANOVA with a Holm-Sidak correction for multiple comparisons (E). 50% ADCC titers (F) and AUC values (G) correlate with surface levels of Env (Pearson correlation test). ceptibility to ADCC, as reflected by the Pearson correlation coefficients for Env staining versus 50% ADCC (Fig. 3F) and AUC values (Fig. 3G).
Disruption of the AP-2-binding site in gp41 and deletion of vpu have an additive effect on susceptibility to ADCC. Previous work by our group and others demonstrated that the loss of Vpumediated downregulation of tetherin increases the susceptibility of HIV-1-infected cells to ADCC (12,13). To investigate whether the protective effect of the conserved AP-2 binding site is cumulative with that of Vpu, we introduced the Y710G substitution into a strain of HIV-1 NL4-3 carrying a single nucleotide deletion in vpu. This mutation results in several stop codons after the fifth codon of vpu and does not alter Env expression levels (12,16). Whereas HIV-1 NL4-3 Y710G and HIV-1 NL4-3 ⌬vpu showed similar susceptibilities to ADCC, the combined mutations increased susceptibility to ADCC to a greater extent than either one alone (Fig. 4A). HIVIG concentrations required for 50% killing of HIV-1 NL4-3 ⌬vpu Y710G-infected cells were reduced by two orders of magnitude compared to those for wild-type HIV-1-infected cells and 4-fold compared to those for cells infected with either HIV-1 NL4-3 ⌬vpu or HIV-1 NL4-3 Y710G (Fig. 4B). Similar differences were observed by comparison of AUC values; cells infected with the combination mutant were 4.5-fold more susceptible to ADCC than cells infected with wild-type HIV-1 and 1.4-and 2.1-fold more susceptible, respectively, than cells infected with the Y710G and ⌬vpu mutants (Fig. 4C). Changes in Env levels on the surface infected with HIV-1 NL4-3 carrying either no mutation (wild type, WT) or substitutions in the membrane-proximal YXX⌽ motif (Y710G and Y710F) or C-terminal dileucine motif (L853A) of gp41 and tested for susceptibility to ADCC in the presence of the indicated concentrations of HIVIG. The dotted line indicates 50% killing of HIV-infected cells. Data shown are representative of six independent experiments, and error bars represent the standard deviations of triplicate wells. SIV-infected cells were included as a control for nonspecific killing. 50% ADCC titers (B) and AUC values (C) were calculated from six independent experiments, and differences were compared by one-way ANOVA adjusted for multiple comparisons by a Holm-Sidak correction. (D) Surface expression of Env was assessed by flow cytometry using HIVIG after cells were gated on viable CD45 ϩ CD4 low Gag ϩ cells. The shaded area represents nonspecific staining with normal human IgG instead of HIVIG. (E) Differences in Env staining for six separate experiments were compared by one-way ANOVA with a Holm-Sidak correction for multiple comparisons. Env levels were correlated with 50% ADCC titers (F) and AUC values (G) using the Pearson correlation test.
of virus-infected cells as measured by flow cytometry again reflected susceptibility to ADCC (Fig. 4D). Cells infected with HIV-1 NL4-3 ⌬vpu Y710G had 3.2-fold higher cell surface Env levels than wild-type-infected cells, a 1.5-fold increase over each separate mutant. Env staining also correlated strongly with both 50% ADCC titers and AUC values (both, P ϭ 0.0002, Pearson correlation test) ( Fig. 4F and G).
The membrane-proximal endocytosis motif protects primary HIV-1 isolate-infected cells from ADCC. To determine the impact of Env endocytosis on the susceptibility of cells infected with a primary HIV-1 isolate to ADCC, we introduced a corre-sponding tyrosine-to-glycine substitution at position 704 in gp41 of HIV-1 JR-CSF (Fig. 1B). ADCC activity was tested using eCD4-Ig mim2 (Fig. 5A), a CD4-Ig fusion with the CCR5-mimetic sulfopeptide CCR5mim2 that has potent broadly neutralizing activity against HIV-1 (19,20). Although 50% lysis was not achieved in all experiments, AUC values for ADCC activity against HIV-1 JR-CSF Y704G-infected cells were 10-fold higher than for activity against cells infected with wild-type HIV-1 JR-CSF (P ϭ 0.0089) (Fig. 5B). These findings were mirrored by cell surface Env levels (Fig. 5C), which were increased 3.9-fold by disrupting the membrane-proximal AP-2 binding site of HIV-1 JR-CSF gp41 (P Ͻ 0.0001) (Fig. 5D). A significant correlation was also detected between Env levels and AUC values (Fig. 5E) by the Pearson correlation test (P ϭ 0.0029).
DISCUSSION
Here, we show that disruption of the conserved membrane-proximal AP-2 binding site in the gp41 cytoplasmic tail increases the sensitivity of HIV-1-and SIV-infected cells to antibody-dependent cell-mediated cytotoxicity. Greater susceptibility to ADCC correlated with increased surface expression of Env and was additive with a deletion in vpu previously shown to enhance the susceptibility of HIV-1-infected cells to ADCC (12,13,15). These results support a role for the membrane-proximal endocytosis motif of gp41 in protecting HIV-1-and SIV-infected cells from ADCC by minimizing the exposure of the viral envelope glycoprotein on the cell surface prior to the assembly and release of virus particles.
It is important to note that while this study assessed susceptibility to ADCC, the regulation of Env expression on the surface of virus-infected cells may also provide resistance to other Fc receptor (FcR)-mediated functions of antibodies. In addition to ADCC, Env-specific antibodies may contribute to the elimination of HIV-1-and SIV-infected cells in vivo by complement fixation and by FcR-dependent phagocytosis. Thus, Env internalization is likely to have a broader role in lentiviral resistance to antibody responses directed against virus-infected cells than protection against ADCC alone.
Recent studies have also identified other mechanisms by which HIV-1-infected cells evade antibody responses. We, along with others, demonstrated that Vpu protects HIV-1-infected cells from ADCC by counteracting restriction by tetherin (12,13), an interferon-inducible transmembrane protein that inhibits virus release from infected cells (23). By preventing tetherin-mediated accumulation of budding virions on the cell surface, Vpu reduces the binding of Env-specific antibodies capable of directing the killing of infected cells by ADCC. Nef-and Vpu-mediated downmodulation of CD4 also affords resistance to ADCC (14,15). In this case, the loss of these viral gene products enhances the susceptibility of HIV-1-infected cells to antibodies specific for epitopes of gp120 normally occluded in the native Env trimer (14). Thus, CD4 downmodulation by Nef and Vpu appears to prevent the elimination of infected cells by antibodies directed to gp120 surfaces exposed by the formation of gp120-CD4 complexes at the plasma membrane. Together with the results reported here, these observations help to explain why antibody responses directed against virus-infected cells, like other mechanisms of immunity, ultimately fail to contain HIV-1 replication in chronically infected individuals.
We also tested mutations in potential trafficking motifs besides the membrane-proximal AP-2 binding site (24,25). While there is evidence that the conserved C-terminal dileucine motif has some effect on endocytosis of Env in HIV-1-infected cells (22), it does not impair virus infectivity (26) and has little effect on Env internalization by itself (27). Thus, it is perhaps not surprising that we saw no effect on surface Env levels and only small changes in susceptibility to ADCC when it was disrupted. Likewise, although previous findings describe no effect on endocytosis mediated by the additional YXX⌽ motifs at positions 768 and 795 in SIV mac 239 (2, 28), we decided to include them in our study because they are highly conserved among SIV isolates (25). In accordance with existing literature, we observed no significant effect on cell surface Env levels or susceptibility to ADCC when these motifs were disrupted. Our results therefore suggest that these motifs do not play a major role in the resistance of virus-infected cells to ADCC. This study describes a fundamental new role for the highly conserved endocytosis motif in the cytoplasmic tail of gp41 in protecting HIV-1-and SIV-infected cells from elimination by antibodies. Together with recent evidence that Vpu-mediated downregulation of tetherin and Nef-mediated downregulation of CD4 afford resistance to ADCC (12)(13)(14)(15), the picture that emerges is that the primate lentiviruses have acquired multiple complementary mechanisms to reduce the susceptibility of virus-infected cells to antibodies. These findings have important practical implications since they suggest that approaches for preventing Env internalization may enhance the efficacy of antibody-based vaccines and therapies designed to improve the immunological containment of HIV-1 replication in chronically infected individuals. | 4,242 | 2015-08-12T00:00:00.000 | [
"Biology"
] |
OmDet: Large-scale vision-language multi-dataset pre-training with multimodal detection network
The advancement of object detection (OD) in open-vocabulary and open-world scenarios is a critical challenge in computer vision. This work introduces OmDet, a novel language-aware object detection architecture, and an innovative training mechanism that harnesses continual learning and multi-dataset vision-language pre-training. Leveraging natural language as a universal knowledge representation, OmDet accumulates a"visual vocabulary"from diverse datasets, unifying the task as a language-conditioned detection framework. Our multimodal detection network (MDN) overcomes the challenges of multi-dataset joint training and generalizes to numerous training datasets without manual label taxonomy merging. We demonstrate superior performance of OmDet over strong baselines in object detection in the wild, open-vocabulary detection, and phrase grounding, achieving state-of-the-art results. Ablation studies reveal the impact of scaling the pre-training visual vocabulary, indicating a promising direction for further expansion to larger datasets. The effectiveness of our deep fusion approach is underscored by its ability to learn jointly from multiple datasets, enhancing performance through knowledge sharing.
Introduction
Object detection (OD) is one of the monumental tasks in computer vision (CV).Classical OD research has been focusing on improving the detector net-many small domain-specific datasets is much cheaper than creating a single large-vocabulary large dataset (Gupta et al., 2019).
On the other hand, joint training from multiple OD datasets with different labels faces two key technical challenges: (1) taxonomy conflict: each OD dataset is annotated with its pre-defined labels and a classic detector uses a fixed Softmax layer to classify object types (Ren et al., 2015).Such design forbids the possibility of learning from different label sets or dynamically adapting to new classes.(2) fore/background inconsistency: since the label set is different, then an object proposal may be considered as foreground in dataset A, while it is considered as background in dataset B. For example, an object "cat" is annotated in dataset A, but not in dataset B. Our study shows that this greatly hurts the multi-dataset performance of classic detectors since the RPN head is confused by the conflicting ground truth.
To address the above challenges, this work proposes a novel vision-language model, OmDet, for open vocabulary object detection and phrase grounding.The main architecture novelty of OmDet is its latent query-centric fusion module that combines information from visual and text features and the proposed training mechanism that can easily accumulate knowledge from OD/grounding datasets from various domains.Two versions of OmDet is pre-trained, including OmDet V1 which is purely pre-trained on a large number of OD datasets (more than 100 domains), and OmDet V2 which is additionally pre-trained on visual grounding data (Kamath et al., 2021).
The proposed method is evaluated on three downstream tasks: object detection in the wild (ODinW) (Li et al., 2022a), open-vocabulary detection, and phrase grounding (Plummer et al., 2015).Results show that OmDet is able to outperform all prior art, including the powerful GLIP (Li et al., 2022b) that is pre-trained on much larger datasets.Moreover, comprehensive model analysis is conducted to better understand the strength and limitations of OmDet.We conduct controlled study on joint training from four diverse datasets (COCO, Pascal VOC, and Wider Face/Pedestrian) and results show that our method is not only able to learn from all datasets without suffering from label and localization conflicts, but achieves stronger performance than single dataset detectors due to its share of knowledge among tasks.Also, we show that accumulating multiple datasets to expand to large vocabulary OD learning is an effective method to boost OmDet's zero/few-shot ability as well as parameter-efficient training performance (e.g., prompt tuning) In summary, the contributions of this our paper are four folds: • We present OmDet, a novel language-aware OD architecture with Multimodal Detection Network (MDN) that can learn from any number of OD and grounding datasets.
• Experiments show OmDet's state-of-the-art performance on well-known ODinW, open-vocabulary detection and phrase grounding benchmark.
• Experiments confirm the effectiveness of the proposed multi-dataset training by solving the label difference and fore/background inconsistency challenges.
• Experiments show that by scaling up visual vocabulary size via multidataset training, one can improve zero/few-shot and parameter-efficient fine-tuning.
Vision-Language Pre-training
One of the most studied topics of VLP is to pre-train massive image-text pair data.Recent advances in self-supervised learning have enabled models to learn rich representations from large-scale unlabeled data.
For example, CLIP (Radford et al., 2021a) learns to predict which text matches which image, resulting in a versatile model that can perform well on various vision tasks without task-specific supervision.ALIGN (Li et al., 2021) further scales up CLIP by using a noisy dataset of over one billion image alt-text pairs.However, these models mainly focus on vision-based tasks and neglect the interaction between multiple modalities during pretraining.To address this limitation, several studies propose to learn joint multi-modal representations of image content and natural language for vi-sion+language tasks (such as VQA and visual reasoning).Among them, OSCAR (Li et al., 2020), UNITER (Chen et al., 2020) and VILLA (Gan et al., 2020) adopt a two-stage approach: they first use an object detector (e.g., Faster R-CNN (Zhang et al., 2021)) to extract vision features, then they apply a multi-layer transformer (Vaswani et al., 2017) to the concatenation of the visual features and text features to learn joint embeddings.
Some studies propose to model visual input without relying on pre-trained object detectors.For instance, SOHO (Huang et al., 2021) uses a visual dictionary to extract compact image features from a whole image, which enables 10 times faster inference time than region-based methods.Similarly, ViLT (Kim et al., 2021) employs a vision transformer (Dosovitskiy et al., 2020) to capture long-range dependencies over a sequence of fixed-size nonoverlapping image patches, without using convolutional visual features.
Object Detection
Objection detection, one of the predominant tasks in computer vision, aims to detect bounding boxes and classes of object instances.It has significantly evolved through the contributions of massive research in recent years.There are two major categories of detectors: two-stage and one-stage methods.Two-stage methods consist of a region proposal network (RPN) and a region-wise classifier.Classic models include R-CNN (Girshick et al., 2014), Fast R-CNN (Girshick, 2015) and Faster R-CNN (Ren et al., 2015).One-stage methods eliminate the RPN stage and directly make final object predictions on the visual feature maps.Well-known systems include SSD (Liu et al., 2016), Yolo (Redmon et al., 2016) and RetinaNet (Lin et al., 2017b).Recently, end-to-end detectors such as DETR (Carion et al., 2020) have proposed to formulate the object detection task as a set prediction task.However, objection detection is often formulated as a closed-set problem with fixed and predefined classes and cannot handle object detection in the wild.To overcome the closed-set limitation, more realistic scenarios such as Multi-Dataset Object Detection (MDOD) and Open-Vocabulary Object Detection (OVOD) have attracted lots of attention.
Multi-Dataset Object Detection: MDOD focuses on increasing detectable object classes by training a single detector using multiple datasets.Traditional closed-set object detection demands training detectors on datasets with full annotations, and adding a new dataset means costly extra human annotations.Research on MDOD attempts to bypass the closed-set limitation, where a single detector is able to incrementally add object classes by adding new datasets with new classes.Yao et al., (Yao et al., 2020) proposes an MDOD framework with a preprocessed hybrid dataset and a datasetaware focal loss.(Zhao et al., 2020) designs a conflict-free loss to avoid the ambiguity between positive and negative samples.Detection Hub (Meng et al., 2022) unifies multiple datasets with a query-based object detector with natural language embedding.
Open-Vocabulary Object Detection: OVOD, a more ambitious goal beyond the closed-set problem, refers to the capability of only training on annotated datasets and generalizing to unseen novel classes.Recently, OVOD has made such progress with the utilization of multi-modal vision-language pre-trained models (Li et al., 2022b) (Zhou et al., 2022b) (Kamath et al., 2021).RegionCLIP (Zhong et al., 2022) generates pseudo-labels for regiontext pairs from caption datasets to perform regional vision-language pretraining and transfer to OVOD.ViLD (Gu et al., 2021) proposed a two-stage open-vocabulary detector, which distills embeddings from teacher model CLIP (Radford et al., 2021b) or ALIGN (Jia et al., 2021).With inspiration from CoOp (Zhou et al., 2022a), DetPro (Du et al., 2022) introduces a technique to learn continuous prompt embedding that improves the performance of ViLD.OWL-ViT (Minderer et al., 2022) transfers the pre-trained imagetext model to the object detection by adding downstream detection heads and fine-tuning on OD datasets.
Object Detection as Grounding: Phrase grounding refers to the process of identifying the relationship between individual phrases within a sentence and specific objects or regions depicted in an image (Kamath et al., 2021;Deng et al., 2021).GLIP (Li et al., 2022b) proposed that object detection can be viewed as a special case of phrase grounding.The authors of GLIP concatenate object types as a single string and ask the model to ground objects to word spans.This setup enables unified modeling between phrase grounding and object detection, and the resulting system achieves strong performance in long-tail object detection and zero-shot detection.
Unlike previous grounding-based methods, the proposed method is designed to learn from an arbitrary number of object detection (OD) datasets, which does not necessarily need to train on grounding data.This ability is valuable for real-world scenarios, e.g., creating a multi-task OD model that simultaneously learns from many independent OD datasets.
Our Approach
Before getting into the details of the proposed system, we first define the problem formulation.OmDet is designed for language-conditioned detection.Let V be a large vocabulary of object types that OmDet can potentially detect.A task T = {w 1 , w 2 , ...w k } is a set of k object types that the model should detect in its forward path, where w ∈ V .Note that the size of T can be dynamic ranging from 1 to K, where K is the maximum supported number of object types in a single inference run.For the visual grounding setting, T is the query sentence that contains K word tokens.Meanwhile, Let L be a set of natural language labels.In the object detection case, L = T .For the grounding cases, L is the set of entities that appeared in caption T .Then given an input image x, a task T , and a label set L, the model is expected to detect all objects mentioned in T from x. Since T and L are not fixed, an ideal model can dynamically adapt its detection targets conditioned on the task.
Model Architecture
Following the above design principle, OmDet is introduced, a task-conditioned detection network that can learn from infinite combinations of tasks.It is composed of a vision backbone, a task encoder, a label encoder, and a multimodal detection network.The overall structure is illustrated in Fig1.The following will describe each component in detail.
Vision Backbone Starting from the initial image x img ∈ R 3×H 0 ×W 0 (with 3 color channels), let the vision encoder f v be a conventional Convolutional Neural Network (CNN) (Liu et al., 2022) or Vision Transformer (e.g.Swin Transformer (Liu et al., 2021)).The vision encoder generates a lower-resolution visual feature map f ∈ R C×H×W at each output layer.Then Feature Pyramid Network (FPN) (Lin et al., 2017a) is used to aggre-gate information from top to bottom and output a set of visual feature maps {P 2, P 3, P 4, P 5}.
Task Encoder and Label Encoder The term "task" refers to a natural language query designed to expand various text-aware vision tasks; (e.g., "Detect objects: {the specified list of objects that we aim to identify}") The term 'label' refers to the language phrase output that is intended for detection purposes.The task set T = {w 1 , w 2 , ...w k } ∈ R k×V is set of natural language words.Then a task encoder f t or a label encoder f l is a transformer model that encodes the task set T as a natural language sentence, and outputs a set of contextual word embeddings, i.e.
where d is the contextual word embedding dimension size.We use pre-trained transformer-based language models, e.g.CLIP (Radford et al., 2021a) to initialize the task and label encoders.2021), we deploy deep fusion to combine information from the image and current task early on, in order to achieve strong performance.We are inspired by the Sparse-RCNN (Sun et al., 2021) network design and developed an iterative querybased fusion mechanism that fuses text features and visual features into latent queries.Figure 3 illustrates the differences between our method versus prior art.Let Q ∈ R N ×d be a fixed small set of learnable proposal features.The N denotes the number of proposal features.It is a set of high-dimensional (e.g., d = 256) latent features that capture the rich information of a potential instance, by combining data from the vision backbone and contextual task embedding from the task encoder.Also, let B ∈ R N ×4 be a set of learnable one-to-one proposal boxes assigned to each feature.Then given the FPN output and task/label encoder output, the initial MDN operates as the following: where T i is the task embedding at iteration i and L is the label embedding.
Note that MDN can be stacked to iterative refine its output the same as Sparse-RCNN, with the key difference that T i is fused with the proposal feature before the Dynamic Convolution layer and also T i is also iteratively updated at each run of MDN block.This enables the network to learn to adjust the task embedding and the proposal embedding jointly and adapt both object localization and classification heads conditioned on the given task.Figure 2 shows the process by which MDN first combines information between latent queries and language embedding via MHSA, and then infuses visual features with DynamicConv.Note that we can easily adapt MDN to other query-based detectors such as DETR Carion et al. (2020), in which the DynamicConv operation is replaced by a CrossAttention module.
With the utilization of deep fusion between image features and task embedding at MDN, the challenge of fore/background inconsistency is solved.Other models like (Zhou et al., 2022b) (Minderer et al., 2022) try to solve the fore/background inconsistency by training a perfect RPN to find all possible objects, which is hard to achieve.Our method applies deep fusion at an early stage to help the model be conscious of fore/background according to task embedding, and therefore properly switching fore/background among different tasks.To handle the taxonomy conflict, the label encoder is applied to get the text embedding of the target label, then the label embedding is passed to the classification stage to eliminate naming differences.Taxonomy conflict is solved by projecting the target label into embedding space since the same object with different naming will be close to each other.
Model Training
Set Prediction Loss Given the proposed model, it uses set prediction loss (Carion et al., 2020) on the fixed-size set of predictions of classification and box coordinates.Set-based loss produces an optimal bipartite matching between predictions and ground truth objects using the Hungarian algorithm.The matching cost is defined as follows: Here L cls is focal loss (Lin et al., 2017b) of predicted classifications and ground truth category labels, L L 1 and L giou are L1 loss and generalized IoU loss (Carion et al., 2020) between normalized center coordinates and height and width of predicted boxes and ground truth box, respectively.λ cls , λ L 1 and λ giou are coefficients of each component.The training loss is the same as the matching cost except that only performed on matched pairs.The final loss is the sum of all pairs normalized by the number of objects inside the training batch.
Task-Sampling Strategy For object detection datasets, in order to simulate a diverse set of tasks for meta-learning during training and also enforce the model to condition its output on a given task, a novel task sampling strategy is used during training.
1. Let the max size of a given task be K, for an image x from a dataset d in the mini-batch, we first sample k ∈ [1, K] with a uniform distribution.
2. Let the number of unique object types in x be m, if m > k, then only a random subset of k object types are kept and the extra annotations are removed for this mini-batch.If m < k, then additional negative object types are randomly selected from the vocabulary V of dataset d. 3. The model is trained with the above-sampled task and ground truth annotations.
With the above method, each image in every mini-batch will have a different set of tasks to learn from.When we learn from a large-vocabulary object detection dataset, e.g., LVIS, which contains 1200 unique object types, the unique combination of task size then it produces 1.34E43 possibilities, a quite large number.Experiments show that the proposed training strategy serves the purpose well, and yields models that perform task-conditioned object detection.
For learning from phrase grounding dataset, the task T is simply the corresponding caption of the image.The label set L is the set of entities that appeared in the caption.However, since there are only a few entities in each caption, learning of L cls becomes too easy.Therefore, we randomly select from other entities in the dataset to create a label set up to K classes to increase the difficulty of learning.This method is proven to be effective in improving performance on phrase grounding in later experiments.
Comparison to Grounding-based Method
Our proposed architecture, the Multimodal Detection Network, has several strengths over traditional approaches that directly fuse text and vision features.Instead, our model fuses latent queries with text features, leading to the following advantages: Deep fusion for any query-based OD: early VLP work, e.g., ViLD (Gu et al., 2021) and Detic (Zhou et al., 2022b), use shallow fusion for object detection, i.e. use text embedding only for classification, which cannot solve fore/background conflicts.Meanwhile, prior deep fusion models, e.g., MDETR (Kamath et al., 2021) and GLIP (Li et al., 2022b)), use specialized cross-attention architecture to fuse the text and visual features.Our method can be applied to any query-based OD architecture, e.g.DETR, Sparse-RCNN, without the need for model change.
Inference speed and performance: visual grounding MDETR (Kamath et al., 2021) and TransVG (Deng et al., 2021) models encode one class at a time for OD and suffer from slow inferences speed, e.g.10s/image for MDETR.Also, MDETR uses a transformer to fuse images with text, which cannot scale up to multi-scale features due to the complexity of self-attention.Our method deals with fixed-size latent queries, which are independent of visual features.Thus, our method is able to predict many classes with significant speed up with on-par or better performance.
Implementation Details
We implement OmDet with the following settings: For text embeddings, CLIP-B/16 text encoder (Radford et al., 2021b) is used throughout the study.We did not use the prompt template as used in study (Gu et al., 2021), i.e. encoding object names in a template a photo of {}.This is because preliminary studies show no major difference between using versus not using the prompt template.Furthermore, the preliminary study also suggests there are no significant differences between using singlemodal language models, e.g.BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019), versus multimodal-language models e.g.CLIP.We suspect this is because object detection does not involve complex language understanding.
The task and label encoders share the same text encoders.On top of the text encoder, two independent Transformers layers (Vaswani et al., 2017) are used to further dedicated encoding for task input and label input.Study shows that the set encoding is able to improve OmDet's performance.
For visual backbones, both Swin Transformers (Liu et al., 2021) and Con-vNeXt (Liu et al., 2022) are used in the experiments.A standard FPN (Lin et al., 2017a) is used to extract a four-level feature map form the visual encoders.Both backbones are pre-trained on ImageNet 21K data (Ridnik et al., 2021).Preliminary studies found that ConvNeXt usually performs on par or better than Swin Transformers.Therefore, we use ConvNeXt as the default choice.
Lastly, the MDN network utilizes MHSA to fuse information from visual input and text input to latent queries.We equip MDN with 300 latent queries and we use ROIAlignV2 (He et al., 2017) as the ROI Pooler to extract region features from the visual backbone.6 sequential MDN blocks are cascaded to create the final bounding boxes and classification prediction.
Large-scale Pre-training
Two versions of large-scale pre-training are conducted.
Large-scale OD Pre-training (OmDet V1): in this setting, we accumulate a large number (104) of object detection datasets for pre-training to show that OmDet is able to accumulate knowledge from many OD datasets without suffering from fore/background and label inconsistency challenges.Pre-training datasets include COCO (Lin et al., 2014), Object365 (Shao et al., 2019), LVIS (Gupta et al., 2019), PhraseCut (Wu et al., 2020) and Roboflow 100 (Ciaglia et al., 2022) Large-scale OD & Grounding Pre-training (OmDet V2): In the second version, we exclude any images related to COCO and LVIS datasets from pre-training since we will test zero-shot performance on these two datasets.In addition to large-scale OD multi-dataset pre-training, OmDet is able to horizontally expand to the non-OD type of training data.Specifically, we include the GoldG grounding dataset curated by Kamath et al. (2021), which includes 1.3M pairs of image-caption data with grounded entities.Data details are described in Model Training: For OmDet models, the initial learning rate is 5e-5 and it decays at 70% and 90% of total iteration steps by 0.1.ConvNeXt Base backbone is used with a 6-layer MDN head.The batch size is 40 and the maximum number of detections per image is 300 and K is set to 80.All of the proposed models are pre-trained for 36 epochs using NVIDIA A100 GPU cluster and then fine-tuned on the downstream data.
Downstream Tasks
We focus on three types of downstream tasks for evaluation: Object Detection in the Wild: object detection in the wild test a model's ability to adapt to various different domains with drastically different label sets.ELEVATER (Li et al., 2022a) is a new object detection benchmark that is composed of 35 diverse real-world challenging domains with full-shot, few-shot and zero-shot training settings.Note that there are two variations of data used in prior work (Li et al., 2022b).The full version includes 35 domains, which we will refer to as ODinW35, and the second version only includes 13 out of 35 domains, which we will refer to as ODinW13.The evaluation metric is AP.
Open-vocabulary Object Detection: open-vocabulary detection tests models' ability to recognize a large number of objects with types that are not included in the training.Zero-shot performance on COCO (Lin et al., 2014), LVIS (Gupta et al., 2019), and ODinW (Li et al., 2022a) are commonly used as the benchmark.The evaluation metric is AP.
Phrase Grounding: For phrase grounding on Flickr30k (Plummer et al., 2015), we do not further fine-tune the model after grounding pre-training, and just directly evaluate on the Recall@ 1,5,10 metrics.
We provide the detailed settings of the baseline models used in our experiments, including information on the fusion (deep vs. shallow), backbones, number of parameters and pretraining data (Table 3)
Results on Object Detection in the Wild
In our evaluation of ODinW, we compared the zero-shot, few-shot, and full-shot scores of GLIP-Tiny, DyHead-Tiny, DINO Swin-Tiny, and OmDet (Table 4).When compared to state-of-the-art models that were trained with the same backbone size, our OmDetV1-T model achieved the highest AP scores on few-shot and full-shot evaluations.Furthermore, we trained OmDet V1-T and OmDet V1-B under the same settings, but with different backbones.OmDet V1-B outperformed all other models and achieved stateof-the-art results on zero-shot, few-shot, and full-shot evaluations.Note that the word definition of Wiktionary is used as a knowledge source for OmDet v1-B on the zero-shot setting.
Results on Open-Vocabulary Detection
We evaluated the zero-shot performance of different models on several open-vocabulary detection datasets, including COCO Val, LVIS MiniVal, ODinW13, and ODinW35 (Table 5).Our proposed models, OmDetV1-B and OmDetV2-B, achieved competitive results on the evaluated datasets.Specifically, OmDetV2-B significantly outperformed all other models on all datasets, achieving 9 points higher AP than the previous state-of-the-art model (GLIP-B) with the same base backbone on COCO Val.Moreover, our model showed exceptional performance on rare objects in LVIS Mini-Val, outperforming the previous state-of-the-art model OWL-B by almost 8 points (20.8 vs. 27.92).These results demonstrate the effectiveness of our proposed models for open-vocabulary detection tasks, particularly on rare object detection where data scarcity is an issue.Thus, our model achieved the highest performance on ODinW, which is a dataset that contains a large number of rare objects and reflects real-world applications.In the terms APr, APc, and APf, the letters r, c, and f denote rare, common, and frequent, respectively.
Our model achieved a competitive performance on this task, with Recall@1, Recall@5, and Recall@10 scores of 85.1, 96.2, and 97.5, respectively.Our model's performance is only slightly lower than that of the state-of-theart model, GLIP-B (85.7 vs. 85.1) at Recall@1, this is due to our approach of fixed-size latent queries independently over contexts to reduce complexity.However, we achieved higher scores than GLIP-B at Recall@5 and Recall@10 with significantly more efficient computations than the traditional approaches.
These results demonstrate the effectiveness of our OmDet for the task of phrase grounding as well and highlight the importance of incorporating latent query deep fusion in both object detection and phrase grounding.
Ablation and Analysis
To further investigate the behavior of our proposed method, we conducted several follow-up studies: 1) an analysis of the efficacy of deep fusion, 2) an analysis of the effect of pre-training, and 3) visualizations of language-aware object detection.We verify that the proposed deep fusion mechanism can effectively learn from multiple object detection datasets without suffering from the task conflict challenge.Additionally, we show the benefit of the proposed multiple datasets training over a single dataset.
Analyze the Efficacy of Deep Fusion
For MDOD, we follow the experimental setting from (Yao et al., 2020) and choose COCO (Lin et al., 2014), Pascal VOC (Everingham et al., 2010), WIDER FACE (Yang et al., 2016) and WIDER Pedestrian (Loy et al., 2019) as joint-training datasets.Note that COCO is the larger data with 118K images while the other 3 datasets are almost 10 times smaller.Also the COCO dataset has a diverse set of categories that cover the classes in Pascal VOC and WIDER Pedestrian.WIDER Face is the only dataset that has "face" class.Therefore, these four datasets serve as a great testing bed for MDOD study.Three baselines are served as baselines.First, we include Sparse R-CNN (Sun et al., 2021) as the baseline due to its strong performance and similarity to OmDet in terms of model structure.Then we create OmDet-Single, OmDet-Shallow for the ablation study.
• OmDet-Single: To compare the performance on single datasets with Sparse R-CNN, we train OmDet on the four datasets separately.Since they are only trained with a single dataset, so it cannot benefit from the proposed multi-dataset training.
• OmDet-Shallow: We also train an OmDet-Shallow model by removing the task encoder and MDN, which degenerates the localization network to Sparse R-CNN, and only utilize the language feature for the final label classification.This is similar to previous work such as Detic(Zhou et al., 2022b).
We use Image Net (Deng et al., 2009) pre-trained Swin Transformer Tiny (Liu et al., 2021) as the visual backbone and use CLIP ViT-B/16 as language en-coder for OmDet.The same Swin Transformer is used as the backbone for Sparse R-CNN.All models are trained with 12 epochs.The initial learning rate is set to 5e-5 for OmDet and 2.5e-5 for Sparse R-CNN.
OmDet vs. Sparse R-CNN on single Dataset: First, we demonstrate the validity of our framework on classic OD tasks.
As shown in Table 7, OmDet-Single gets higher AP scores on COCO, PASCAL VOC and WIDER FACE than Sparse R-CNN under the same training setting, and is only about 1 point lower on WIDER Pedestrian.These results prove our model with the novel language-aware OD architecture still maintains the same or better performance on single dataset OD using the same number of trainable parameters, but since it is only trained with a small set of vocabulary, it does not have the open-vocabulary or few-shot capability.
OmDet vs. OmDet-Single: The only difference between OmDet and OmDet-Single is that OmDet is joint-trained on all four datasets by utilizing the proposed language-aware OD architecture.Table 7 shows that the AP scores of OmDet are significantly higher than OmDet-Single on PASCAL VOC (+15.52 AP), WIDER FACE (+7.33 AP), and WIDER Pedestrian (+12.85AP).Moreover, OmDet shows better performances than Sparse R-CNN on all datasets.These results confirm that OmDet possesses the capability of multi-dataset training by solving taxonomy conflicts and fore/background inconsistency.Moreover, knowledge sharing in joint training improves the overall detection performance, especially for the ones with fewer training samples.
OmDet vs. OmDet-Shallow: Lastly an ablation study is used to verify the contribution of the proposed MDN block.OmDet's fusion mechanism is deep since the task embedding is combined with visual features early on and influences both localization and classification.On the other hand, OmDet-Shallow's fusion mechanism is shallow, i.e., it only utilizes the label embedding in the final layer of object classification.
Table 7 shows that OmDet performs stronger and OmDet-Shallow only partially solve the MDOD challenges.OmDet-Shallow achieves good performance on PASCAL VOC, WIDER FACE, and WIDER Pedestrian compared with OmDet-Single.This is because OmDet-Shallow resolves the taxonomy conflict challenge and enables semantic sharing among object label embeddings, similar to Detic (Zhou et al., 2022b).
However, OmDet-Shallow fails on COCO with a low AP score, since it cannot resolve the fore/background inconsistency challenge.Since COCO has 80 categories, which is much larger than the other three datasets, many of its objects are considered as background in the other three datasets.Therefore, the low AP is caused by incorrectly detecting COCO objects as background.We visualize outputs of OmDet-Shallow and OmDet on COCO images in Figure 4, which confirms our hypothesis that OmDet-Shallow detects many objects that are not in Pascal VOC and Wider Face/Pedestrian as background.More examples can be found that although OmDet-Shallow correctly detects all pedestrians in the last images, it misses object "Skis".Unlike OmDet-Shallow, OmDet has benefited from deep fusion and detects all the images correctly.8.The aim of this setup is to examine the relationship between the number of visual concepts in the pretraining data and the performance of the model on downstream tasks under various fine-tuning settings.Note that we used OmDet ConvNeXt-T as the backbone architecture for our ablation studies.
The effectiveness of Zero/Few-Shot : As shown in Table 8, adding more pre-train datasets yields significant improvement in zero-shot settings.Specifically, adding the object365 dataset gives an absolute gain of 3.7 points on the average mAP.Surprisingly, adding LVIS to the pre-train data hurts performance by 1.1 points.We speculate that the performance drop is due to the noisy and incomplete annotations of LVIS dataset.Adding GCC dataset to the pre-train corpora yields another huge gain, leading the zeroshot performance to 16.0 (compared to 9.8 for OmDet-C).There are several promising directions to further improve the zero-shot performance OmDet, including unfreezing the text encoder during pre-training and incorporating phrase grounding data with contextual text information in pre-training.We leave them to future research.
Meanwhile Parameter-efficient Fine-tuning: As large-scale pretraining models get significantly larger, e.g., more than 1B parameters, the cost to fine-tune (FT) the entire model becomes prohibitive for low-end GPUs.Parameterefficient fine-tuning is designed to alleviate this challenge by only tuning a very small proportion of the entire model.In this paper, we explore two options: Head-only Tuning and Prompt Tuning.
Experimental results show that large-scale multi-dataset pre-training is crucial for successful parameter-pretraining (Table 8).For Head-only FT, the performance drop is reduced from 11.3% for OmDet-C to only 6.1% for OmDet.The same trend is observed for Prompt FT, in which the performance drop compared to full-model tuning is reduced from 65.9% to 45.5% from OmDet-C to OmDet. Figure 5 also visualizes the trend of AP vs. the vocabulary size in pre-training (log-scale).The apparent up-going curve can be observed as more visual concepts are included during pre-training.This suggests that: (1) multi-dataset pre-training enables the accumulation of a large number of visual concepts, which leads to a stronger backbone that extracts generalpurpose visual features (supported by head-only FT results).
(2) the diversity in language is crucial for successful prompt tuning such that the entire model output can be controlled by the task embedding only (less than 1% of the parameters of the entire model).
Visualization of Language-Aware Detection
Lastly, we conducted qualitative visualizations to showcase the effectiveness of our proposed language-aware object detection model in accurately localizing and labeling objects based on natural language inputs (Figure 6).By inputting different tasks, e.g., [Sandwich, Tobacco Pipe] vs. [Lighter, Bottle], OmDet can dynamically adapt its object localization and classification conditioned on the given task.Figure 6 visualizes the intermediate output at each stage of the MDN block.We found that the model learns to place its proposal boxes as the whole image for the initial stage and quickly narrows its focus from the initial whole-image boxes to the objects of interest quickly in 2-3 steps in a top-down search manner.The later stage output continues to refine its output and confidence scores (e.g., with less duplicated bounding boxes and more certain confidence).Table 10: Inference Speed on LVIS datasets, 12K labels Table 10 presents a comparison of the inference speeds across different models.Visual grounding MDETR model encodes one class at a time for OD and suffer from slow inferences speed, e.g.10s/img for MDETR.Also, MDETR uses transformer to fuse image with text, which cannot scale up to multi-scale features due to complexity of self-attention.In the GLIP method for object detection, objects are identified by combining all their labels into a single descriptive sentence.While this method proves effective in certain contexts, it encounters limitations when applied to datasets with an extensive labels, such as those found in LVIS.The reason for this slowdown is that creating one big sentence out of many labels creates unnecessary links between the labels, which complicates the detection process and reduces speed.Our MDN deals with fixed-size latent queries, which are independent of visual features.Thus, our method is able to predict many classes with significant speed up with on par or better performance.
Different Iterative Fusion
We have explored the influence of the iterative number on the multimodal detection network based on iterative fusion in the ODinW datasets.Our investigation involved varying the number of heads in the ConvNext-B architecture, specifically analyzing the performance impact of 1, 3, and 6 heads.The results of these experiments are summarized in Table 11.We observe a substantial improvement when increasing the number of heads from 1 to 3, indicating that additional iterations enhance the network's capability to discern complex patterns in data.This improvement continues, though at a reduced pace, when expanding from 3 to 6 heads.This suggests that while iterative fusion brings benefits in handling complex scenes.12, we present a comprehensive quantitative comparison between our proposed model, OmDet, and several state-of-the-art object detection models, namely ViLD Gu et al. (2021), CORA Wu et al. (2023b), and BARON Wu et al. (2023a).The evaluation is conducted on the widely recognized COCO and LVIS benchmarks, utilizing the open-vocabulary setting as conducted in VILD.The evaluation metrics employed for the comparison include AP 50 novel and AP 50 base for COCO, as well as AP r (the AP of rare categories) for LVIS.The AP 50 novel score evaluates the model's performance on novel objects, which are not seen during the training phase, while the AP 50 Base score assesses detection on the base categories, which are present in the training dataset.Additionally, the AP r score provides valuable insights into the model's performance on the 337 rare categories of LVIS that were not part of the training categories.These evaluation metrics collectively offer a comprehensive assessment of the proposed model's effectiveness across different open-vocabulary object detection scenarios and dataset characteristics.As illustrated in the
Conclusion
This work proposes to advance zero/few-shot OD via continual pre-training from a large number of OD datasets by solving the two key technical challenges: Taxonomy conflict and Fore/background inconsistency.OmDet proposes a novel multimodal detection network that is able to do a fusion of natural language prompts with visual features for language-augmented object detection.Study results confirm the efficacy of OmDet for multi-dataset learning and large-scale pre-training as a foundation model.Our approach OmDet achieved state-of-the-art performance on OV-COCO with a notable AP 50 novel score of 75.17,AP 50 base score of 70.79, and an APr score of 24.65 on OV-LVIS, thereby significantly surpassing competing models such as BARON and CORA.We also show that enlarging the vocabulary size via multi-datasets pre-training effectively improves zero/few-shot learning and parameter-efficient fine-tuning.OmDet achieved state-of-the-art performance on 35 downstream tasks from ODinW.Future research will focus on improving OmDet by exploring better text prompt encoding methods and pre-training strategies that will improve zero-shot detection performance and prompt-tuning performance.
Figure 1 :
Figure 1: Overview of OmDet Architecture.The proposed Multimodal Detection Network iteratively fuses vision and language features into latent queries for object detection.
Figure 2 :
Figure 2: Network architecture for the Multimodal Detection Network (MDN), simplified here for illustration purposes.
Figure 3 :
Figure 3: Comparison with other frameworks.(a) Shallow fusion that only utilizes text information for object classification.(b) Deep fusion that fuses visual and text in the backbone before entering the object detection head.(c) Deep latent fusion (ours) utilizes latent queries to fuse multimodal information, enabling adaption to any querybased OD architecture.
Figure 5 :
Figure 5: Vocabulary size used in pre-training vs. the AP score of fine-tuning on ODinW with head-only and prompt tuning.. X-axis is in log-scale.
Figure 6 :
Figure 6: Illustration of language-aware OD, where a single model can generalize (without fine-tuning) to any input tasks on the fly in the form of natural language.
Table 1 :
. Data details are described in Table1Pre-train data used in large-scale OD pre-training, resulting in OmDetV1.
Table 3 :
Baseline models and their training setup.
Table 4 :
Comparison between OmDetV1 and other models, on average AP of zero-shot, few-shot (3-shot), and full-shot on ODinW35.
Table 6 :
Zero-shot Performance on Flickr30K val for Phrase Grounding.
Table 7 :
MDOD training results on four datasets.OmDet is able to resolve task conflict issues in MDOD and achieves higher performance compared to single dataset models.
Table 8 :
Average AP of zero-shot, full-model, head-only and prompt finetuning on 35 downstream tasks in ODinW.The gray text shows the performance drop of parameter-efficient tuning compared to full-model tuning.
Table 9 :
, the 35 downstream tasks in ODinW come with different training data sizes, varying from only 17 training images to more than 32K training images.Therefore, we divide the 35 tasks into three categories: (1) Small-shot (8 tasks): tasks with less than 200 training images (2) Medium-shot (13 tasks): tasks with between 200 to 2000 training data (3) Big-shot (14 tasks): tasks with more than 2000 training images.Results with full-model fine-tuning are summarized in Table9.Results show that large-scale multi-dataset pre-training is particularly effective for small-shot and medium-shot tasks with limited in-domain training data.Especially for small-shot datasets, OmDet outperforms OmDet-C with 10.99 absolute AP points.Whereas for Big-shot tasks, the advantages of pre-training become less evident.Average AP of full-model fine-tuning on 35 downstream tasks in ODinW for Small-shot, Medium-Shot and Big-Shot tasks.
Table 11 :
Comparison of AP and Inference Speed with Different Iterative Numbers 7.3.Comparison with State-of-the-Art Methods on Open-Vocabulary Benchmarks In Table
Table 12 :
table, OmDet significantly outperforms the other models across all three metrics.With an AP 50 novel score of 75.17, an AP 50 Base score of 70.79, and an AP r of 24.65, OmDet demonstrates superior detection capabilities for both novel and base objects.Comparative analysis of recent object detection models on OV-COCO and OV-LVIS | 9,158.8 | 2022-09-10T00:00:00.000 | [
"Computer Science"
] |
Accelerating Corrosion of Pure Magnesium Co-implanted with Titanium in Vivo
Magnesium is a type of reactive metal, and is susceptible to galvanic corrosion. In the present study, the impact of coexistence of Ti on the corrosion behavior of high purity Mg (HP Mg) was investigated both in vitro and in vivo. Increased corrosion rate of HP Mg was demonstrated when Mg and Ti discs were not in contact. The in vivo experiments further confirmed accelerating corrosion of HP Mg screws when they were co-implanted with Ti screws into Sprague-Dawley rats’ femur, spacing 5 and 10 mm. Micro CT scan and 3D reconstruction revealed severe corrosion morphology of HP Mg screws. The calculated volume loss was much higher for the HP Mg screw co-implanted with Ti screw as compared to that co-implanted with another Mg screw. Consequently, less new bone tissue ingrowth and lower pullout force were found in the former group. It is hypothesized that the abundant blood vessels on the periosteum act as wires to connect the Mg and Ti screws and form a galvanic-like cell, accelerating the corrosion of Mg. Therefore, a certain distance is critical to maintain the mechanical and biological property of Mg when it is co-implanted with Ti.
Magnesium and its alloys have gained attention in recent decades as metallic biomaterials due to their excellent biocompatibility, mechanical properties and biodegradability [1][2][3] . Numerous studies have focused on the orthopedic applications of Mg because its elastic modulus is similar to that of bone, which minimizes the stress shield effect 4,5 . In addition, the released Mg 2+ ions can stimulate new bone formation 6,7 . Mg-based bone screws, plates, and intramedullary nailing systems have been proven to be capable as degradable implants [8][9][10] . Currently, several pilot clinical trials have been performed. Windhagen et al. 11 demonstrated that degradable MgYREZr screws did not cause foreign body reaction, osteolysis, or systemic inflammatory reaction and were equivalent to titanium screws for the treatment of mild hallux valgus deformities. Zhao et al. 12 fixed vascularized bone graft with high purity Mg screws in patients with osteonecrosis of the femoral head and found that Mg screws provided promising bone screw fixation and presented considerable potential for medical applications.
In the clinic, different metallic biomaterials might be co-implanted to maximize the therapeutic effect. For example, co-implanted Ti-6A1-4V and Co-Cr alloys have been used in total hip arthroplasty (THA) since the 1990s 13 . And in dentistry, Ti materials are often selected as endo-osseous implants with other alloys served as suprastructure 14 . This could also occur to Mg biomaterials in orthopedic applications. In a recent clinical study, Yu et al. 15 used vascularized iliac grafting, together with commercial cannulated compression screws and magnesium screws to treat displaced femoral neck fractures in young adults. However, it is well known that the corrosion behavior of Mg would be changed if it is in contact with other metal, or with the β -phase in Mg alloy 16 . The accelerated corrosion rate of Mg would result in loss of mechanical properties and even failure of the orthopedic implants. For example, a few years after the Ti-6A1-4V and Co-Cr alloys were used in THA, some researchers observed significant corrosion in the head-neck taper region 17,18 . Other laboratory experiments drew the conclusion that Ti-6A1-4V, Co-Cr-Mo coupled with stainless steel can be regarded as clinically unsafe 19 Mg and its alloys are especially susceptible to galvanic corrosion because of their inferior ability to form a compact oxide on surface 20 . In an earlier study, Lambotte 21 reported a case in which an iron wire cerclage at the fibula and Mg disc with six steel screws were inserted at the tibia. It was observed that one day after the operation, the patient experienced extensive subcutaneous gas cavities, local swelling and pain. Although it is well known that the corrosion rate and corrosion behavior of Mg are fatal to the implantation, unfortunately, few studies have addressed the change of the corrosion rate and corrosion behavior of Mg under conditions in which Mg and another metal were co-implanted in vivo. Herein, the aim of this study is to investigate the effect of titanium screws co-implantation on the corrosion behavior of pure magnesium, as well as its impact on the osteogenesis.
Materials and Methods
Materials preparation. The extruded high-purity Mg (HP Mg, more than 99.98 wt.%; 0.002 wt.% Si; 0.0015 wt.% Fe; 0.0008 wt.% Al; 0.0008 wt.% Mn; 0.0002 wt.% Ni; 0.0003 wt.% Cu) and Ti (TA1ELI, 99.8 wt.%) used in these experiments were supplied by Suzhou Origin Medical Technology Co. Ltd., China. The HP Mg and Ti disc samples with a diameter of 7.5 mm and a thickness of 1 mm were used in the immersion experiments in vitro. The discs were ground with SiC paper up to 1200 grit followed by ultrasonically rinsing with 100% ethyl alcohol. In the in vivo experiments, the HP Mg and Ti screws with an outer diameter of 2.0 mm, inner diameter of 1.6 mm, screw pitch of 0.6 mm and length of 10.0 mm were prepared. The screws were sterilized with 25 kGy of 60 Co radiation. Immersion test. HP Mg disc was fixed on the plastic mold, and Ti disc was fixed on another mold at a distance of 5 or 10 mm. Then, the mold was immersed in 250 ml of phosphate buffered saline (PBS, prepared as described by Lewis AC et al. 22 ) at 37 °C. The HP Mg and Ti discs directly connected with the copper wire were designated as Group 0, while Group 5 and Group 10 represents the HP Mg and Ti discs fixed at a distance of 5 and 10 mm, respectively. The two HP Mg discs that were fixed at a distance of 10 mm were used as the control group. After 1 week of immersion, the samples were removed from PBS and ultrasonically rinsed with 180 g/L chromic acid and a 10 g/L AgNO 3 solution followed by distilled water and were dried with air flow. Surface morphology was analyzed by scanning electron microscopy (SEM, JEOL 7600). The samples were weighed and the weight loss rate (R WL ) was calculated as per formula (Eq. 1). with an average weight of 286 g (240-328 g) were used. All rats were anesthetized by 3% pentobarbital sodium (0.1 ml/100 g body weight). Surgical site was sterilized with povidone iodine, and the left leg was shaved and exposed via the anterolateral approach. Two parallel transcortical implantation beds with a diameter of 1.8 mm were pre-drilled separately on femoral diaphysis, with a spacing of 5 or 10 mm. Then, the HP Mg screws were implanted at the distal end of femur after countersinking with a drill bit tap. In the experimental group, a Ti screw was implanted at the proximal end. The Ti screw and HP Mg screw with a spacing of 5 or 10 mm was named MT 5 and MT 10, respectively. In the control group, another HP Mg screw was implanted at the proximal end. The two HP Mg screws with a spacing of 5 or 10 mm were named MM 5 and MM 10, respectively. All implants were tolerated by the rats, and no antibiotics were given. The rats had normal activity, and no infections were observed post operation.
Micro-CT scan. The rats were sacrificed at 2, 4, and 8 weeks post-operation. Micro-CT scan was conducted using a Laboratory Micro-CT Scanner eXplore RS 80 (GE Healthcare, Little Chalfont, UK). The X-ray tube was set at 80 kV and 450 μ A with a scan resolution of 45 μ m and exposure time of 400 ms. 3-D reconstruction of the HP Mg screws was conducted via Micro View 2.2 Advanced Bone Analysis Application software (GE Health Systems, Waukesha, WI, USA). The volume of the remaining HP Mg screws was measured, and the volume loss ratio (R VL ) was calculated as per formula (Eq. 2).
where V 0 is the initial screw volume, and V 1 is the residual screw volume.
Pullout test. Pullout test was performed by Material Testing Machine (Shanghai Baihe Instrument
Technology Co. Ltd., China). The max load was recorded.
Histological test. The rat femurs were collected at 2, 4 and 8 weeks post-operation; fixed in 4% formalin for 3 days; and were then dehydrated in graded ethanol followed by methyl methacrylate embedding. A low speed precision cutting machine (DTQ-5, HOVKOX, China) was used to perform lengthways sectioning parallel to the longitudinal axis of both the femur and screws. Sections were reduced to a thickness of 90 μ m by an EXAKT micro-grinder system (EXAKT, Germany). The sections were stained with toluidine blue, and histological images were recorded by optical microscopy (Leica DM2500, Leica, Germany Statistical analysis. The data are expressed as the means ± standard deviations. Statistical analysis was performed with SPSS (SPSS 17.0 Inc., Chicago, USA). One-way ANOVA and Student-Newman-Keuls post hoc tests were used to determine the level of significance. p values less than 0.05 were considered to be significant, and p values less than 0.01 were considered to be highly significant.
Results
In vitro corrosion behavior of HP Mg affected by Ti. Gross observation in Fig. 1 shows that the HP Mg discs in the control group remained integrated after 1 week of immersion. The SEM morphologies revealed that the samples experienced relatively uniform corrosion (Fig. 1a2). Only small pits could be seen on the corrosion surface. In contrast, the HP Mg discs in Group 0 suffered severe corrosion and were almost depleted. Due to the great potential difference, which is − 1.6 V vs SCE for pure Mg 23 and − 0.4 V vs SCE for Ti in PBS 24 , galvanic corrosion occurred and significantly accelerated the corrosion of Mg. The SEM results demonstrated that there was a large area that had non-uniform degradation (Fig. 1d2). It was observed that enhanced corrosion occurred in the HP Mg samples in Group 5 when the Mg and Ti disc were not in contact with each other, i.e., the galvanic corrosion unit did not formed. Several large corrosion pits could be seen on the edge of HP Mg disc according to the SEM morphology (Fig. 1c2). When the distance increased to 10 mm, the surface of HP Mg disc became relatively flat with many small corrosion pits (Fig. 1b2). Accordingly, a significantly difference (p < 0.05) was revealed in the weight loss rate between Group 5 (9.6 ± 0.4%, n = 3) and Group 10 (6.0 ± 0.6%, n = 3). Moreover, as illustrated in Fig. 2, the weight loss rate of both Group 5 and Group 10 are obviously higher than the Control group (3.5 ± 0.7%, n = 3; p < 0.01 and p < 0.05 respectively).
Micro-CT scan. The representative 2D micro CT images of femurs with screws are shown in Fig. 3. It is demonstrated that a region of low density locates around the cortical bone in all groups during the 8 weeks of implantation. In group MT 5 and MT 10, a massive cavity was found around the screws after 4 weeks of implantation and the screws showed a blurring screw thread contour and a thin screw body after 8 weeks of implantation. Newly formed bone was found around the HP Mg screws, but the connection was not tight. In contrast, the HP Mg screws in both Group MM 5 and MM 10 remained relatively integrated, with only mild changes in the depth of the screw thread after 8 weeks implantation. A large amount of new bone formation that had a tight connection with HP Mg screws was observed. No obvious difference of screws degradation or osteogenesis was found between Group MT 5 and MT 10 at each time point. However, the 3D reconstruction of the HP Mg screws shown in Fig. 4a illustrates that the HP Mg screws in Group MT 5 have a higher corrosion rate than that in Group MT 10. A corrosion crack was found on the HP Mg screws as early as 4 weeks in Group MT 5, and a wider crack was found after 8 weeks of implantation, while the HP Mg screws in Group MT 10 remained integrated after 8 weeks. Although no obvious crack was found in Group MT 10, the screw had a thinner body than that of Group MM 5 and MM 10. The calculated volume loss (Fig. 4b) confirmed that during the 8 weeks of implantation, the volume of the Mg screws in Group MT 5 reduced more than those in Group MM 5 and MM 10. No significant difference was observed between Group MM 5 and MM 10. Figure 5 illustrates the results of a histological analysis of the bone tissue surrounding the Mg screws. In Group MT 5 and MT 10, inadequate osteogenesis was found after 2 weeks of implantation. Massive cavities were observed in the cortical bone and around the screws, and only part of the newly formed bone (stained in dark blue) contacted the screw. After 4 weeks of implantation, cavities still existed around the screws and the bone tissue did not directly contact the screws; instead, a gap that contained an osteoid structure (stained in light blue without cell structure) existed between the screw and cortical bone. Although fewer cavities were observed after 8 weeks of implantation, the gap between the screw and cortical bone was wider and the content in this gap became disordered. Additionally, it was obvious that the screws in Group MT 5 and MT 10 degraded more severely than those in Group MM5 and MM 10. In contrast, although inadequate osteogenesis and cavities also existed after 2 weeks of implantation in Group MM5 and MM 10, the newly formed bone tissue contacted HP Mg screws well after 4 and 8 weeks of implantation. The BIC presented in Fig. 6 indicates poor osseointegration in Group MT 5. When the distance between Mg and Ti screw increase to 10 mm, the BIC increase slightly. No statistical difference was found between Group MM 5 and MM 10. Pullout test. A dramatic decrease of the pullout force of the HP Mg screws was observed in the first 2 weeks of implantation in Group MT 5 and MT 10. However, no significant difference was found after 4 weeks of implantation among all groups. In general, the pullout force increased steadily over the duration of implantation (Fig. 7). No statistical difference was found between Group MM 5 and MM 10.
Discussion
The corrosion behavior of Mg is one of the crucial factors that should be taken into consideration when it is used as orthopedic implants. Fast degradation of Mg implants decreases the mechanical stability and osseointegration ability. Co-implantation of Mg with other metallic materials is a possible situation that occurs in future clinic. Therefore, the changes in the corrosion behavior of Mg affected by other metals should not be neglected.
Mg is a type of reactive metal. A galvanic cell will formed if Mg and other biomedical metal are in contact, in which Mg acts as the anode and the other metal acts as the cathode. As a result, the corrosion rate of Mg will increase. The hazard of galvanic corrosion has long been noted both in industry 25,26 and in the clinic 21,[27][28][29] . Therefore, direct contact of Mg and other metals should be avoided in clinical use.
More importantly, the in vitro immersion test demonstrated increased corrosion rate of Mg when Ti was not even in contact with Mg. In general, the corrosion of magnesium includes self-corrosion and galvanic corrosion. It was reported that during galvanic corrosion, the surface morphology was dramatically different from the filiform structures associated with free corrosion 30 . In this study, the SEM demonstrated the change of surface morphology, indicating the galvanic-like corrosion.
The in vivo experiments also revealed enhanced corrosion rate of Mg screw when co-implanted with Ti screw. The Mg screws in Group MT 5 suffered the most severe corrosion in morphologies and the volume loss was significantly higher than those of the Group MM 5 and Group MM 10. With the increasing distance between Mg and Ti screws, the Mg screws in Group MT 10 exhibited no difference in volume loss from Group MM 5 or Group MM 10. In addition, the representative 2D micro CT images of a femur with screws and the 3D reconstruction showed the most severe corrosion site was the junction of the cortical bone and the screw. One of the probable reasons is that the peak load of the implant is in this area 31 , which might lead to reduced mechanical strength and accelerated degradation 32,33 . On the other hand, the volume loss test clearly showed faster degradation in Group MT 5 than that of Group MM 5 and MM 10, indicating that the co-implanted Ti screw might also be contributed to the accelerating corrosion. Figure 8 is the schematic diagram of the femoral diaphysis with screws implanted. According to the result of micro CT, the most severe corrosion site was stained in green in Fig. 8a. It should be noticed that these sites were in contact with periosteum or endosteum, which contain abundant blood vessels. It is well known that the plasma contains various proteins, some of which could play a role in the electron transport 34 . Chen et al. 35 found side chains of four aromatic amino acids (Phe, His, Tyr, and Trp residues) may promote methionine and cystine residues to participate in the protein electron hole transport. The electrical conductivity of blood vessels was also reported in many researches [36][37][38] . Thus, as depicted in Fig. 8b, it is hypothesized that electrons at anode might migrate to the cathode (Ti screw) through binding with proteins. The electrically conductive blood vessels connected Mg and Ti screws, together with body fluid, formed a galvanic-like cell. The degradation of Mg also occurred in the other part of Mg screw, however, due to limited electron transfer, the corrosion rate was relatively slow. Moreover, as the distance between Mg and Ti screws increases, the electric resistance will increase proportionally and the effect of the potential difference will diminish. It should be pointed out that tissues such as bone and muscle etc. have been proved to have electric resistivity 39 , indicating that tissues other than blood vessels might also be involved in the process of electron transport. However, exact mechanism needs to be further investigated.
Histological results indicated less new bone tissue ingrowth and indirect bone-implant contact in Group MT 5 and MT 10 compared with those in Group MM 5 and MM 10. Fast corrosion of Mg results in high local Mg 2+ concentration and alkalization, which is unfavorable to bone cell proliferation and osseointegration [40][41][42] . Besides, a significant decrease in pullout force was observed in Group MT 5 after 2 and 8 weeks of implantation. As rigid internal fixation is needed for fracture healing, the accelerated corrosion process of the HP Mg screws in Group MT 5 might not meet the clinical requirements.
It is noticed that the effect of Ti screws on the corrosion behavior of HP Mg screws decreases as their spacing increases. Hence, it is suggested that there might be a "sphere of influence" around Mg when it is implanted in vivo. If other metal such as Ti is implanted in this sphere, the corrosion rate of Mg will be accelerated, otherwise, the galvanic-like corrosion will have slight impact on Mg. Therefore, the potential risk of accelerating corrosion when Mg and other metals are co-implanted should be noted in clinic.
Conclusion
In summary, the results showed that galvanic corrosion strongly enhanced the corrosion of HP Mg. In addition, the in vivo experiments suggested accelerating corrosion of HP Mg screws when they were co-implanted with Ti screws into the SD rats' femur, with a spacing of 5 and 10 mm. It is hypothesized that the Mg and Ti screws formed a galvanic-like cell through the blood vessels on the periosteum, which was responsible for the accelerating corrosion behavior of HP Mg. Therefore, a certain distance is critical to maintain the mechanical and biological property of Mg when it is co-implanted with Ti. | 4,712.4 | 2017-02-07T00:00:00.000 | [
"Materials Science"
] |
Echocardiographic Findings in Heart Failure Patients With Methamphetamine Use: A Case-Control Study
Background Methamphetamine use is associated with cardiovascular disease and significant morbidity and mortality. There is only one previous study performed on echocardiographic parameters in patients with methamphetamine cardiomyopathy. Methods We performed a retrospective review of medical records in a county hospital in Southern California with a high population of methamphetamine users. We reviewed medical records and echocardiogram findings in patients seen in our institution from November 2019 to November 2020 who had cardiomyopathy with and without methamphetamine use. We excluded patients who either left the hospital or expired before appropriate assessment. We divided our patient population into a case group (methamphetamine users) and a control group (non-methamphetamine users) to study and compare their echocardiographic parameters. Results Case group included a total of 254 patients and control group included 268 patients. Majority of the patient population were males - 178 (70%) and 180 (67%) in the case and control group respectively. Age was found to be statistically significant with the younger population in the case group (p = 0.0000). Our analysis revealed statistically significant difference in methamphetamine users compared to non-users in regards to left ventricle ejection fraction (33.65% ± 18.02 vs. 41.55% ± 15.61, p=0.0000), left ventricle mass index (122.49 grams/m2 ± 40.66 vs. 108.62 grams/m2 ± 32.82, p=0.0000), left ventricle end diastolic volume index (85.91 mL/m2 ± 37.40 vs. 72.44 mL/m2 ± 25.44; p=0.0000) and marginally significant right ventricle systolic pressure (42.29mmHg ± 17.53 vs. 39.59mmHg ± 15.61; p=0.0540) Conclusion Our results indicated that methamphetamine users had echocardiogram findings with decreased ejection fraction and increased left ventricular mass index, end-diastolic volume index, and right ventricular systolic pressure consistent with worse dilated cardiomyopathy comparison to non-users.
Introduction
Methamphetamine activates the sympathetic system, resulting in tachycardia, hypertension, vasospasm/vasoconstriction, and myocardial wall stress or ischemia [1]. The exact prevalence of methamphetamine associated cardiomyopathy is unknown; however, the prevalence is on an incline due to increased drug usage. An echocardiogram is a noninvasive test commonly used to assess myocardial and valvular structure, quantify chamber size, and estimate ejection fraction. Additional specific details such as wall motion, wall thickness, and chamber size have been correlated to clinical conditions. Methamphetamine associated cardiomyopathy has been reported to have typical echocardiographic findings of dilated cardiomyopathy with reduced left ventricular systolic function and cardiac chamber enlargement [1]. Many overlapping features may be seen on echocardiograms from the various etiologies of cardiomyopathy, such as ischemic heart disease [2]. Assessments for wall motion abnormalities, cardiac chamber enlargement, and decline in left ventricular ejection fraction provide information and prognosis about an individual patient's clinical condition. Typical factors attributed to influence left ventricle mass index (LVMI) include body size, ethnicity, and exercise-related factors. However, LVMI has been shown to 1 1 1 1 1 1 predict cardiovascular events and premature death independently [3,4]. Other parameters such as left ventricular end-diastolic volume (LV EDV) indicate left ventricular function and are standardized to a bodysurface-area ratio. It represents the volume of blood at the end load filling and can be quantified via echocardiogram. Depending on the value, it may indicate enlargement compared to a normal value of <82 ml/m 2 [5]. The right ventricle is designed to deliver a venous return to a low-pressure system. Right ventricular systolic pressure (RVSP) has been adopted as a marker to evaluate for pulmonary hypertension. It is equivalent to pulmonary artery systolic pressure in the absence of pulmonary outflow tract obstruction and is associated with reduced survival in patients with heart disease [6]. The purpose of our study was to characterize various echocardiographic findings, including ejection fraction, right ventricular systolic pressure, cardiac mass index, and left ventricular end-diastolic volume amongst heart failure patients with and without a history of methamphetamine use.
Materials And Methods
We intended to perform a case-control study in heart failure patients (HF) patients with active methamphetamine usage compared to patients with cardiomyopathy without methamphetamine use. We observed the echocardiographic characteristics in both patient populations. The primary aim of this study is to determine whether there is any significant difference in echocardiographic parameters in heart failure patients with a history of methamphetamine use that would potentially determine their prognosis compared to patients without methamphetamine use. We included Right Ventricular Systolic Pressure (RVSP), Left Ventricular Ejection Fraction (LVEF), Left Ventricular Mass Index (LVMI), End Diastolic Volume Index (EDVI), grade of Diastolic Dysfunction (DD) along with age and gender to determine if there is any statistical difference between the two groups. The ejection fraction was calculated using the Modified Simpson Method (biplane method of disks).
After obtaining institutional review board (IRB) approval, we performed a retrospective chart review. We screened patient charts from November 1, 2019, to November 1, 2020, using heart failure related International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10 CM) codes that showed 1410 records. Our inclusion criteria included patients aged 18 years and above with clinical heart failure and active and recent methamphetamine use (in the last six months) based on history provided or urine toxicology, along with patients who underwent echocardiography during the index admission. We excluded patients who either left the hospital or expired before an echocardiogram was obtained. The final case group included 254 patients ( Figure 1). Controls were screened using the same criteria except that patients were negative for methamphetamine on urine toxicology and history. Charts were also reviewed for a history of methamphetamine use. Patients were included in the case group if they were actively using methamphetamine based on history, even if urine toxicology was negative.
FIGURE 1: Selection criteria for the study
Data was collected for index admission, defined as the first admission to our institution for heart failure symptoms. The parameters were taken from echocardiography that was performed in our institution. The results for RVSP, LVMI, EDVI are reported as a continuous numerical value. As for LVEF, it is either reported as an exact numerical value or as a range within five value points. As a result, LVEF was rounded off to the nearest numerical value or an average value was taken for the range as follows: LVEF of <10 was entered as 10; 10-15 as 12; 15-20 as 17; 20-25 as 23; 25-30 as 27; 30-35 as 33; 35-40 as 37; 40-45 as 43; 45-50 as 47; 50-55 as 52; 55-60 as 58. Diastolic dysfunction is reported as grade 1 to 3. Grade 0 was used for patients who had no diastolic dysfunction.
All the data was compiled into an excel sheet. Patient identifiers were not disclosed except to the members of the data collection team. Data were analyzed using R version 4.0.0. A two-sample one-tailed t-test was used for RVSP, LVEF, LVMI, EDVI and age. Odds ratio and chi-square test of independence were used for diastolic dysfunction. Subgroup analyses were performed for parameters where the values were not available for all the patients.
Results
The final case group included 254 patients after the application of inclusion and exclusion criteria. The final control group included 268 patients. Majority were males, 178 (70%) and 180 (67%) in the case and control group respectively which was not statistically significant (odds ratio = 1.1450, p = 0.4735) ( Table 1). The results for RVSP, LVEF, LVMI, EDVI and age are based on a two-sample one-tailed t-test, whereas diastolic dysfunction was reported using odds ratio and chi-squared test of independence.
Discussion
Methamphetamine is a highly potent and addictive substance that was first created in the late 1920s to mimic the nasal vasoconstrictor ephedrine. The addition of the "methyl" group to amphetamines further enhanced its potency by making it lipophilic and its ability to penetrate the blood-brain barrier. In chronic methamphetamine users, the drug permanently alters the user's neurological function by affecting cognitive, psychiatric, and behavioural modalities. Similarly, methamphetamine users have various cardiovascular manifestations that include but are not limited to accelerated coronary plaque formation, cardiac arrhythmias, pulmonary hypertension, coronary vasospasm, and cardiac remodelling leading to cardiomyopathy, hypertension, aortic dissection, and acute coronary syndromes (Figure 7). Due to this, cardiovascular disease is the second leading cause of death in methamphetamine users after overdose [7].
FIGURE 7: Cardiovascular Manifestations of Methamphetamine
Although methamphetamine consumption occurs independent of socioeconomic status, geographics, race, and culture, most consumers tend to be younger. The cardiac complications of methamphetamine are hypothesized to arise from a variety of mechanisms. In human autopsy specimens, severe interstitial fibrosis and scar formation has been documented [8]. Physiological pathways implicated include catecholamine surges acutely during methamphetamine intoxication with contingent hypertensive crises, longer-term upregulation of the sympathetic axis, and myocardial toxicity with impaired cellular metabolism. Depending on which process predominates, different patterns of pathology may develop, particularly in the case of methamphetamine-associated cardiomyopathy [9].
Preclinical studies have also shown that methamphetamine-induced endothelial nitric oxide synthase activation and endothelin-1 release leads to potent vasoconstriction [10]. Further, chronic methamphetamine users are less responsive to the vasodilatation effects of nitroglycerin [11].
Prolonged use of methamphetamine, especially when used intravenously, directly accumulates in the lungs. It is then hypothesized to be engulfed by pneumocytes leading to a similar surge of radical oxygen species formation, causing endothelial damage and pulmonary hypertension. Methamphetamine users are at a 27% increased risk of sudden cardiac death, as reported by the United States National Inpatient Sample database that studied over 180,000 methamphetamine drug users. Autopsies obtained from chronic methamphetamine users showed cardiac fibrosis and necrosis findings that were directly proportional to the duration and frequency of drug use. These chronic structural changes likely increased the risk of ventricular arrhythmia due to electrical conduction abnormalities of the heart [1,7].
Methamphetamine causes systolic dysfunction by causing chronic remodelling and left ventricular dilatation. In a study performed by Yu et al., mice were administered large doses of methamphetamine and 12 weeks later were found to have concentric left ventricular hypertrophy, myocardial fibrosis, necrosis and decreased contractile capacity [12]. In contrast, in an additional study performed by Lord et al., echocardiographic parameters were studied in mice given binge doses of methamphetamine. This study showed left ventricular dilatation with contractility and relaxation dysfunction -and, subsequently, an increase in left ventricular volume and a decrease in left ventricular wall thickness leading to eccentric dilated cardiomyopathy. Impaired relaxation suggested diastolic dysfunction with echocardiogram findings of increased end-diastolic volume and pressure [13].
A similar study performed in Germany by Schurer et al. studied thirty endomyocardial biopsies of methamphetamine users to evaluate their overall outcome with or without cessation of methamphetamine, the extent of fibrosis affecting clinical symptoms, and echocardiographic features. Their results were significant for the echocardiographic prevalence of impaired ejection fraction, increase in left ventricular end-diastolic diameter, with improvement in these factors with discontinuation of the drug [14].
To the best of our knowledge, there is only one previous study before ours that studied these specific echocardiogram features in methamphetamine users: ejection fraction (EF), end-diastolic volume (EDV), left ventricle mass index (LVMI), right ventricle systolic pressure (RVSP). In this study performed by Ito et al., in Hawaii in 2009, 28 methamphetamine users (case group) were compared to non-users revealing a statistically significant decrease in LVEF, increase in LV EDV; however, the results for LVMI and RVSP were not statistically significant. This possibly was due to their small sample size. Similar to cocaine, methamphetamine is hypothesized to activate the calcium/calmodulin-dependent protein kinase pathway that causes myocardial fibrosis and necrosis; methamphetamine likely cause the same effect physiologically. Though this study did not show a statistically significant increase in LVMI, they acknowledged their limitations due to the small sample size. Their results were still consistent with autopsy studies performed in San Francisco, California, that supported evidence of methamphetamine users to have enlarged hearts that weighed more than their age-matched controls. Our study showed statistically significant results for an increase in LVMI in methamphetamine users, which we hypothesize due to a chronic burden of hypertension and tachycardia leading to LV hypertrophy. The exact mechanism of myocardial injury remains unknown; however, a combination of LV hypertrophy due to significant vasoconstriction increased myocardial afterload and myocardial necrosis, fibrosis leading to eccentric dilatation and impaired relaxation causes this distinctive catastrophic heart failure [15].
Cocaine abuse causes higher LVMI, and lower EF [16,17]. Methamphetamine like cocaine has sympathomimetic effects causing vasoconstriction of coronary arteries, tachycardia, and direct myocardial toxicity; both may also have similar pathophysiological effects on the myocardium. Indeed, the sympathomimetic and toxic effects on the heart induced by methamphetamine are likely worse than cocaine because it increases catecholamine release rather than just inhibiting its reuptake [18].
Our study is limited by several factors but mainly by the inclusion of only select echocardiographic parameters. We also did not include any demographic data other than age and gender. The small sample size also limits our study as we only looked into patients' data spanning one year (from 2019 to 2020) due to a change in our electronic medical record database before the year 2019. Our case group included patients with heart failure that had recent or active methamphetamine usage as correlating factor, not necessarily as the only causation of heart failure.
Conclusions
In conclusion, patients with methamphetamine-related cardiomyopathy tend to be younger and had worse cardiomyopathy related echocardiogram parameters when compared to patients with cardiomyopathy without methamphetamine use. Methamphetamine users had increased LV mass index, worse systolic dysfunction and increased chances of developing right heart failure from increased overall RVSP. Further large scale, randomized controlled trials are needed to confirm these findings of methamphetamine-related cardiomyopathy.
Additional Information
Disclosures | 3,130.4 | 2021-07-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
Yes-associated protein reacts differently in vascular smooth muscle cells under different intensities of mechanical stretch
Vascular smooth muscle cells (VSMCs) are stromal cells of the vascular wall and are continually exposed to mechanical signals. The loss of VSMCs is closely related to the occurrence of many vascular diseases, such as aortic aneurysms and aortic dissection. The proliferation and apoptosis of VSMCs are mechanically stimulated. Yes-associated protein (YAP), one of the core components of the Hippo pathway, plays a key role in the response of VSMCs to mechanical signals. In this study, we tested the impact of different intensities of mechanical stretch on the proliferation and apoptosis of VSMCs, as well as YAP. We tested VSMCs’ proliferation and apoptosis and YAP reaction via immunocytochemistry, western blotting, CCK-8 and flow cytometric analysis. We found that 10% elongation could increase the phosphorylation of YAP and prevent it from entering the nucleus, as well as inhibit cell proliferation and promote apoptosis. However, 15% elongation reduced YAP phosphorylation and promoted its nuclear entry, thereby promoting cell proliferation and inhibiting apoptosis. Accordingly, YAP knockdown suppressed the phenotype of VMSCs induced by 15% elongation. Taken together, YAP regulates proliferation and apoptosis of VSMCs differently under different intensity of mechanical stretch. Mechanical stretch with appropriate intensity can promote the proliferation and inhibit apoptosis of VSMCs by activating YAP.
INTRODUCTION
Aortic aneurysms are associated with aortic dissection and rupture. There is currently no effective treatment to prevent or cure aortic aneurysms. At present, both the pathogenesis and pathophysiology of ascending aortic aneurysms are not entirely clear. Vascular smooth muscle cells (VSMCs) have been recognized as the most important factor in the development of ascending aortic aneurysms. Aortic aneurysms can occur due to loss of VSMCs in the media layer of the aortic wall, leading to progressive aortic dilation [1][2][3]. We observed an interesting phenomenon in clinical work where the aneurysm or dissection remodeling varies from site to site, which may be due to differences in the mechanical stimuli to which different sites are exposed.
Aortic walls are subjected to various mechanical stimuli from the bloodstream, such as shear and mechanical stretch. Stretch sensing is generally known as an integrin-mediated pathway, which is coupled to cell contractile activity, and thus shares many mechanotransduction pathways with the rigidity sensing process in translating mechanical stimuli into intracellular biochemical signals [4][5][6]. The relationship between mechanical stretch and cell proliferation/apoptosis has AGING been extensively studied [7][8][9][10][11]. VMSCs experience mechanical stimuli during growth and differentiation and transduce these stimuli into biochemical signals that in turn regulate cell responses to the imposed forces. The effect of cyclic stretch is recognized as an important regulator of the development and pathological abnormalities of aortic walls. Under physiological conditions, the aorta undergoes approximately 10% circumferential stretch during systole. This number increases to approximately 20% under conditions of hypertension. The rate of apoptosis of VSMCs under 20% circumferential stretch is higher than that under 10% stimulation; how VSMCs change under 10-20% stretch remains uncertain [12].
The Hippo pathway plays an important role in the cell's reaction to mechanical stretch [13][14][15][16][17]. It was first defined in Drosophila by genetic mosaic screening following identification of a loss-of-function mutation of Hippo that led to a strong overgrowth phenotype [18]. As the major downstream effector of the Hippo pathway, YAP/TAZ mediates major physiological functions therein. MST1/2, Sav1, LATS1/2, and Mob1 constitute a kinase cascade that eventually phosphorylates YAP/TAZ and promotes its binding with 14-3-3 and cytoplasmic retention [19,20]. YAP/TAZ has been identified as the sensor and mediator of mechanical cues arising according to the rigidity of the extracellular matrix, cell geometry, cell density, and the status of the actin cytoskeleton [14,15,21,22]. Among them, YAP, as an important factor in mechanical signal transduction, controls cell survival and proliferation by combining with DNA-binding transcription factors to induce gene expression (Hippo pathway in organ size control, tissue homeostasis, and cancer). YAP inhibits the expression of smooth muscle differentiation genes, and at the same time promotes smooth muscle proliferation and migration in vitro and in vivo, and plays a novel comprehensive role in smooth muscle phenotype regulation (The induction of yesassociated protein expression after arterial injury is crucial for smooth muscle phenotypic modulation and neointima formation). Therefore, these evidences indicate that YAP is a key molecule in the regulation of VSMC phenotype. Rho-ROCK is the signaling pathway upstream of Hippo. The Rho-ROCK signaling pathway inhibits Hippo pathway activity [23,24]. Rho-ROCK is affected by mechanical stress and regulates the proliferation of VSMCs. However, the influence of different intensities of mechanical stretch on the Hippo pathway, and the role of Rho-ROCK in this mechanism, remain unclear.
In our study, we placed VSMCs under different intensities of mechanical stretch in vitro. We aim to find out that how the Hippo pathway and cell proliferation of VSMCs changes under mechanical stretch from 0% to 15% elongation. Based on our study, we wish to achieve a better understanding of the relationship between different intensities of mechanical stretch and cell proliferation of VSMCs.
VSMCs isolation and culture
Sprague-Dawley rats (male, 200-250 g) were purchased from the animal experimental center of Academy of Military Medical Sciences of PLA (Beijing, China) and housed under the specific pathogen-free (SPF) conditions (temperature, 23 ± 2° C; relative humidity, 65% ± 5%; 12h / 12h light / dark cycle, 07:00-19:00) with free access to food and water for 3 days. SD rats were anesthetized with isoflurane, and then VSMCs were isolated from thoracic aorta using the explanting technique [25]. and cultured in Dulbecco's modified Eagle's medium (DMEM; Gibco, Grand Island, NY, USA) containing 10% fetal bovine serum (FBS; Gibco, Grand Island, NY, USA), 100 U/mL penicillin, and 100 μg/mL streptomycin at 37° C in a humidified atmosphere of 5% CO2. The medium was changed every 2 d, and cells were passaged by treatment with a 0.05% trypsin-EDTA solution. The cells were used between passages 3 to 8. All animal experimental procedures were in accordance with the National Institutes of Health's Guide for the Care and Use of Laboratory Animals and were approved by the Animal Care and Use Committee of the Military Medical Sciences of PLA, as well as the Animal Laboratory Administration Center and Ethics Committee of the Military Medical Sciences of PLA.
Cyclic stretch stress on VSMCs
VSMCs were plated in 6-well plates (Flexcell International Corp., Hillsborough, NC, USA) coated by type I collagen (Solarbio, Beijing, China) at a concentration of 3 × 10 5 cells/mL. After 24 h attachment, the cells were synchronized by DMEM with 10% FBS for another 24 h and then applied to cyclic stretch produced by FX-5000T Tension System (Flexcell International Corp., Hillsborough, NC, USA) with 10, 15, and 20% elongation at a frequency of 1 Hz (60 cycles/min), and the duration of cyclic stretching forces for 24 h.
YAP siRNA
Experimental 1 consisted of the following groups: VSMCs, VSMCs+YAP siRNA NC (TTCTCCGAACG TGTCACGT), VSMCs+YAP siRNA 1 (ACAGCAGGA GTTATTTCGG), VSMCs+YAP siRNA 2 (GACCTCT AGING TCTGGTCAGAGA), and VSMCs+YAP siRNA 3 (ATCACAATGATCAGACAAC). Cells were inoculated in 6-well plates at 2 × 10 6 cells/well and cultured overnight. Each well was diluted to 250 μL of Opti-MEM I Reduced Serum Medium by adding 100 pmol of siRNA. Lipofectamine 2000 (5 μL; Life Technologies, Carlsbad, CA, USA) was added to 250 μL of Opti-MEM I Reduced Serum Medium and incubated for 5 min. Then, the siRNA solution was added to the Lipofectamine 2000 solution and incubated for 20 min. The culture medium in the cell culture plate was aspirated and 1.5 mL fresh medium was added. The siRNA solution was added to the samples and cultured in a 5% CO2 incubator at 37° C for 48 h. Western blotting was used to assess the results and VSMCs were reclassified according to the obtained results.
Experimental 2 consisted of the following groups: VSMCs control+0% elongation, VSMCs YAP shRNA+0% elongation, VSMCs control+10% elongation, VSMCs YAP shRNA+10% elongation, VSMCs control+15% elongation, and VSMCs YAP shRNA+15% elongation. shRNA using the best one of the previous experiment which is siYAP1. VSMCs were cultured as described above. Cells were infected with the viruses at a multiplicity of infection of 50, and control groups were transfected with lentiviruses containing control sequences.
Flow cytometric analysis and CCK-8 assay
VSMCs from each group were stained with annexin Vfluorescein isothiocyanate to determine the number of apoptotic cells. And another samples were fixed with 75% ethanol and treated with RNase to analyze the cell cycle. Then, cell nuclei were stained with propidium iodide (Molecular Probes, Eugene, OR, USA), and VSMCs were analyzed using a FACSCalibur flow cytometer and Cell Quest software (Becton Dickinson, Franklin Lakes, NJ, USA).
VSMCs were inoculated in 6-well culture plate. Cell counting kit-8 (CCK-8; DOJINDO, Kumamoto, Japan) solution was mixed with serum-free medium in a 1:10 ratio (v/v). The mixture was added at 100 μL per well and incubated at 37° C and 5% CO2 for 1 h. Absorbance at 450 nm was measured using an enzyme marker.
Immunocytochemistry analysis
VSMCs from each group were fixed in 4% paraformaldehyde for 20 min, and permeabilized with 0.2% Triton X-100 for 10 min at room temperature. Each sample was dripped with 3% BSA blocking solution and sealed at room temperature for 30 min. Then, samples were incubated with primary antibody to YAP (bs-3605R; Bioss, Los Angeles, CA, USA) and fluorescent CY3 goat anti-rabbit IgG secondary antibody (rhodamine-labeled, BA1036; Boster Biological Technology, Pleasanton, CA, USA). The nuclei were counterstained with 4', 6-diamidino-2phenylindole (DAPI; 1:500). The samples were observed under the confocal laser scanning microscopy.
Western blotting
Treated VSMCs were harvested in lysis buffer using protease inhibitors, and total protein was extracted and quantified using a BCA protein concentration kit according to the manufacturer's instructions. Proteins were separated by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to polyvinylidene fluoride (PVDF) membranes. The membranes were blocked with 5% BSA in Tris-buffered saline with Tween-20 (TBST) for 2 h at room temperature, and incubated overnight at 4° C with primary antibody (
Statistical analysis
One-way analysis of variance was performed in SPSS software (ver. 19.0; SPSS Inc., Chicago, IL, USA) to evaluate group differences. Data are expressed as means ± standard deviation, and P values < 0.05 were considered to indicate statistical significance.
Availability of data and materials
All data generated or analyzed during this study are included in this published article.
Mechanical stretch intensity influences the proliferation and apoptosis of VSMCs
To determine the changes in proliferation and apoptosis of VSMCs under different intensities of mechanical stretch, we applied 0, 10, 15, and 20% elongation forces to VSMCs (Figure 1). We used the CCK-8 assay to assess cell proliferation and flow cytometric analysis to detect apoptosis and analyze the cell cycle. The 10% and 20% elongation forces increased the rate of apoptosis and downregulated VSMCs proliferation ( Figure 1). Meanwhile, 15% elongation produced the opposite results, i.e., decreased the rate of apoptosis and upregulated VSMCs proliferation (Figure 1). 10% and 20% of the stretch-induced S phase cells reduce might inhibit cell proliferation, while 15% stress-induced S phase cells increase might promote cell proliferation (Figure 1).
Effects of different intensities of mechanical stress on YAP
To test whether YAP was involved in the effects of mechanical stress on VSMCs, we first performed immunofluorescence staining to determine the intracellular localization of YAP. YAP was both in the nucleus and the cytoplasm under 0% elongation ( Figure 2A-2C). YAP was primarily cytoplasmic under 10% AGING elongation conditions, whereas YAP was primarily nuclear with 15% elongation (Figure 2A-2C). Intracellular localization of YAP was linked to YAP phosphorylation, so we determined the level of YAP phosphorylation in each group. YAP phosphorylation was upregulated under 10% elongation, while phosphorylation of YAP remained low under both 0% and 15% elongation and being the lowest under 15% elongation ( Figure 2D). Because nuclear localization of YAP was linked to the expression of ccnd1, we also assessed the protein levels of ccnd1 in each group at the same time. The protein levels of ccnd1 varied according to the phosphorylation level of YAP. Compared with 0% elongation, expression of ccnd1 decreased with 10%, and increased with 15% elongation ( Figure 2D). Next, we tested the expression of key protein molecules in the Rho-ROCK signaling pathway. Similarly, compared with 0% elongation, expression of Rho and ROCK decreased with 10%, and increased with 15% elongation ( Figure 2D).
Knockdown of YAP impairs proliferation and apoptosis of VSMCs under different intensities of mechanical stretch
To determine the effects of the Hippo pathway in VSMCs, we knocked down YAP by siRNA. We tested the efficiency of different YAP-targeting siRNAs using western blotting to generate optimal knockdown of YAP. Our results showed that YAP siRNA1 (ACAGCAGGAGTTATTTCGG) resulted in the greatest knockdown of YAP ( Figure 3A). To test the effects of YAP knockdown on VSMCs proliferation, apoptosis, and the cell cycle under different intensities of mechanical stimulation, we performed CCK-8 assays and flow cytometry. The CCK-8 assays indicated that cell proliferation was inhibited following YAP knockdown in each group ( Figure 3F). Compared with 0% elongation, VSMCs proliferation was decreased in the absence of YAP with 10% elongation, but increased with 15% elongation ( Figure 3F). Following YAP knockdown, VSMCs showed a similar rate of proliferation under the different mechanical stimulations, but with smaller amplitudes ( Figure 3F). Flow cytometry analysis demonstrated that apoptosis of VSMCs was increased following YAP knockdown under all stretch intensities ( Figure 3B, 3C). The rate of apoptosis of VSMCs was significantly increased following YAP knockdown under 10% elongation ( Figure 3B, 3C). Moreover, flow cytometry revealed that YAP knockdown kept more VSMCs in the G0/G1 phase under the same intensity of AGING stretch, indicating that more VSMCs were in a state of dormancy ( Figure 3D, 3E). Compared with the 0% control group, G0/G1 phase cells in the 10% and 15% control group were significantly increased and decreased, respectively ( Figure 3D, 3E). A similar trend was apparent in the YAP shRNA groups ( Figure 3D, 3E).
Inhibition of the Rho-ROCK pathway affects the Hippo pathway under different intensities of stretch
To better understand the role of the Rho-ROCK-Hippo pathway in the effects of mechanical stretch on YAP, we inhibited the Rho-ROCK pathway using Y27632. We performed immunocytochemistry staining to determine the intracellular localization of YAP. The results showed that more YAP stayed in the cytoplasm under 0% elongation following inhibition of the Rho-ROCK pathway (group 2) compared with 0% elongation (group 1) ( Figure 4A, 4B). Y27632 treatment combined with 10% elongation (group 4) caused a greater amount of YAP to remain in the cytoplasm compared with 10% elongation in the absence of Y27632 (group 3) ( Figure 4A, 4B). The nuclear proportion of YAP following 15% elongation with Y27623 treatment (group 6) was higher than that in groups 2-4 ( Figure 4A, 4B). The nuclear proportion in group 6 was lower than that in groups 1 and 5 ( Figure 4A, 4B). We then performed western blotting to assess the phosphorylation of YAP and Lats1, as well as the protein levels of ccnd1, Rho, and ROCK in the different groups. The proportion of phosphorylated YAP and phosphorylated Lats1 increased under each stretch intensity following inhibition of the Rho-ROCK pathway, while the levels of ccnd1, Rho, and ROCK decreased, suggesting that blockage of the Rho-ROCK pathway partially reduces the effect of different stretch intensities on YAP ( Figure 4C).
DISCUSSION
One emerging concept is that changes in the VSMCs phenotype in response to mechanical forces are important in vascular diseases, such as hypertension, atherosclerosis, aortic aneurysms, and age-dependent arterial stiffening [21,26,27], in which the vascular wall is exposed to chronically elevated levels of cyclic stretch. In particular, the production of aortic aneurysms and aortic vascular VSMCs reduction are closely related. Different parts of the aorta are subject to different pressures, and the pathogenesis of aortic aneurysms may vary by location. Our study demonstrated that different intensities of stretch differently affect the proliferation and apoptosis of VSMCs. 10% and 20% of the stress-induced S phase cells reduce, may inhibit cell proliferation, 15% stressstimulated S phase cells relative increase, may promote cell proliferation ( Figure 5). However, we believe that VSMCs will not continue to grow, or may even break up, if more than 20% elongation force is applied. This also confirmed the effect of the difference of 10% physiological elongation and 15% stress elongation on the proliferation and apoptosis of VSMCs. Therefore, we believe that applying more tension to VSMCs is not necessary. In this study we sued 10% and 15% elongation forces, which we believe are more representative experimental conditions.
Studies have found that mechanical stimulation can inhibit the Hippo pathway, increase nuclear levels of YAP, promote cell proliferation, and inhibit apoptosis. [16,[28][29][30][31] YAP is the major downstream effector of the Hippo pathway which mediates major physiological functions. YAP phosphorylation results in cytoplasmic retention, which inhibits SMC proliferation and promotes apoptosis. [32,33] To the best of our knowledge, however, this is the first study to show that different intensities of mechanical stretch can differentially influence the proliferation and apoptosis of VSMCs through the Hippo pathway. However, our study found that different intensities of mechanical stimulation had different effects on VSMCs' proliferation and apoptosis, as well as the Hippo pathway. Stimulation with 10% elongation may activate the Hippo pathway, increasing phosphorylation of YAP and resulting in translocation of YAP into the cytoplasm, where it binds to the cytosolic protein 14-3-3 thereby promoting YAP degradation. These reports are consistent with our results. Our results revealed the intracellular localization of YAP under different stretch forces: 10% elongation for 6 h induced YAP cytoplasmic retention, while 15% elongation for 6 h promoted YAP entry into the nucleus. Mechanical stretching forces of 10% and 15% for 6 h upregulated and downregulated the phosphorylation of YAP, respectively. According to the former studies [34,35], YAP in the nucleus, as well as the binding of YAP to the transcription factor TEAD, were decreased, leading to decreased expression of the target gene ccnd1 and inhibition of cell proliferation. When the stretching force was increased to 15%, the Hippo pathway was inhibited, YAP phosphorylation was reduced, more YAP remained in the nucleus, and the transcription factor TEAD induced expression of its downstream target gene ccnd1 to promote proliferation [36][37][38]. Thus, 10% and 15% stimulation can affect the proliferation of arterial SMCs through the Hippo pathway. The results demonstrate that 10% mechanical stimulation promotes cell proliferation and inhibits apoptosis, while 15% mechanical stimulation had the opposite effect. This trend is consistent with previous results on the Hippo pathway [36][37][38]. Previous studies have shown that YAP/TAZ expression in human AGING Values are expressed as means±SD. ** P < 0.01, compared with 0% elongation+siRNA(-) group; ## P < 0.01, compared with 10% elongation+siRNA(-) group; ^^P < 0.01, compared with 20% elongation+siRNA(-) group; @@ P < 0.01, compared with 0% elongation+siRNA(+) group. AGING trabecular meshwork cells also varies by substrate stiffness [39,40]. The results of our study are consistent with previous studies investigating the effects of extracellular matrix on cells [41,42]. After knocking down the expression of YAP, VSMCs showed the same trends in cell proliferation, apoptosis, and the cell cycle following stimulation with different intensities of stretch compared with the controls. These results indicated that different intensities of stretch stimulation could still regulate VSMCs proliferation and apoptosis following inhibition or activation of the Hippo pathway.
In addition, we tested the activity of the Rho-ROCK pathway and found that 10% elongation stimulated the expression of related proteins to decrease, while 15% elongation stimulated the expression of related proteins to increase. After blocking the Rho-ROCK pathway, the effects of different intensities of stretch on YAP were weakened and phosphorylated YAP was decreased in each group. Overall, inhibition of the Rho-ROCK pathway combined with 10% mechanical stress led to activation of the Hippo pathway, which in turn reduced nuclear localization of YAP. YAP failed to bind to TEAD; this led to a decrease in ccnd1 expression, inhibition of VSMCs proliferation, and promotion of apoptosis. When the mechanical stress was increased to 15%, we observed the opposite effects.
Through our experiments, we found that the mechanism underlying the effects of mechanical stress on VSMCs is as follows: after being stimulated by physiological 10% elongation stretch, VSMCs are inhibited in various ways. Inhibition of Rho-ROCK leads to a decrease in LATS1/2 levels, which in turn leads to an increase in the phosphorylation of YAP. Phosphorylated YAP cannot enter the nucleus nor bind to TEAD. On the other hand, our ongoing experiment shows that 10% elongation stretch may downregulates the expression of miR130a, which increases the amount of VGLL4. In the nucleus, VGLL4 may competes with YAP for the opportunity to bind TEAD. As a result of these two mechanisms, 10% elongation reduced ccnd1, resulting in decreased VSMCs proliferation and increased apoptosis, while a stimulus that is lower or higher than physiological elongation resulted in opposing effects, eventually promoting the proliferation of VSMCs and reducing apoptosis.
On conclusion, our study demonstrates the possibility of preventing or treating diseases such as aortic aneurysms by controlling blood pressure. The Rho-ROCK pathway can be activated by mildly adjusting the blood pressure. Activation of the Rho-ROCK pathway, in turn, inhibits the Hippo pathway and ultimately promotes VSMCs proliferation and inhibits apoptosis. Our findings may provide insight into the pathogenesis prognosis of these diseases, as well as into therapeutic interventions.
AUTHOR CONTRIBUTIONS
XHW and XDL conceived and designed the experiments, LJZ, SL and ZQF analyzed and interpreted the results of the experiments, CZH, YGH and ZSG performed the experiments.
CONFLICTS OF INTEREST
The authors declare that they have no conflicts of interest. | 5,031.6 | 2022-01-04T00:00:00.000 | [
"Medicine",
"Engineering"
] |
Experimental evaluation of occupancy lighting control based on low-power image-based motion sensor
Occupancy lighting control, which is used to turn lights on/off based on the occupancy state measured by an occupancy sensor, is a popular and effective type of automatic lighting control. This paper clarifies the effectiveness of occupancy control based on a low-power image-based sensor, which measures only the occupancy state by applying an image processing technique to visible images, implemented on a low-cost single-board computer. We found that the rated power consumption of the low-power image-based sensor was 69.50% or less than that of commercial image-based sensors. In addition, the low-power image-based sensor increased the total comprehensive energy-saving rate of the occupancy control by 5.38% or more compared with a commercial PIR sensor and commercial image-based sensor. Further analysis of the effectiveness of the low-power image-based sensor is provided herein.
Introduction
Lighting accounts for a considerable portion of global energy consumption, comprising 10% or more of the total energy consumed in residential and nonresidential environments in the United States [1,2]. Therefore, the energy consumption from lighting should be reduced to achieve zero energy homes and buildings [3,4].
A popular and effective lighting control is occupancy control, in which lights are turned on/off based on the presence of the occupants (workers or residents in a room) as measured by an occupancy sensor (e.g. passive infrared (PIR) or image-based sensors) [5,6]. A PIR sensor, which is widely used owing to its cost effectiveness, measures the changes in infrared radiation received through a Fresnel lens [7].
The occupancy control turns on the lights when the sensor detects occupants and turns off the lights after the sensor detects that no occupants have been continuously present for a preset time. Because an occupancy sensor cannot perfectly measure an occupancy state, a time delay between the detection of no occupants and the turning off of the lights is required to avoid a false-off (i.e. turning off the lights in an occupied room) [5,6]. However, the time delay decreases the energy-saving effect of the occupancy control because the light remains turned on for a preset time in an unoccupied space. Thus an occupancy sensor that can decrease the time delay while avoiding a false-off is required if we want to increase the energy-saving effect [5].
Image-based sensors, which apply an image processing technique to visible images [8][9][10], are expected to increase the energy-saving effect by decreasing the time delay while avoiding a false-off. Two widely applied image processing techniques for the occupancy measurement are motion detection, which tracks the change in appearance in an image [11,12], and image-recognition, which utilizes a classifier based on machine learning [13,14]. Frame difference, background subtraction, and optical flow are well-known motion detection methods, whereas classifiers such as a support vector machine (SVM) or deep neural network (DNN) are primarily used to construct image recognition.
The measurement accuracy for occupants depends on the image processing technique that is used. However, commercial image-based sensors utilize frame difference, which calculates the differences between consecutive captured images [10]. This utilization is based on the fact that the frame difference, which is categorized as a simple and fast method, can be easily implemented on a low-power and low-cost embedded board compared with other image processing techniques [15]. On the basis of such commercialization trends, in this work, we assume that image-based sensors utilize the frame difference.
The time delay of an image-based sensor tends to be set to a smaller value (e.g. 5 min) than that of a PIR sensor (e.g. 15 min) [6,9,[16][17][18]. This tendency mainly stems from the image-based sensor increasing the measurement accuracy for occupants with slight motions [19].
However, the rated power consumption of a commercial image-based sensor is considerably greater than that of a PIR sensor [17,20]. In [17], it is suggested that, compared with commercial PIR sensors, commercial image-based sensors do not decrease the comprehensive energy saving, which is based on the energy consumption of both the lights and the sensors. Focusing on the specifications in [21][22][23], commercial image-based sensors typically measure not only the state of occupancy but also the number of occupants or their activity level. Measurements that are not used in the occupancy control might increase the rated power consumption of the commercial image-based sensor. While there have been previous studies on occupancy control that evaluated the energy-saving effect of the commercial image-based sensor, the energy-saving effect of a low-power image-based sensor that measures only the occupancy state has not been sufficiently examined. It is important to evaluate the energy-saving effect of the low-power image-based sensor so as to develop and commercialize the next generation of image-based sensors that can increase the comprehensive energy saving. Thus, in this study, we clarify the energy-saving effect of occupancy control based on the low-power image-based sensor.
In [24], an image-based sensor was implemented on a Raspberry Pi, which is a single-board computer. Furthermore, an occupancy sensor implemented on a single-board computer is expected to be widely utilized for occupancy control owing to its cost effectiveness and energy efficiency [5]. Therefore, on the basis of a literature review, we utilize a low-cost and energyefficient single board computer to implement the lowpower image-based sensor and achieve the aforementioned purposes of the present study.
Because a single-board computer has limited computational power [25], it is important to evaluate the computational time when conducting an occupancy state measurement. However, the computational time of the occupancy state measurement implemented on a single-board computer was not evaluated in [24]. Therefore, we conducted an experiment to carefully examine the computational time of the occupancy state measurement implemented on a single-board computer.
In Section 2 of this paper, we provide an overview of the conventional occupancy control based on PIR sensors. In Section 3, we discuss the low-power imagebased sensor. In Section 4, we report the experiment we conducted in an actual home environment and discuss the results, and in Section 5, we broaden the discussion for greater generalizability. We conclude in Section 6 with a brief summary and mention of future work.
Conventional occupancy control
This section describes the conventional occupancy control based on PIR sensors. Sections 2.1 and 2.2 overview and review the occupancy control and the PIR sensor, respectively.
Occupancy control
Occupancy control is used to turn lights on/off based on the occupancy state measured by the occupancy sensor, as depicted in Figure 1. In practice, the occupancy sensor tends to be mounted on the ceiling but not the wall, as ceiling-mounted sensors are not obstructed by furniture or occupants [26]. Figure 2 shows a flowchart of the occupancy control system, which turns on the light when the sensor detects an occupant and turns off the light after the sensor has detected that no occupants have been continuously present for a preset time. Because the occupancy sensor cannot perfectly measure the occupancy state, a time delay between the detection of no occupants and the turning off of the lights is required to avoid a false-off [5,6].
A shorter time delay causes a false-off, which decreases the visual comfort of the occupants [5], whereas a longer time delay decreases the energysaving effect by increasing the time during which the light remains turned on in an unoccupied space. Thus time delay is an important parameter in terms of both the visual comfort of the occupants and the energysaving effect. Because the time delay needed to avoid a false-off depends on the measurement accuracy of the installed occupancy sensor, the accuracy should be increased to enhance the energy-saving effect without incurring a false-off.
PIR sensor
PIR sensors measure the occupancy state based on the change in infrared radiation emitted by the human body [7]. The PIR sensor is widely utilized as an occupancy sensor owing to its cost effectiveness and power efficiency. However, it suffers a decreased measurement accuracy of occupants with small (arm-and hand-level) motions [27][28][29]. For example, in [29], it was observed that the PIR sensor detects the arm-level motion at a rate of only 35% on a median. To enhance the energysaving effect without a false-off, an occupancy sensor that can decrease the time delay without a false-off is required [10,16,19].
Low-power image-based sensor
This section describes the low-power image-based sensor, the effectiveness of which we evaluate in comparison with the commercial PIR sensor and commercial image-based sensor. The low-power image-based sensor measures only the occupancy state and is implemented on a single-board computer so as to decrease the power consumption compared to a commercial image-based sensor. Sections 3.1 and 3.2 overview and review the occupancy state measurement algorithm of the low-power image-based sensor and single-board computer, respectively.
Occupancy state measurement algorithm
Image-based sensors measure the occupancy state based on the images captured by a visible camera [8][9][10]. As an occupancy state measurement algorithm, the commercial image-based sensor typically uses the frame difference [10]. Thus the low-power image-based sensor also utilizes the frame difference as the occupancy state measurement algorithm.
The frame difference proposed in [8] is assumed to be applied for the occupancy control. In addition, since this frame difference is implemented on a singleboard computer in [24] to provide a real-time occupancy measurement, we expect it to be effective. Thus the frame difference [8] is assumed to be utilized for the low-power image-based sensor. Note that the commercial image-based sensor is also assumed to utilize this frame difference.
The motion of the occupant incurs a difference in the appearance between the previous and current images ( Figure 3 a and b, respectively). The frame difference focuses on the subtracted image (Figure 3 c), which compares the current image with the previous image. The intensity of the subtracted image at pixel position n is represented by where x t−1 n and x t n denote the luminance value of the previous and current images, respectively. The pixel positions with luminance values that are sufficiently different between the previous and current images are indicated by the thresholded subtracted image, as shown in Figure 3(d). The intensity of the thresholded subtracted image is given bỹ where T x denotes the threshold parameter to determine whether the luminance values are sufficiently different between consecutive images. The occupancy state is determined on the basis of the ratio of the number of pixels with luminance values that are sufficiently different between consecutive images to the number of all pixels. The ratio is represented by using the thresholded subtracted image as where N denotes the number of all pixels. When the ratio is larger or smaller than a threshold value T r , the low-power image-based sensor determines the occupancy state as occupied or unoccupied, respectively.
Single-board computer
A single-board computer is built on a single circuit board and provides the basic features of a computer. Ever since the Raspberry Pi was commercialized in 2012 [30], the single-board computer has been widely used in control and monitoring applications [31], and its lower cost and superior power effectiveness compared to smartphones, laptops, and desktop computers have been demonstrated [32].
Although there are several types of single-board computer currently on the market, including Raspberry Pi, Orange Pi, and BeagleBoard, the Raspberry Pi tends to be superior in terms of cost, power consumption, and computational power [33]. Furthermore, the frame difference [8] has been effectively implemented on a Raspberry Pi 3 model B, which was commercialized in 2017 [34], in [24]. Therefore, in this study, we also implement the frame difference on a Raspberry Pi 3 model B to construct the low-power image-based sensor. The Raspberry Pi 3 model B has an ARM Cortex-A 53 of a quad core CPU and 1.0 GB of RAM [34].
Experiment
In this section, we evaluate the effectiveness of the lowpower image-based sensor implemented on a singleboard computer, which measures only the occupancy state, through experiments in an actual home environment using the data recorded in [19] and compare it with the commercial PIR sensor and commercial image-based sensor. Sections 4.1 and 4.2 discuss the experimental conditions and results, respectively. Since some of the experimental results in [19] are used in this evaluation, the experiment described therein is referred to as the "preliminary experiment," the findings of which are summarized in Section 4.2.1 to identify the differences from the present evaluation.
Environment
We conducted the experiment in a living room (shown in Figure 4) used by two residents. An LED light (CL6D-5.0 produced by Iris Ohyama, Inc.), the details of which are provided in [35], was mounted on the ceiling. The rated power consumption, which shows the maximum power consumption, of a CL6D-5.0 is 33.00 W.
We built a sensing device that records the data of both the sensing results of a PIR sensor module and QVGA sized images (320 × 240 pixels) captured by a camera module at a cycle time of 100 ms. An interframe time, which denotes the gap of time while capturing the previous and current images of the frame difference [8], was also set to the value of the cycle time. These settings for the recording image size and the inter-frame time were chosen because they are the most frequently utilized ones and provide the best performance in occupancy activity analysis. Specifically, the QVGA sized images are the most common ones for the recording images of surveillance cameras, as discussed in [36], and the inter-frame time of 100 ms provides the best performance in occupancy activity analysis, as discussed in [37].
The PIR sensor module contained an LHI778 pyroelectric sensor (produced by PerkinElmer, Inc.) and the camera module contained an OV5647 CMOS sensor (produced by OmniVision, Inc.). Because viewing angles of the PIR sensor module and camera module are 120 and 130 degrees, respectively, both the modules are classified as a wide-angle type. We carefully confirmed that the field of views of both the modules included the entire living room. Further details of the two are provided in [38,39]. The sensing device was mounted on the ceiling at the height of 2.15 m and was pointed downward for the reason described in Section 2.1. The sensing results by the PIR sensor module and images recorded by the camera module were used to simulate the occupancy measurement of the commercial PIR sensor and both the commercial and low-power image-based sensors, respectively. The frame difference [8] was utilized as the occupancy state measurement algorithm of the commercial and lowpower image-based sensors for the reason discussed in Section 3.1.
To evaluate the occupancy control, we set two types of time delay in the occupancy sensors: a practical time delay (t p ) and a minimum time delay (t m ). The practical time delay shows a value that is most frequently used in an actual environment, whereas the minimum time delay shows a minimum value to avoid a false-off during the experiment. The practical time delays for the PIR and image-based sensors were set to 15 and 5 min, respectively, as in prior studies [6,9,[16][17][18]. The minimum time delay was determined on the basis of the preliminary experiment results described in Section 4.2.1.
In addition, the parameters for the sensitivity of each sensor were determined experimentally on the basis of an adjustment test, which was conducted for 8 h before this experiment. The determined parameters were fixed throughout the experiment. Table 1 shows the experimental schedule, where the sensing device recorded data for 76.5 h over a period of 8 days. A portion of the data was excluded from the evaluation at the request of the residents. The length of the data was considered sufficient to evaluate the occupancy control because it is 36.5 h longer than the experiment conducted in [17], where the effectiveness of the lighting control was evaluated in an actual office environment.
Evaluation methodology
The preliminary experiment evaluated an occupancy measurement accuracy based on F-measure, which is defined as where the true-positive denotes the number of correct determinations as an occupied state. In addition, the false-positive and false-negative, respectively, denote the number of correct and erroneous determinations as an unoccupied state. Because the precision and recall are in an unavoidable tradeoff, the F-measure, which denotes the harmonic mean of them, is employed as the comprehensive metric. The preliminary experiment also compared time during which the light was turned on (hereafter called "lighting time") between the sensors. After the preliminary experiment, we evaluated the low-power image-based sensor, which measures only the occupancy state. The low-power image-based sensor was implemented on the Raspberry Pi 3, which is a low-power and cost-effective single-board computer (as discussed in Section 3.2). However, whether or not the single-board computer has the computational resources to perform the frame difference [8] for QVGA sized images within less than 100 ms has not been verified in the literature. This verification is important given that we want to see if the low-power image-based sensor can be constructed on the single-board computer under the most frequently utilized and best-performance settings, as discussed in Section 4.1. Hence, our experiment measures the computational time of the frame difference on the single-board computer. In addition, the power consumption of the low-power image-based sensor has not been measured in the literature. Thus our experiment evaluates the power consumption of the low-power image-based sensor compared with that of the commercial image-based sensor so as to clarify how much the power consumption decreases. This evaluation is also important to discuss the comprehensive energy-saving rate of occupancy control based on the low-power image-based sensor. On the basis of aforementioned aspects, we determined the evaluation items of the experiment (but not the preliminary experiment), which are listed in the following. First, we evaluated the computational time of the occupancy state measurement by running the lowpower image-based sensor for 8 h. This evaluation is important given that we want to see if the low-power image-based sensor can be constructed on a singleboard computer under the most frequently utilized and best-performance settings.
Second, we evaluated the power consumption of the low-power image-based sensor by running it for 8 h. The power consumption was measured by a CT-2 (produced by AVHzY, Inc.), which is a USB power meter. We then compared the measured power consumptions with the rated power consumption of the commercial PIR sensor and commercial image-based sensor on the basis of Table 2, which summarizes the minimum and maximum rated power consumptions of the commercial PIR sensor and commercial image-based sensor produced by a manufacturer that develops both types of sensors. The rated power consumption, which is used to evaluate the comprehensive energy-saving rate, of the low-power image-based sensor was set on the basis of the maximum value of the measured power consumption, since the rated power consumption shows the maximum power consumption.
Third, we evaluated the comprehensive energysaving rate, which is also used in [17], by comparing the energy consumption between the occupancy control and the scenario in which the light is continuously turned on. The comprehensive energy-saving rate of the occupancy control is determined by where d all and d oc denote the time during which the light is turned on under scenarios in which the lights are continuously on and when occupancy control is applied, respectively. In addition, p and l denote the rated power consumption of the occupancy sensor and light, respectively. The effectiveness of the low-power image-based sensor is discussed in terms of the comprehensive energy-saving rate of the occupancy control compared with the commercial PIR sensor and commercial image-based sensor. The rated power consumption of the commercial PIR sensor and commercial image-based sensor were set based on Table 2, while that of the low-power image-based sensor was set on the basis of the experimental results of the second evaluation item.
Preliminary results
This section summarizes the experimental results obtained from [19], which we regard as the preliminary experiment in this paper. The preliminary experiment showed that the commercial image-based sensor and low-power image-based sensors increased the Fmeasure by 21.7% compared with the commercial PIR sensor. The minimum time delays of the commercial PIR sensor and both image-based sensors were 10.58 and 2.66 min, respectively. Both image-based sensors decreased the lighting time by 8.9% compared with the commercial PIR sensor.
Computational time
The minimum, average ± standard deviation, and the maximum computational time of the frame difference on the single-board computer were found to be 3.30, 5.51 ± 1.35, and 12.82 ms, respectively. These findings show that the frame difference using QVGA sized images, which are commonly utilized as the recording image size of surveillance cameras, can be performed within less than 100 ms, which provides the best performance in occupancy activity analysis, on the single-board computer. Hence, the image-based sensor, which achieves the results described in Section 4.2.1, can be implemented on the low-cost single-board computer under the most frequently utilized and bestperformance settings, as discussed in Section 4.1.
Measured power consumption
The minimum, average ± standard deviation, and the maximum measured power consumption of the lowpower image-based sensor were 1.01, 1.04 ± 0.01, and 1.22 W, respectively. Thus the rated power consumption of the low-power image-based sensor was set to 1.22 W, as discussed in Section 4.1.2. Although the rated power consumption of the low-power image-based sensor was larger than the minimum rated power consumption of the commercial PIR sensors in Table 2, it was 2.78 W (corresponding to 69.50%) smaller than the minimum rated power consumption of commercial image-based sensors. Thus the power consumption of the commercial image-based sensor can be decreased by implementing only the occupancy state measurement on the single-board computer. Table 3 shows the comprehensive energy-saving rate of the occupancy control based on the commercial PIR sensor, commercial image-based sensor, and low-power image-based sensor using practical and minimum time delays. The total comprehensive energysaving rate was obtained from the occupancy control for all the times listed in Table 1. The ranges of the comprehensive energy-saving rate by the commercial sensors were provided by those of the rated power consumption as presented in Table 2. The minimum and maximum values were given by commercial sensors which have maximum and minimum rated power consumption, respectively. On the other hand, since Table 3. Comprehensive energy-saving rate (%) from occupancy control based on each sensor ("C*" and "LP*" represent "commercial" and "low-power", respectively). the rated power consumption of the low-power imagebased sensor was determined uniquely in Section 4.2.3, the comprehensive energy-saving rate does not have the range. When the minimum time delay was set, the comprehensive energy-saving rate provided by each sensor increased compared with the practical time delay. For example, the total comprehensive energy-saving rate provided by each sensor increased by 3.25% or more when the minimum time delay was set. These results stemmed from the fact that the minimum time delay of the commercial PIR sensor and both image-based sensors decreased by 4.42 and 2.34 min, respectively, compared with the practical time delay, as shown in Section 4.2.1.
Comprehensive energy-saving rate
The total comprehensive energy-saving rate provided by the commercial PIR sensors ranged from 11.74% to 14.78% and from 15.00% to 18.03% when the practical and minimum time delays were set, respectively. The total comprehensive energy-saving rate provided by the commercial image-based sensors ranged from 5.68% to 11.74% and from 10.12% to 16.18% when the practical and minimum time delays were set, respectively. The total comprehensive energy-saving rate provided by the low-power image-based sensors was 20.16% and 24.61% when the practical and minimum time delays were set, respectively.
The results in Table 3 suggest that the commercial image-based sensors could not decrease the total comprehensive energy-saving rate of the occupancy control compared with the commercial PIR sensor that had the minimum rated power consumption. Thus, the power consumption of the commercial image-based sensor should be decreased so as to increase the total comprehensive energy-saving rate compared with the commercial PIR sensor. On the other hand, the low-power image-based sensor could increase the total comprehensive energy-saving rate of the occupancy control by 5.38% or more and 8.42% or more compared with the commercial PIR sensor and commercial image-based sensor, respectively.
The commercial image-based sensor decreased the comprehensive energy-saving rate compared with the commercial PIR sensors at D-1, D-2, D-5, D-6, and D-7. However, the low-power image-based sensor was found to increase the comprehensive energy-saving rate compared with the commercial PIR sensors on all days. This was evidenced by the fact that the rated power consumption of the low-power image-based sensor was reduced compared to that of the commercial imagebased sensor.
Discussion
This section offers an additional practical discussion of the low-power image-based sensor and the findings of our experiment. In Section 5.1, we discuss further improvement of image-based sensor in the comprehensive energy-saving rate and in the occupancy measurement accuracy, and in Section 5.2, we discuss an application situation of each sensor. In Section 5.3, we present more generalizable findings by assuming that a living room is mounted with a different LED light, and in Section 5.4, we discuss the scope of our experiment, which was designed to provide more general experimental results.
Energy-saving rate
The single-board computer consumes energy even when it is in an idle state because it typically runs an operating system (OS) [40]. The Raspberry Pi, which we used as the single-board computer to implement the low-power image-based sensor, also runs the Linuxbased Raspbian OS.
Therefore, the power consumption of the low-power image-based sensor might be reduced if we implement it on a processor without the OS. This section describes an estimate of the comprehensive energy-saving rate of the occupancy control based on the image-based sensor, which measures only the occupancy state, implemented on a processor without the OS (hereafter called "OS-free image-based sensor ").
The rated power consumption of the OS-free imagebased sensor (p n ) can be estimated by subtracting the rated power consumption of the single-board computer during an idle state (p s ) from that of the low-power image-based sensor (p i ), as Because the rated power consumption of the singleboard computer in an idle state and the low-power image-based sensor was 0.72 and 1.22 W, respectively, that of the OS-free image-based sensor was estimated to be 0.50 W by applying the above equation. Table 4 shows the comprehensive energy-saving rate of the occupancy control based on the OS-free imagebased sensor, which had the rated power consumption estimated using Equation 8. Here, we discuss the effectiveness of the OS-free image-based sensor compared with the other sensors from Table 3. In contrast to the comprehensive energy-saving rate provided by the other sensor, which was not positive at D-2 when the practical time delay was set, that provided by the OS-free image-based sensor was 1.56% or more. In addition, the total comprehensive energy-saving rate provided by the OS-free image-based sensor was found to be 22.34% and 26.79% when the practical and minimum time delays were set, respectively. This finding demonstrates that the OS-free image-based sensor increased the total comprehensive energy-saving rate by 2.18% or more compared with the other sensors.
Occupancy measurement accuracy
The inter-frame time of the frame difference was set to 100 ms in the experiment, since this setting is expected to provide the best performance in the occupancy activity analysis. When the inter-frame time is increased, the small motion is easier to be detected, because the obvious difference in the appearance can be found between previous and current images. Thus we picked up a scene (4:00 p.m. to 5:00 p.m. at D-2), in which the occupant continuously performed the finger-and hand-level motion to control a mouse and to type a keyboard of the computer for 22 min, so as to discuss the measurement accuracy for the small motion. Figure 5 represents the precision, recall, F-measure provided by the frame difference at different interframe time. The transition of the recall indicates that the false-negative was decreased at larger inter-frame time just as expected. However, the false-positive, which causes the lights to remain turned on in the unoccupied space, was found to be conversely increased from the transition of the precision. The maximum F-measure of 92.71% at 2300 ms inter-frame time was higher than that of 71.07% at 100 ms by 21.64%. Hence, these results suggest that the inter-frame time is required to be tuned carefully for the occupancy measurement, especially for that to the small motion.
In addition, we consider that the threshold parameters of the frame difference should be determined adaptively on the basis of the environment as in [41]. However, because the algorithm of the adaptive thresholding is more complicated, the effectiveness is required to be checked at various conditions.
Application situation
Our experiment demonstrated the effectiveness of the low-power image-based sensor compared with the commercial PIR sensor. The results show that the low-power image-based sensor is expected to increase the comprehensive energy-saving rate under the room where the occupants perform the small motion such as living rooms and offices.
However, the image-based sensor may not increase the comprehensive energy-saving rate compared with the PIR sensor under the condition that the minimum time delay cannot be considerably decreased. To verify this statement, we picked up a scene (6:00 p.m. to 7:00 p.m. at D-2), in which the occupants most frequently entered or left the room once a minute on average, and evaluated the comprehensive energy-saving rate. The minimum time delays of the commercial PIR sensor and both image-based sensors were required to be set to 66.41 s and 43.81 s, respectively. The difference in the minimum time delays was not large compared with that in Section 4.2.1 due to frequent and large motion provided by repeatedly entrancing and leaving the room. When these minimum time delays were set, the comprehensive energy-saving rate of the commercial PIR sensor, the commercial image-based sensor, and the low-power image-based sensor were 42.13% to 45.16%, 30.96% to 37.02%, and 45.45%, respectively. These results suggest that the low-power image-based sensor was not always effective in the rooms, in which the frequent and large motion is expected to be found, such as a corridor, because it increased the comprehensive energy-saving rate only by 0.29% compared with the commercial PIR sensor that had the minimum rated power consumption in this scene.
In addition, an acceptability of the image-based sensor is lower, because it raises significant privacy and security concerns of the occupants. Thus the imagebased sensor is difficult to be introduced to lavatory or locker room, in which the privacy is explicitly protected. On the other hand, the PIR sensor has been introduced to these rooms due to its privacy and security preserving capability.
Application of other lights
This section discusses the total comprehensive energysaving rate under the condition where other LED lights were installed in a room of the same size as the living room (see Figure 4). We surveyed the power consumption of the currently available LED lights produced by four manufacturers and found that the minimum and maximum rated power consumptions (l min and l max , respectively) of the LED lights were 21.30 and 39.00 W, respectively. Table 5 shows the total comprehensive energy-saving rate of the occupancy control for the currently available LED lights. Although the total comprehensive energy-saving rate of the commercial image-based sensor decreased compared with the commercial PIR sensor when a lower power light was mounted, that of the low-power image-based sensor increased the total comprehensive energy-saving rate by 5.02% or more compared with the commercial PIR sensor. From these comparisons, we conclude that the low-power imagebased sensor should be developed and commercialized to increase the comprehensive energy-saving rate.
The total comprehensive energy-saving rate provided by the low-power image-based sensor was larger than that by the PIR sensor when lights with a power consumption of more than 3.57 W were applied, which is 17.73 W or more under the rated power consumption of the currently available LED lights.
Scope of this experiment
The effectiveness of the low-power image-based sensor discussed in this paper may not be the same in other environments, as the experiments we conducted were based only on the sensing data obtained from a single Table 5. Total comprehensive energy-saving rate from occupancy control for currently available LED lights (l denotes the power consumption of a light, "C*" and "LP*" represent "commercial" and "low-power", respectively).
Japanese home for 76.5 h over eight days. However, our experiment was designed to enable a discussion of the effectiveness of the low-power image-based sensor in a wide variety of environments.
For example, we selected a Japanese living room in a home occupied by two residents as the experimental field, as the average number of private household members in Japan was 2.3 as of 2019 [42]. Moreover, we focused on the living room because occupants tend to spend most of their time in the living room in the evening, which is when lights are typically turned on [43]. In addition, we chose settings in which the low-power image-based sensor applies frame difference to QVGA sized images at 100 ms inter-frame time. We assume these settings are frequently selected for image-based sensors because QVGA sized images are commonly utilized by surveillance cameras. Moreover, occupancy activity analysis has demonstrated that the best performance is achieved at the inter-frame time of 100 ms.
Furthermore, we have discussed the effectiveness of the low-power image-based sensor compared with the commercial PIR sensor and commercial image-based sensor, which are currently available, under additional conditions in which the living room was mounted with different LED lights or in which the occupants entered and left the room most frequently. Our objective with this was to provide broader findings by avoiding a discussion of only one condition.
Although our experiment and discussion were designed to examine the effectiveness in as wide a variety of environments as possible, it did not cover all potential conditions and scenarios. For example, it would be beneficial to examine other room types to provide a more general discussion, since occupancy patterns (e.g. the frequency and size of the motion), which affect the occupancy measurement accuracy of the sensors, may be different for different rooms. In short, further experimentation is required to clarify the effectiveness of the low-power image-based sensor.
Conclusion
In this study, we evaluated the effectiveness of occupancy lighting control based on a low-power imagebased sensor, which measures only the occupancy state, implemented on a single-board computer. Experimental results demonstrated that the single-board computer has sufficient computational resources to perform the frame difference algorithm under the most frequently utilized and high-performance settings of the image size and frame rate. In addition, we found that the rated power consumption of the low-power image-based sensor was 1.22 W, which is 2.78 W (corresponding to 69.50%) less than that of a commercial image-based sensor.
Our experiment was conducted in an actual living room where a light with the rated power consumption of 33.00 W was mounted for 8 days. As the results of the experiment indicate, the commercial image-based sensor could not decrease the total comprehensive energysaving rate, which is based on the energy consumption of both the light and the occupancy sensor, as much as the commercial PIR sensor did. In contrast, the lowpower image-based sensor increased the total comprehensive energy-saving rate by 5.38% and 8.42% or more compared with the commercial PIR and image-based sensors, respectively.
We further discussed the effectiveness of the lowpower image-based sensor to provide more general findings. First, the comprehensive energy-saving rate provided by the low-power image-based sensor implemented on a processor without an OS was 2.18% or higher than that provided by the commercial PIR sensor and commercial image-based sensor. Second, the low-power image-based sensor increased the total energy-saving rate compared with the PIR sensor under a light with a power consumption of 3.57 W or more, which covers the range of rated power consumption of the LED lights currently available for use in a living room. Our experiments and discussion suggest that the rated power consumption of the commercial imagebased sensor should be decreased in order to increase the comprehensive energy-saving rate of the occupancy control compared with that based on the commercial PIR sensor.
However, our discussion suggested that the lowpower image-based sensor was not always effective in the room where the occupants perform the frequent and large motion. Although our experiment was designed to examine the effectiveness in as wide a variety of environments as possible, further experimentation is required to cover additional conditions and scenarios. In addition, we found that the parameters of the image-based sensor should be tuned or determined adaptively to increase the occupancy measurement accuracy.
Disclosure statement
No potential conflict of interest was reported by the author(s). | 8,500.8 | 2021-01-01T00:00:00.000 | [
"Engineering"
] |
Efficient cosubstrate enzyme pairs for sequence-specific methyltransferase-directed photolabile caging of DNA
Supplemented with synthetic surrogates of their natural cosubstrate S-adenosyl-L-methione (AdoMet), methyltransferases represent a powerful toolbox for the functionalization of biomolecules. By employing novel cosubstrate derivatives in combination with protein engineering, we show that this chemo-enzymatic method can be used to introduce photolabile protecting groups into DNA even in the presence of AdoMet. This approach enables optochemical control of gene expression in a straight-forward manner and we have termed it reversible methyltransferase directed transfer of photoactivatable groups (re-mTAG).
The use of photolabile protecting groups (''caging groups'') in biochemical research to trigger the activity of a bioactive molecule with light has enabled complex investigations with unprecedented spatiotemporal resolution.It has been very powerful for controlling activity of small molecules including metabolites or lipids in vitro and in vivo and has been expanded to larger biomolecules such as proteins and nucleic acids. 1,2till, the site-specific introduction of caging groups into complex biomolecules remains challenging.While the use of unnatural amino acids by the extension of the genetic code nowadays enables site-specific caging of proteins without size limitation in living cells, 3,4 such a methodology has not been developed for nucleic acids so far.Herein, we present a chemoenzymatic strategy that has the potential to overcome these limitations.
Photoprotected nucleic acids can be prepared either postsynthetically or by introduction of a modified nucleoside into DNA or RNA by means of solid-phase synthesis.0][11][12][13] Still, it relies on extensive synthetic organic chemistry and its adaption for the photocontrol of plasmids or large RNA constructs can be cumbersome.
Taking advantage of the inherent target specificity of enzymes, chemo-enzymatic methods depict another strategy to introduce modifications into complex biomolecules and have already proven valuable for labelling various classes of biomolecules in vitro and in vivo.5][16] This MTasedirected transfer of activated groups (mTAG) 17 has been used for single-molecule optical DNA mapping, 18 tagging of epigenomic target sites 19 as well as RNA labelling 20,21 and has even been used to probe histone and RNA Mtases in living cells. 22,23For this purpose, various AdoMet analogues with extended side-chains bearing transferable alkene, alkyne, amino, 4-vinyl benzyl and very recently photoactivatable moieties have been developed. 16,24,25o overcome the issues inherent to current methods for caging DNA, we herein have addressed the potential of mTAG for the photoregulation of DNA.Thus, we decided to explore the enzymatic transfer of photoactivatable nitrobenzyl (NB) groups using the adenine-N6 MTase M.TaqI from Thermus aquaticus.M.TaqI is well-studied and known to be promiscuous, 14,18 while NB groups are widely applied caging groups. 1 We designed four AdoMet analogues bearing transferable NB groups differing in their methoxy substitution pattern of the NB core (see Fig. 1).While such substitutions are known to increase the kinetics of photocleavage, 26 we further show that they permit us to modulate the efficiency of the enzymatic transfer reaction.Complemented by mutagenesis of the cosubstrate binding site of M.TaqI we were able to generate cosubstrate enzyme pairs which catalyse DNA caging with high efficiency despite presence of the native cosubstrate AdoMet.Finally, we were intrigued to explore the utility of enzymatic DNA caging to control gene expression.
The NB derivatives 2-5 were synthesized by alkylation of S-adenosylhomocysteine (SAH, 6) with the respective halides (7-10) using reported conditions. 14Purification and isolation of the enzymatically active (S)-diastereomers using preparative reversed-phase HPLC was confirmed by HPLC, MS and 1 H-NMR.Half-lives for 2-5 were determined in the range of 3-4 hours under physiological conditions (Fig. S5, ESI †) suitable for enzymatic reactions and comparable with previously reported data. 24,27o evaluate MTase-directed caging as well as the uncaging reaction in more detail, we developed an in vitro activity assay.Shortly, a hemi-methylated duplex ODNIÁODNII Me (Fig. 2a) containing the palindromic M.TaqI recognition sequence 5 0 -TCGA-3 0 was incubated with M.TaqI and cosubstrate under saturating conditions (Fig. S15, ESI †) The reaction was monitored using ion-pairing reversed-phase UHPLC-MS analysis at elevated temperatures to separate both strands which allowed identification as well as quantification of the reaction products.These assays revealed that M.TaqI accepts not only AdoNB (2) -as it was recently reported by Rentmeister et al. 25 -but also catalyses the transfer of extended NB groups fast and efficiently within minutes.Alkylation with Ado4MNB (3) and AdoDMNB ( 5) even proceeds at rates comparable to the native substrate AdoMet (1) (Fig. 2b and c).This is remarkable considering that the transalkylation rates of M.TaqI we and others have observed for allylic cosubstrates are usually at least one order of magnitude smaller than the equivalent methyl transfer 14 (Fig. S12, ESI †).A cause for this effect may be the enhanced reactivity of benzylic positions towards substitution and the conjugative stabilization of the sp 2 -hybridized S N 2-transition state through the aromatic system, which has recently also been observed for RNA MTases. 28rradiating samples of caged duplex ODNI R ÁODNII Me for different time periods at l = 365 nm using a UV hand lamp (6 W) revealed clean and rapid photorelease of uncaged duplex for NB-, 4MNB-and 5MNB-modified ODNI and thus confirmed the reversibility of the enzymatic labelling reaction (Fig. 2d and Fig. S13, ESI †).Merely for ODNI DMNB ÁODNII Me , we were able to observe the formation of side products due to reaction with the respective nitrosoaldehyde.
Encouraged by these results, we addressed the question if cosubstrates 2-5 are able to compete with AdoMet for M.TaqI, thus potentially permitting enzymatic caging of DNA in a cellular context.Hence, we performed alkylation assays under steady state conditions with varying ratios of cosubstrate analogue to AdoMet (Fig. 3a).These experiments revealed striking differences between the NB derivatives; while AdoMet (1) was clearly favoured over the unsubstituted AdoNB (2), the differences in enzymatic transfer efficiency observed between 1 and 3-5 became less with AdoDMNB (5) being nearly equally efficiently utilized by M.TaqI as AdoMet (1).Thus, the excess of cosubstrate surrogate needed to outcompete AdoMet reduced from approximately five-fold for AdoNB (2) to 1.5-fold for AdoDMNB (5) indicating that enzymatic caging of DNA can proceed despite the presence of AdoMet.
Steric engineering of the cosubstrate binding pocket represents another approach to further increase the efficiency of MTase-catalyzed reactions with artificial cosubstrates. 29Albeit wild-type M.TaqI is already known to be very promiscuous and employs NB-derived cosubstrates with high efficiency, we became intrigued to revisit the crystal structure (PDB: 1g38) of M.TaqI.Hereby, we concluded that V21, which is in close proximity to the sulfur atom of the cosubstrate, might sterically interfere with extended NB groups (Fig. 3b).Thus, we generated a single-residue mutant of M.TaqI by replacing V21 with glycine and subsequently analysed the reactivity profile of M.TaqI V21G with our established in vitro activity assay using artificial cosubstrates 2-5 as well as AdoMet.Interestingly, transfer rates for the artificial cosubstrates remained mainly unaffected by this mutation indicating that NB groups can be accommodated in the binding pocket without steric clashes.Only for Ado5MNB (4) the transalkylation rate was slightly increased (Fig. S16, ESI †).In contrast to that, we observed a pronounced effect on the rate for methyl transfer, which was slowed down over six-fold and we assume that V21 may play a role in correctly positioning AdoMet in the cosubstrate binding pocket or contributes to its binding affinity via hydrophobic interactions.This significant drop in transmethylation rate prompted us to repeat competition with varying ratios of cosubstrate analogue to AdoMet using M.TaqI V21G.And indeed, this mutant demonstrated a strong preference for NB-derived cosubstrates even with AdoMet being present in excess (Fig. 3a).Thus, even at two-fold excess of AdoMet, transalkylation was catalysed at least twice as efficient than transmethylation by M.TaqI V21G and at equimolar amounts of artificial and native cosubstrate the excess of detected NB-modified ODNI over methylated ODNI ranged from around four-fold (AdoNB (2) and Ado5MNB ( 4)) up to 25-fold for AdoDMNB (5) and Ado4MNB (3).Especially Ado4MNB (3) was employed with superior efficiency by M.TaqI V21G outcompeting AdoMet even if this was present in five-fold excess.Thus, with M.TaqI V21G and our NB-derived cosubstrates we have identified MTase cosubstrate pairs which enable site-specific enzymatic caging of DNA in the presence of AdoMet.This is an essential prerequisite for applications in a biological context and strengthens the potential of re-mTAG as an approach for caging DNA in living cells.
To probe the applicability of re-mTAG in a biochemical context, we studied the photoregulation of gene expression exemplified by sfYFP (sfYFP: superfolder yellow fluorescent protein).For that, we made use of three different expression plasmids: construct V1 represents a pET28 vector encoding for sfYFP under the control of the T7 promoter.Since the sfYFP gene itself contains 6 M.TaqI recognition sites distributed over the whole gene (Fig. 4a and Fig. S18, ESI †), we were interested if caging these sites would already enable photoregulation of sfYFP expression.However, since the T7 RNA polymerase (T7 RNAP) is known to form a very stable elongation complex with the template strand during transcription capable of bypassing lesion sites, 30 we further designed plasmids featuring two (V2) or one (V3) additional M.TaqI recognition sites in close proximity downstream to the T7 promoter region (Fig. 4a and Fig. S19, S22, ESI †).
The plasmids were caged using M.TaqI and Ado4MNB and completeness as well as reversibility of the labelling reaction was confirmed by performing restriction assays with the corresponding restriction endonuclease Taq a I (Fig. 4c and Fig. S17, ESI †).Caged and control plasmids were linearized using PsiI and included as templates in in vitro expression assays (for details see ESI †).In-gel fluorescence of the translated sfYFP protein was utilized as a quantitative readout to evaluate the caging groups' effect on gene expression.
Our results demonstrate that re-mTAG does indeed enable photoregulation of gene expression and furthermore highlight the importance of site-specific caging (Fig. 4b).In previous studies, the promoter region itself has been identified as suitable caging site for efficient photoregulation of gene expression by preventing the RNAP from recognizing the promoter region. 5,7ere, we show that introducing caging groups in close proximity downstream to the T7 promotor also results in full -and upon illumination reversible -silencing of transcription, while random caging of the sfYFP gene only partially suppresses gene expression indicating that the T7 RNAP can efficiently transcribe past 4MNB-modified sites (Fig. S21, ESI †).The fact, that already a single 4MNB group located in proximity to the promoter either on coding or noncoding strand (see Fig. S7 and S8, ESI †) aborts transcription completely, is remarkable and we assume that the caging group interferes with the transition of the T7 RNAP from initiation to the highly processive elongation phase.This transition takes place after the nascent RNA is extended to 8-12 nucleotides and represents a critical multistep process accompanied by conformational changes in both RNAP and DNA. 31Albeit M.TaqI targets further sides in the expression vector, gene expression can be restored to its initial value upon illumination demonstrating clean uncaging of the enzymatically caged plasmid and thus resolves a major issue inherent to stochastically caged plasmids.
In conclusion, reversible methyltransferase directed transfer of photoactivatable groups (re-mTAG) represents a powerful enzymatic method to introduce caging groups site-specifically into large DNA molecules and we have demonstrated its feasibility for controlling plasmid-based gene expression using light.We have shown that the methyltransferase M.TaqI acts as a highly potent biocatalyst for photolabelling not only by accepting a variety of nitrobenzyl-modified cosubstrate analogues but also by transferring such groups at comparable rates as the native AdoMet.Furthermore, utilizing protein engineering we have generated an M.TaqI mutant that features a strong preference for NB-derived cosubstrates over AdoMet enabling caging of DNA in the presence of native cofactor with high selectivity.The developed cosubstrate enzyme pairs have the potential to extent the applicability of re-mTAG to environments even as competitive as those inside living cells.In the light of recent discoveries regarding the in vivo synthesis of AdoMet derivatives using cell-permeable methionine analogues 22,23,32 and regarding the continuous need for methods that enable manipulation of processes in living cells, we anticipate a broad range of applications for this method in the near future.
Fig. 1
Fig. 1 Structures of cosubstrates used in this study and synthesis conditions.
Fig. 4
Fig. 4 In vitro transcription/translation assay; (a) constructs used with M.TaqI recognition sites highlighted; (b) relative amounts of sfYFP expressed after 2 h with or without initial irradiation (LED, 5 min, 365 nm), experiments were performed in triplicates; (c) modification restriction assay confirming successful caging and uncaging of linearized plasmid V3 (details see ESI †). | 2,878.4 | 2018-11-08T00:00:00.000 | [
"Chemistry",
"Biology"
] |
MatchMiner: a tool for batch navigation among gene and gene product identifiers
MatchMiner is a freely available program package for batch navigation among gene and gene product identifier types commonly encountered in microarray studies and other forms of 'omic' research.
Rationale
One of the more painful tasks in 'omic' research [1,2] is navigating among different gene or gene product identifiers. After a cDNA microarray experiment, for example, one usually must translate from IMAGE clone ids to GenBank accession numbers, HUGO names, common names, or chromosome locations for a list of genes. As we generate more and more data from diverse platforms and species, such translations will become increasingly complex but also more important to the synthesis of a coherent biological picture. Beyond simply looking up additional information about a list of genes, such synthesis will require the ability to find the intersection between two lists of genes that are designated by the same or a different identifier type.
Currently, the basic translations can be done on a gene-by-gene basis using public databases such as UniGene, LocusLink, OMIM (Online Mendelian Inheritance in Man), and the working draft of the human genome (from the University of California Santa Cruz (UCSC) or the National Center for Biotechnology Information) [3][4][5] or else in batch through Source [6] or GeneLynx [7]. However, no single data source contains all the necessary information about every gene and, to complicate matters further, the relationships among identifiers are often not one-to-one. For example, there may be several GenBank accession numbers and multiple IMAGE clone ids for the same gene, and a single gene symbol may be an alias for multiple different genes. Therefore, any highthroughput solution to the problem must take these challenges into account and respond with an approach that minimizes the need for human intervention. At the same time, those instances when human intervention is necessary must be flagged and enough metadata must be provided for accurate decision-making without extensive further research.
Motivated by many days spent at the computer doing these tedious, time-consuming translations for our own experimental data, we developed MatchMiner [8] as a freely available Open Access public resource that automates the process for collections of genes. MatchMiner provides two primary functions. The first, LookUp, translates an input list of gene identifiers into a matching output list of identifiers of a different type; the second, Merge, combines two separate lists of either the same or different types of identifiers into one list that details all one-to-one, one-to-many, and many-to-many relationships between corresponding gene identifiers in the two lists.
Identifier navigation with MatchMiner
As shown schematically in Figures 1 and 2, MatchMiner leverages information from the four public databases listed above, and from Affymetrix, by parsing them into relational tables for use in doing translations. The LookUp function can operate interactively on single identifiers or in batch mode on a list of identifiers in a file. When used interactively for one or a few genes, it saves the user the trouble of querying five different databases and collating the data. More important, however, is batch querying of a list file, for instance a list of the dozens or hundreds or thousands of genes that show interesting differences between samples in a microarray experiment. In this mode, the user specifies the input and output identifier types, as well as the search algorithms to be used in traversing the various data sources ( Table 1). The program is context-sensitive in that it will search only the pertinent data sources (for example, only UniGene to identify IMAGE clone ids, which are not found in the other sources). An important feature is the optional output of diagnostic metadata that tell the user in which source(s) the identifier was found and whether an input identifier corresponds to more than one gene. This feature enables the user to judge the reliability of matches. The results can be displayed in HTML format or downloaded as The Merge function, the most powerful function of Match-Miner, identifies which genes are common to two input lists of identifiers and gives detailed output of the one-to-one, one-tomany, or many-to-many relationships between corresponding identifiers in the two lists. This function is used, for example, to compare datasets of different experiment types (for example, transcript expression, protein expression, arraybased comparative genomic hybridization (CGH)) by identifying the genes in common between them. The output includes summary tallies as well as a gene-by-gene listing of items matched, unmatched and not found. As with the LookUp function, diagnostic resource information is provided. Any identifier with an ambiguous gene assignment (for example, an IMAGE clone id that belongs to two different UniGene clusters) is flagged for user intervention, with all possible assignments returned.
Performance
In one illustrative case that motivated development of Match-Miner, we (X. Lee, K.J.B., F.G. Gwadry, W.C.R., G. Riddick, S. L. Pelletier, S.N., and J. N.W., unpublished data) had to match up as many as possible of 9,706 cDNA microarray clones [9,10] with HU6,800 Affymetrix chip oligonucleotide sets [11], having run both platforms on the same 60 human cancer cell samples (the NCI-60). To do so, we developed an early form of MatchMiner. The particular task was to identify all relationships between the 9,706 IMAGE clone ids and 7,129 GenBank accession numbers based on UniGene cluster membership. To complete the task manually, one gene at a time at maximum speed (about 30 seconds per gene) would take over 140 hours -even if one could keep accurate track of the results. In contrast, the current version of MatchMiner took 10 minutes on a 750 MHz Pentium III PC with 320 MB RAM to generate the merged list, specifying all possible matches between IMAGE clone ids and GenBank accession numbers. When we compared MatchMiner Merge results with those obtained using the LookUp function for a random sample of the genes, there were no discrepancies. The same task with Source required translating both lists into UniGene clusters and then further processing the data. After identification and reformatting of entries with multiple UniGene cluster associations, the resulting lists were imported into Microsoft Access and queried to create the appropriate matches. The entire procedure gave results similar to those of MatchMiner but took approximately one hour, most of that user time.
With the exception of MatchMiner, tools that can do some kind of translation are geared toward research dealing with expressed sequence, either at the RNA or protein level. However, many interesting questions can be asked from the perspective of genomic sequence. One example relates to the identification of genes represented in an array CGH experiment in which the targets on the chip are fluorescent in situ hybridization (FISH)-and site-tagged sequence (STS)mapped bacterial artificial chromosome (BAC) clones. The challenge is to begin to interpret array CGH results in the context of the biological literature and of other classes of data. BAC clones are not generally annotated by the genes they span, but rather by their position in the cytogenetic and sequence-based maps. Therefore, an association between the BAC clones and genes must be made. MatchMiner provides this function with the ability to search on the FISH clone ids. Mapping of the FISH clones to genes is done by sequence alignment of the BAC ends during off-line construction of the overall MatchMiner database (Figure 3). MatchMiner takes 5 minutes to return the gene symbol for a list of 100 FISH-mapped BACs [12]. Such a search is not possible using other tools. A summary of commonly used analogous tools and their capabilities can be found in Table 2.
As noted previously, identifiers are not always unique or uniquely assigned. For example, GenBank accession numbers are specific to a sequence, but the assignment of that sequence to a gene may change over time. Even more disconcerting, common gene names or aliases are often used by different investigators for different genes. Therefore, it is important to look in detail at the results of searches to check for correspondences other than one-to-one and to examine the data source tags to get a sense of the strength of the association between identifiers. Associating FISH-mapped BACs with genes. Schematic view of FISH-mapped BACs from 1p36.33 near the PITSLRE kinase genes (UCSC Genome Browser, June 2002 freeze). Note that a single BAC can encompass one or more genes. In MatchMiner, the FISH-mapped BAC table from UCSC is imported, and chromosomal positions are read from the table for comparison with the transcriptional start positions of UCSC 'Known Genes'. If a transcriptional start is contained within the bounds of a BAC, that BAC is associated with the corresponding gene index. Thus, a BAC containing several genes will be associated with each of those genes. One non-obvious advantage of MatchMiner is that it can combine information from more than one of the data sources to show matches that could not be made on the basis of any single source. The gene ACVR2B, which has aliases ACTR-IIB and ACTRIIB, provides an example. LocusLink and OMIM both reference the HUGO symbol ACVR2B, but LocusLink does not reference ACTRIIB, and OMIM does not reference ACTR-IIB. Therefore, if one of the aliases were used as input, the success of any search outside of Match-Miner would be data-source dependent.
Algorithm and software development
MatchMiner was written in Java and can be deployed as either a web or command-line application, the latter suitable for high-throughput pipeline purposes. In its design and implementation, we leveraged a variety of open-source tools and libraries, including jUnit (unit testing framework), CVS (configuration management), Xerces (XML parser) and Ant (build tool). Before run-time, data from UniGene, LocusLink, OMIM, UCSC and Affymetrix are downloaded and parsed to generate an integrated database implemented under MySQL. If an entry from the imported data matches a candidate gene that was already identified, it is assigned the same gene index. If an entry does not match any of the candidate genes, then a new gene index is generated. Import begins with data from the UCSC's 'Known Gene' table, followed by UCSC's EST (expressed sequence tag) table, LocusLink, UniGene, OMIM and Affymetrix. Different identifiers are stored in different tables and several tables are required to resolve the many-to-many relationships between identifiers (Figure 2a,b). The central algorithm for resolving identifiers uses an instantiation of the ChainOfResponsibility pattern [13], which combines different searches sequentially in a logical manner. In MatchMiner, it maximizes the likelihood of correctly translating back and forth from identifiers to gene indices using the different databases. The algorithm is non-trivial. For each identifier type, we establish a Chain-OfResponsibility hierarchy of the data sources based on their respective abilities to match the user input ( Table 3). The search algorithms then use this ranking. For example, when an input list of gene names is processed using the 'ALL (HUGO, Alias)' search algorithm, the list is scanned for HUGO names, and each one found is associated with the corresponding unique internal gene index. The remaining unmatched gene names are then scanned again, this time matching aliases ( Table 1). The rationale is that an official HUGO name is more likely to be the desired match, but any match is better than none. A similar approach is used when going from the unique index to an output list. For instance, if the desired output is cytogenetic location, MatchMiner first scans the UCSC build of the human genome. If the location is not found there, LocusLink and Unigene are searched ( Table 1). The ChainOfResponsibility approach enables us to combine the precision of highly focused algorithms with the greater coverage of more broadly based ones.
Although currently human-specific, MatchMiner will be expanded in the near future to incorporate data from other
Download
MatchMiner is available as a web-application or as a command line jar file at [8]. The MatchMiner database is maintained on our server and updated at approximately 6-month intervals. Detailed documentation for both implementations is available at the site.
In summary, MatchMiner is an efficient application for navigating the complex world of gene and gene product identifiers. It can batch search publicly available databases to convert between identifier types and can determine the intersection of two gene lists with different identifiers. MatchMiner will greatly enhance the ability of the research community to annotate and compare 'omic' datasets. | 2,847.6 | 2003-03-25T00:00:00.000 | [
"Computer Science",
"Biology"
] |
The Microwave-Assisted Green Synthesis of TiC Powders
Titanium carbide (TiC) is an important engineering material and has found widespread applications. Currently, TiC is typically synthesized through carbothermal reduction, requiring a high temperature (ca. 1700–2300 °C) and long reaction time (ca. 10–20 h), which is not eco-friendly. During a conventional reaction path, anatase TiO2 (A-TiO2) was first converted to rutile TiO2 (R-TiO2), which was subsequently reduced to TiC. Herein, we explored the synthesis of TiC powders with the assistance of microwave heating. In particular, we achieved the conversion of A-TiO2, which was more reactive than R-TiO2 for the carbothermal reduction, to TiC, which was directly due to quick microwave heating. As such, the carbothermal reduction started at a much lower temperature of ca. 1200 °C and finished within 30 min when reacting at 1400 °C, leading to significant energy saving. This study shows that microwave-assisted synthesis can be an effective and green process for preparing TiC powders, which is promising for future large-scale production. The influence of the reaction temperature, the reaction duration, and the carbon content on the synthesis of TiC powders was investigated.
Currently, TiC powders are mainly synthesized by the carbothermal reduction of TiO 2 , and the heat during the process of reaction is usually provided by an external heating system. The synthesis is typically in the temperature range of 1700-2300 • C for 10-20 h [7]. The size of the synthesized TiC powders not only depends on the size of raw materials, but also on the synthesis conditions. Therefore, it is difficult to prepare uniform TiC powders by conventional heating processes [8,9].
As an alternative heating technology, microwave heating has advantages such as high thermal efficiency, selective heating, quick heating, and short processing time. It has been implemented in many industrial applications in recent years [10][11][12]. The major difference between microwave heating and conventional heating is the inverse temperature profile inside microwave-heated samples [13,14]. The center of the sample becomes hotter than the surface, which is exposed to the colder furnace atmosphere [15]. Many materials including ceramics, polymers, and metallic powders can be directly exposed to microwaves [16]. In microwave carbothermal reduction process, two factors might be favorable for lowering the onset reaction temperature. One is the existence of thermal and non-thermal effects from microwave heating [17]. The other is the uniform heating achieved by the energy directly delivered to the starting materials via molecular-level interactions under an electromagnetic field [18,19]. Microwave-assisted synthesis is attractive for the preparation of TiC because it is faster and more effective than conventional heating. Carbon, one of the reactants for the synthesis of TiC, is a very effective absorber of microwave radiation, offering extra facilitation of the reaction. Cross and coworkers [20,21] made some early explorations of the synthesis of TiC via microwave heating and proved that it was viable and indeed more effective to synthesize TiC via microwave heating.
In this report, microwave heating was adopted to synthesize TiC powders by carbothermal reduction of TiO 2 . The influence of reaction temperature and carbon content on the phase composition of the produced TiC powders was systematically investigated. In particular, we aimed to achieve the direct conversion of anatase TiO 2 (A-TiO 2 ), which was more reactive than rutile TiO 2 (R-TiO 2 ), for the carbothermal reduction to TiC by taking advantage of quick microwave heating.
Synthesis
First, a 20.0 wt % A-TiO 2 ethanol dispersion was ultrasonicated for 30 min to form a uniform system, which was mixed with polyacrylic acid (PAA/TiO 2 = 0.5 wt %) to stabilize the dispersion of A-TiO 2 in ethanol. Subsequently, a pre-determined amount of carbon black was mixed with the above dispersion. The reaction precursor was obtained after evaporating ethanol.
The synthesis was carried out in a KL-2D-16 microwave furnace (Kailin Microwave Equipment Co., Guangzhou, China). During reaction, the microwave frequency was 2450 ± 50 MHz, and microwave power was 6000 W. Thermocouples were used to measure the temperatures during reaction. The entire reaction was carried out under argon protection.
The carbothermal reduction of TiO 2 is a complex process. Under a conventional heating process, before the reduction reaction, A-TiO 2 is first converted to R-TiO 2 . With the rising of temperature, the reduction consists of a series of intermediate reactions, producing various intermediate products and phases (Equations (1)-(4)). The general procedures of the carbothermal reduction reaction between carbon black and R-TiO 2 through conventional heating to generate TiC are as follows [22,23]: (1) Total reaction: In this work, we aimed to achieve a direct reduction of A-TiO 2 to TiC under the assistance of microwave heating, as illustrated in Figure 1. Such a direction reduction route is expected to occur at a lower temperature, thus saving energy and time. In order to investigate the reduction process of TiO 2 by microwave heating, we monitored the phase transition of the reaction products at various temperatures and reaction durations by X-ray diffraction, and studied the influence of carbon content on the synthesis of the TiC powders.
Characterization
Phase composition of the reaction products was determined by X-ray diffraction on a PANalytical X-ray diffractometer (XRD, monochromated Cu Kα radiation) at 25 °C. The morphology of the powders was imaged by scanning electron microscopy (SEM) on a Nova S-430 microscope operated at 20 kV, which was equipped with an energy-dispersive X-ray spectrometer (EDS).
Effect of Temperature and Reaction Time
The XRD patterns of the samples synthesized at various temperatures (1100, 1200, 1300, and 1400 °C) and durations of reaction (10 and 30 min) at a TiO2 and C molar ratio of 1:3.6 are presented in Figure 2.
Characterization
Phase composition of the reaction products was determined by X-ray diffraction on a PANalytical X-ray diffractometer (XRD, monochromated Cu Kα radiation) at 25 • C. The morphology of the powders was imaged by scanning electron microscopy (SEM) on a Nova S-430 microscope operated at 20 kV, which was equipped with an energy-dispersive X-ray spectrometer (EDS).
Effect of Temperature and Reaction Time
The XRD patterns of the samples synthesized at various temperatures (1100, 1200, 1300, and 1400 • C) and durations of reaction (10 and 30 min) at a TiO 2 and C molar ratio of 1:3.6 are presented in Figure 2.
Characterization
Phase composition of the reaction products was determined by X-ray diffraction on a PANalytical X-ray diffractometer (XRD, monochromated Cu Kα radiation) at 25 °C. The morphology of the powders was imaged by scanning electron microscopy (SEM) on a Nova S-430 microscope operated at 20 kV, which was equipped with an energy-dispersive X-ray spectrometer (EDS).
Effect of Temperature and Reaction Time
The XRD patterns of the samples synthesized at various temperatures (1100, 1200, 1300, and 1400 °C) and durations of reaction (10 and 30 min) at a TiO2 and C molar ratio of 1:3.6 are presented in Figure 2. Figure 2A displays the XRD patterns of the samples reacted at 1100 • C for 10 and 30 min. It shows that TiC phase was not formed at 1100 • C for 10 min of reaction, but a small amount of TiC was generated after 30 min of reaction. The conversion from A-TiO 2 to R-TiO 2 was observed at this temperature. A number of studies reported that A-TiO 2 began to transform to R-TiO 2 at a temperature up to 610 • C [24,25]. With an increase in reaction time, more R-TiO 2 was generated, as evidenced by stronger R-TiO 2 phase diffraction peaks in the XRD pattern. The XRD patterns of the samples synthesized at 1200 • C ( Figure 2B) showed that most A-TiO 2 was converted to be R-TiO 2 ; meanwhile, Ti 2 O 3 started to form after 10 min of reaction at 1200 • C. After 30 min of reaction at 1200 • C, the TiC phase started to appear as evidenced by the corresponding XRD peaks. The TiC phase could be clearly observed on the XRD pattern when the reaction temperature was raised to 1300 • C. With an increase in reaction time, a higher concentration of TiC was generated, as supported by the more intensive diffraction peaks of TiC ( Figure 2C). According to the XRD pattern, the virtually pure TiC phase was synthesized after 30 min of reaction at 1400 • C ( Figure 2D).
Based on the above observations, the reaction mechanism is proposed as follows. At 1100 • C, the main reaction was the transformation from A-TiO 2 to R-TiO 2 , but this is a relatively slow process: It was reported that the activity of A-TiO 2 with C was higher than that of R-TiO 2 [26][27][28], so the rate of Reaction (6) is faster than that of Reaction (1): During conventional heating, A-TiO 2 tends to transform to R-TiO 2 before the reduction reaction. This is one of the key reasons that a much higher temperature; thus, much more energy is required to convert the less active R-TiO 2 to TiC. If one can quick heat to directly convert A-TiO 2 to TiC, it is much more favorable in terms of energy consumption. Microwave heating can help achieve this process. Because the rate of microwave heating is very fast, it takes a very short time to raise the temperature from 1100 to 1200 • C. Therefore, when the temperature was quickly increased to 1200 • C, some A-TiO 2 remained. Such unconverted A-TiO 2 can quickly react with C to form TiC at relatively lower temperatures compared with the reduction reaction temperature of R-TiO 2 to TiC. Figure 2C shows that, after 30 min of reaction at 1300 • C, there was still some R-TiO 2 left but no A-TiO 2 . As such, a high heating rate is very desirable to directly reduce A-TiO 2 to TiC, which allows for the synthesis of TiC at lower temperatures while saving energy. After 30 min of reaction at 1300 • C, most TiO 2 was reacted to form TiC, which began to be the dominating phase.
With an increase in reaction temperature and time, it was observed that the (200) peak of the synthesized TiC shifted from 41.90 • to 41.70 • (Figure 3A,B). The lattice constant of the TiC (1300 • C, 10 min) and TiC (1400 • C, 30 min) were calculated to be 4.321 Å and 4.324 Å, lower than the standard value of 4.327 Å. The results show that a higher temperature and longer reaction time are beneficial for the growth of TiC, which was also reported by Preiss et al. [29,30]. Figure 2A displays the XRD patterns of the samples reacted at 1100 °C for 10 and 30 min. It shows that TiC phase was not formed at 1100 °C for 10 min of reaction, but a small amount of TiC was generated after 30 min of reaction. The conversion from A-TiO2 to R-TiO2 was observed at this temperature. A number of studies reported that A-TiO2 began to transform to R-TiO2 at a temperature up to 610 °C [24,25]. With an increase in reaction time, more R-TiO2 was generated, as evidenced by stronger R-TiO2 phase diffraction peaks in the XRD pattern. The XRD patterns of the samples synthesized at 1200 °C ( Figure 2B) showed that most A-TiO2 was converted to be R-TiO2; meanwhile, Ti2O3 started to form after 10 min of reaction at 1200 °C. After 30 min of reaction at 1200 °C, the TiC phase started to appear as evidenced by the corresponding XRD peaks. The TiC phase could be clearly observed on the XRD pattern when the reaction temperature was raised to 1300 °C. With an increase in reaction time, a higher concentration of TiC was generated, as supported by the more intensive diffraction peaks of TiC ( Figure 2C). According to the XRD pattern, the virtually pure TiC phase was synthesized after 30 min of reaction at 1400 °C ( Figure 2D).
Based on the above observations, the reaction mechanism is proposed as follows. At 1100 °C, the main reaction was the transformation from A-TiO2 to R-TiO2, but this is a relatively slow process: A − TiO → R − TiO . (5) It was reported that the activity of A-TiO2 with C was higher than that of R-TiO2 [26][27][28], so the rate of Reaction (6) is faster than that of Reaction (1): During conventional heating, A-TiO2 tends to transform to R-TiO2 before the reduction reaction. This is one of the key reasons that a much higher temperature; thus, much more energy is required to convert the less active R-TiO2 to TiC. If one can quick heat to directly convert A-TiO2 to TiC, it is much more favorable in terms of energy consumption. Microwave heating can help achieve this process. Because the rate of microwave heating is very fast, it takes a very short time to raise the temperature from 1100 to 1200 °C. Therefore, when the temperature was quickly increased to 1200 °C, some A-TiO2 remained. Such unconverted A-TiO2 can quickly react with C to form TiC at relatively lower temperatures compared with the reduction reaction temperature of R-TiO2 to TiC. Figure 2C shows that, after 30 min of reaction at 1300 °C, there was still some R-TiO2 left but no A-TiO2. As such, a high heating rate is very desirable to directly reduce A-TiO2 to TiC, which allows for the synthesis of TiC at lower temperatures while saving energy. After 30 min of reaction at 1300 °C, most TiO2 was reacted to form TiC, which began to be the dominating phase.
With an increase in reaction temperature and time, it was observed that the (200) peak of the synthesized TiC shifted from 41.90° to 41.70° ( Figure 3A,B). The lattice constant of the TiC (1300 °C, 10 min) and TiC (1400 °C, 30 min) were calculated to be 4.321 Å and 4.324 Å, lower than the standard value of 4.327 Å. The results show that a higher temperature and longer reaction time are beneficial for the growth of TiC, which was also reported by Preiss et al. [29,30]. SEM images of the samples synthesized at various temperatures (1100, 1200, 1300, and 1400 °C) for 30 min are shown in Figure 4. With an increase in reaction temperature, the size of the powders became larger, changing from ca. 0.2 μm at 1100 °C to ca. 0.4 μm at 1200 °C. When the reaction temperature was increased to 1300 °C, the TiC powders exhibited a pseudocubic morphology ( Figure 4C) with a particle size of ca. 0.7 μm. It should be noted that, although no other diffraction peaks were observed in Figure 2D, amorphous carbon should still exist because excessive carbon was added to ensure a complete carbothermal reduction, because carbon black acts as both the carbon source and the media to transform microwave dielectric to heat.
Effect of Carbon Content
TiO2 is a poor microwave absorber, while carbon black absorbs microwave very effectively. During the reaction, carbon black not only participates in the reaction but also serves as a medium to absorb microwave radiation to help heat the mixture of the reactants. Its content was reduced along SEM images of the samples synthesized at various temperatures (1100, 1200, 1300, and 1400 • C) for 30 min are shown in Figure 4. With an increase in reaction temperature, the size of the powders became larger, changing from ca. 0.2 µm at 1100 • C to ca. 0.4 µm at 1200 • C. When the reaction temperature was increased to 1300 • C, the TiC powders exhibited a pseudocubic morphology ( Figure 4C) with a particle size of ca. 0.7 µm. It should be noted that, although no other diffraction peaks were observed in Figure 2D, amorphous carbon should still exist because excessive carbon was added to ensure a complete carbothermal reduction, because carbon black acts as both the carbon source and the media to transform microwave dielectric to heat. SEM images of the samples synthesized at various temperatures (1100, 1200, 1300, and 1400 °C) for 30 min are shown in Figure 4. With an increase in reaction temperature, the size of the powders became larger, changing from ca. 0.2 μm at 1100 °C to ca. 0.4 μm at 1200 °C. When the reaction temperature was increased to 1300 °C, the TiC powders exhibited a pseudocubic morphology ( Figure 4C) with a particle size of ca. 0.7 μm. It should be noted that, although no other diffraction peaks were observed in Figure 2D, amorphous carbon should still exist because excessive carbon was added to ensure a complete carbothermal reduction, because carbon black acts as both the carbon source and the media to transform microwave dielectric to heat.
Effect of Carbon Content
TiO2 is a poor microwave absorber, while carbon black absorbs microwave very effectively. During the reaction, carbon black not only participates in the reaction but also serves as a medium to absorb microwave radiation to help heat the mixture of the reactants. Its content was reduced along
Effect of Carbon Content
TiO 2 is a poor microwave absorber, while carbon black absorbs microwave very effectively. During the reaction, carbon black not only participates in the reaction but also serves as a medium to absorb microwave radiation to help heat the mixture of the reactants. Its content was reduced along the reaction; therefore, excessive carbon black was added during the reactions. In order to study the role of carbon content in the process of the reduction of TiO 2 , mixtures of TiO 2 and C at various molar ratios (1:3.0; 1:3.2; 1:3.4, and 1:3.6) were reacted at 1400 • C for 30 min via microwave heating. Table 1 presents the results calculated from the data of the XRD and EDS measurements of the products from the samples starting at various TiO 2 and C mole ratios. The results showed that, with an increasing amount of carbon in the mixture, both the concentration and lattice constant of TiC increased, which suggests that a higher concentration of C is favorable for the growth of TiC.
Conclusions
In conclusion, our experimental results showed that pure and pseudocubic TiC phase could be synthesized by directly reducing A-TiO 2 instead of R-TiO 2 with the assistance of quick microwave heating. The lattice constant of TiC increases with an increasing reaction temperature and time. Carbon black acts as both the carbon source and the media to transform microwave dielectric to heat. A high ratio of C to TiO 2 in the starting materials is favorable for improving the conversion rate and quality of TiC powders. Compared with the carbothermal reduction using conventional heating to synthesize TiC powders, which typically requires a temperature of 1700-2400 • C for 10 to 24 h, TiC powders could be synthesized at a temperature of as low as 1200 • C during microwave heating, and the carbothermal reduction can be finished within 30 min at 1400 • C. Therefore, the preparation of TiC powders via microwave heating is much more energy-effective and thus promising for large-scale production. | 4,429.6 | 2016-11-01T00:00:00.000 | [
"Materials Science"
] |
RIS Meets Aerodynamic HAPS: A Multi-Objective Optimization Approach
In this letter, we propose a novel network architecture for integrating terrestrial and non-terrestrial networks (NTNs) to establish connection between terrestrial ground stations which are unconnected due to blockage. We propose a new network framework where reconfigurable intelligent surface (RIS) is mounted on an aerodynamic high altitude platform station (HAPS), referred to as aerodynamic HAPS-RIS. This can be one of the promising candidates among non-terrestrial RIS (NT-RIS) platforms. We formulate a mathematical model of the cascade channel gain and time-varying effects based on the predictable mobility of the aerodynamic HAPS-RIS. We propose a multi-objective optimization problem for designing the RIS phase shifts to maximize the cascade channel gain while forcing the Doppler spread to zero, and minimizing the delay spread upper bound. Considering an RIS reference element, we find a closed-form solution to this optimization problem based on the Pareto optimality of the aforementioned objective functions. Finally, we evaluate and show the effective performance of our proposed closed-form solution through numerical simulations.
I. INTRODUCTION
One of the most important targets in sixth generation wireless networks (6G) is the provision of ubiquitous connectivity.This aim can be attained by integration of terrestrial and nonterrestrial networks (NTNs), [1].To this end, reconfigurable intelligent surface (RIS) can be exploited to boost the channel gain by creating a multi-path environment.Non-terrestrial RIS (NT-RIS) is an intelligent intermediate reflection layer, where RIS is mounted on a non-terrestrial platform to connect the unconnected terrestrial infrastructures.Extensive research has been conducted to address the benefits of adopting NT-RIS in wireless networks, see [2]- [5] and the references therein.In practical cases, high altitude platform station (HAPS)-RIS is one of the promising candidates to be exploited for NT-RIS compared to other non-terrestrial platforms such as satellite-RIS and unmanned aerial vehicle (UAV)-RIS, [4], [5].HAPS operates at much higher altitude which leads to establishing line of sight (LoS) dominated connection and a much wider coverage area compared to UAV.Furthermore, HAPS is much larger than UAV so that RIS with a large number of elements can be mounted on it [2], [4].The advantages of exploiting RIS over relay is well articulated in [2], e.g., lower-cost, simpler hardware, shorter transmission delay, less power consumption, and longer communication duration.In [6], the authors prove that if the RIS is large enough it can beat the relay in terms of energy-efficiency.
Due to the large size of HAPS, large number of RIS elements can be mounted on HAPS, and hence, RIS is the better option.Even if a large number of RIS elements are deployed, the HAPS's payload is light due to the thin and lightweight materials from which the RIS elements are manufactured [2].From the perspective of HAPS mobility, there are two types of HAPSs, aerostatic and aerodynamic, [7].The investigation of HAPS-RIS communications is still in its infancy.The existing literature on this topic is mostly focused on aerostatic HAPS-RIS, [4], [5], [8], [9], while the aerodynamic HAPS-RIS is left as an open research topic.The necessity of research on this direction has been emphasised in [10].The advantages of exploiting aerodynamic over aerostatic HAPS in wireless networks are well articulated in [7], e.g., low-cost and swift deployment, and high resilience to turbulence.These features make aerodynamic HAPS a promising candidate technology in the move towards integration of terrestrial and non-terrestrial networks, [7].However, high mobility of aerodynamic HAPS leads to time-varying channel effects.Accordingly, the main research question that arises is "Can aerodynamic HAPS-RIS bring connectivity to the unconnected ground stations in presence of time-varying channel?".
There exist a number of works in the literature that consider RIS-based networks in the presence of time-varying channel, which can be classified into two groups where RIS is fixed, [11]- [13], or mobile, [14]- [16].Our proposed network architecture in this letter falls under the area of the latter one, where the RIS is mobile.In [14] and [15], the authors present efficient Doppler shift mitigation methods, including transmission protocol and RIS phase shift control, where both of RIS and user equipment are deployed in a high-mobility terrestrial vehicle.The main difference between [14] and [15], is the design of the transmission protocol.In [16], the authors present a cooperative passive beamforming and distributed channel estimation to maximize the overall channel gain between an RIS-aided low-earth orbit satellite and a ground node.While the main focus of [14]- [16] is channel estimation, to the best of our knowledge, there is no existing work which geometrically formulates all the channel metrics and timevarying effects based on predictive mobility of RIS, which can play a vital role in reducing the computational complexity.Furthermore, the authors in [14]- [16] only consider one side of the cascade channel to be time-varying, while in this letter we investigate the case where both sides of the cascade channel are time-varying.To summarize, this letter addresses the aforementioned gaps in the literature with the ensuing contributions: (i) We introduce a novel network architecture for NT-RIS assisted networks.We propose a new system model where RIS is mounted on aerodynamic HAPS to connect the unconnected terrestrial ground stations in emergency situations thanks to significant features of aerodynamic HAPS.(ii) We mathematically model the mobility pattern of each RIS element based on the dimensions of the RIS and the RIS elements, and the predictive trajectory of the aerodynamic HAPS-RIS.Next, we obtain a geometrical model for all the channel metrics and time-varying effects.To the best of our knowledge, there is no work which geometrically models the the mobility profile of a mobile RIS based on these parameters.(iii) We propose a multi-objective optimization problem in which the objective functions are the channel gain, the delay spread upper bound and the Doppler spread.We obtain a closed-from solution for the RIS phase shifts, by introducing a reference RIS element, adopting Pareto optimality.The obtained closed-form is based on the known locations of Tx and Rx, and the known time-varying position of the aerodynamic HAPS-RIS, thanks to knowing its mobility pattern.This leads to practical and simple implementation as the RIS phase shifts can be efficiently calculated by the onboard processing unit on HAPS.
II. SYSTEM MODEL AND PROBLEM FORMULATION
In this letter, we consider a heavy blockage scenario where the link between the terrestrial transmitter (Tx) and receiver (Rx) is blocked.We consider the network architecture in Fig. 1, exploiting aerodynamic HAPS-RIS to connect the unconnected ground stations.We consider the RIS to be a rectangle with the length a and the width b, which is located on the bottom of the HAPS in the xy-plane.d x and d y are the dimensions of each RIS element, which are in the range of [ λc 10 , λc 5 ] where λ c = c0 fc is the carrier wavelength [17].f c is the carrier frequency and c 0 is the speed of light.The RIS consists of P = a dx columns and Q = b dy rows of reflecting elements.The aerodynamic HAPS has a circular movement in the stratosphere with radius R 0 centered at the origin of the Cartesian coordinate system and the velocity v.It is not practical to consider different trajectories for the aerodynamic HAPS, like what is expected for UAVs.It is vital to consider the circular trajectory for the aerodynamic HAPS, which leads to quasi-stationary position, that brings resilience to turbulence [7].As the aerodynamic HAPS is moving with high speed, both sides of the cascade channel for each RIS element are time-varying.This can be clearly observed in Fig. 2 which shows the geometrical mobility pattern of RIS elements based on the predictive mobility of the aerodynamic HAPS.Definition 1.The geometrical mobility pattern of the RIS elements can be attained as a function of the predictive mobility of the aerodynamic HAPS, and the dimensions a, b, d x , and d y as (x p,q (t) , y p,q (t), z p,q (t)) = (R p,q cos( vt Rp,q + α p,q ), R p,q sin( vt Rp,q + α p,q ), 0) where We consider a LoS dominated scenario for the links between Tx/Rx and HAPS-RIS as the aerodynamic HAPS-RIS flies in a high altitude, i.e., 20 km [7].Furthermore, in HAPS-RIS scenarios, the ground stations are considered as high directional antenna gain transceivers which leads to establishing strong and dominant LoS link [8], [9].The Tx sends a passband signal where s(t) is the complex baseband signal with bandwidth B/2 which is modulated to the carrier frequency f c satisfying B ≪ 2f c , [18].Thus, the received baseband signal can be shown as where Γ p,q (t) is the cascade channel gain coefficient for the RIS element (p, q) and n(t) is the additive white Gaussian noise (AWGN).Additionally, ψ p,q (t) is the phase shift of the RIS element (p, q).Using the Friis model, [19], Γ p,q (t) is the multiplication of the ground to air and air to ground amplitude gains as where S ∈ {T, R} represents the Tx/Rx.The distance between the RIS element (p, q) and the Tx/Rx can be calculated as d S p,q (t) = (x p,q (t) − x S ) 2 + (y p,q (t) − y S ) 2 + (z p,q (t) − z S ) 2 .Moreover, g S p,q (t) is the antenna gain of RIS element (p, q) to S, which can be a function of θ S p,q (t) ∈ [0, π] and ϕ S p,q (t) ∈ [0, 2π].We consider that g S p,q (t) is zero for θ S p,q (t) ∈ π 2 , π .The term θ S p,q (t) = arccos( ) is the elevation angle from the RIS element (p, q) to S. The term ϕ S p,q (t) = arctan( ) is the azimuth angle from the RIS element (p, q) to S. Furthermore, g p,q S (t) is the antenna gain of the Tx/Rx to/from the RIS element (p, q).The terms θ p,q S (t) and ϕ p,q S (t) are the angle of elevation and azimuth from S to the RIS element (p, q), respectively.τ p,q (t) is the cascade path delay for the RIS element (p, q), which can be formulated as τ p,q (t) = S d S p,q (t) c0 .The instantaneous cascade channel gain as the ratio between the received power, P R (t), and the time-invariant transmit power, P T , can be obtained as P R (t) The time-varying effects caused by the cascade paths through the RIS elements, i.e., the Doppler spread, B Do (t), and the delay spread, T De (t), can be obtained as [13], [18] T De (t) = max p,q {τ p,q (t) + ψ p,q (t) 2πf c } − min p,q {τ p,q (t)+ ψ p,q (t) 2πf c }.
(6) We consider a multi-objective optimization problem, including analogous objective functions as in [13], to maximize (4) while minimizing ( 5) and ( 6) simultaneously.Therefore, the main optimization problem can be formulated as OP : max ∀p,q,t: ψp,q(t)≥0 P R (t) The feasible set ψ p,q (t) ≥ 0, originates from the causality requirement [18].We consider a mobile RIS where both sides of the cascade channel are time-varying while in [13] the RIS is fixed and only one side of the cascade channel is timevarying.Furthermore, we consider the link between Tx and Rx is blocked while in [13] the direct link is available.The adopted technique in [13] does not work for our proposed model to solve OP.To tackle this issue, as can be seen in Fig. 2, we consider a single RIS element as a reference with variable phase shift ψ p0,q0 (t), so that the other phase shifts can be obtained based on that.The cascade path through the reference element is called reference path.
III. PROPOSED RIS PHASE SHIFT DESIGN
To find the optimal solution of OP, let us consider the search space as the set Ψ.Even if we relax the continuous RIS phase shifts to discrete ones with M quantization levels, to simplify the problem, the search space has M P Q states.As this is a massive number for a large number of RIS elements, finding the optimal solution is intractable in terms of computational complexity.For large values of M , to get close to the continuous case, the search space Ψ becomes prohibitively large.Thus, it is evident that if the phase shifts are continuous like our proposed scenario, solving (7) is not affordable in terms of computational complexity.To tackle this issue, we find the Pareto optimal solution of OP in Proposition 1 by decomposing OP into OP 1 and OP 2 .Proposition 1.Let us decompose OP into OP 1 and OP 2 as Fig. 3.The proposed Pareto Optimal Solution Method OP 1 : ∀p, q, t : ψ p,q (t) ∈ arg max OP 2 : ∀p, q, t : ψ p,q (t) ∈ arg min As OP 1 and OP 2 can not be optimized simultaneously, we consider that OP 1 has the higher priority order compared to OP 2 , which is elaborated later in Lemma 2. As can be seen in Fig. 3, OP 1 optimizes PR(t) P T and B Do (t) simultaneously.Let us consider all the possible solutions of OP 1 is the solution set χ OP2 which is a feasible set for OP 2 .In OP 2 , we optimize the delay spread upper bound, T upp De (t), over the feasible set ψ ⊂ χ OP2 resulting from solving OP 1 .The Pareto optimal closed-form solution of (7) is where mod(µ, η) is the remainder of the division of µ by η. τ p0,q0 (t) is the delay spread through the RIS reference element.
There is no need to consider ψ p,q (t) ≤ 2π as it is already satisfied in our closed-form solution.
Proof.The Pareto optimal solution can be attained based on lemma 1 and 2. Lemma 1.As the Doppler spread, B Do (t) in (5), is a function of RIS phase shifts, using the following criterion in RIS phase shift design, this effect becomes zero.
Proof.The Doppler spread can be represented as where the Doppler spread between the reference path and other cascade paths is and the Doppler spread between the cascade paths except reference path is In order to make the Doppler spread zero, we force both B Do,1 and B Do,2 to zero, which leads to (11).Lemma 2. The Pareto optimal solution, (10), optimizes (4) and (5) simultaneously, and after that minimizes T upp De (t).
Proof.After forcing Doppler spread to zero, we have a feasible set for ψ p,q (t) based on (11).First, we integrate (11) with respect to t and substitute the result into (4).In the next step, in order to maximize the instantaneous cascade channel gain, all the terms of (4) should have the same phase.Therefore, ∀p, q choosing ψ p,q (t) = ψ p0,q0 (t) p = p 0 , q = q 0 , 2πf c ̟ p,q (t) + 2πζ p,q (t) + ψ p0,q0 (t) Otherwise, (15) and ζ p,q (t) ∈ Z, maximize (4).It is clear that (4) is the most important metric among the objective functions, which leads to maximizing signal-to-noise ratio.From ( 11) and ( 15), we see that ( 4) and ( 5) can be simultaneously optimized, irrespective of the phase shift ψ p0,q0 (t).Due to the causality requirement, ψ p,q (t) ≥ 0, we can attain the upper bound of delay spread based on (6) as {τ p,q (t)}.( 16) From ( 15) and ( 16), it is obvious that there is no single solution for optimizing OP 1 and OP 2 simultaneously.As ( 16) is an increasing function in ψ p,q (t), zero phase shift is needed ∀p, q, t to minimize (16), which is impossible according to (15).Instead, there are infinite non-inferior solutions, [20].By substituting ( 15) into ( 16), the delay spread upper bound can be obtained based on the possible solutions of OP 1 as {τ p,q (t)}.
(17) In the following, we minimize the objective function in OP 2 .Based on (15) and the causality requirement, ψ p,q (t) ≥ 0, we have from ( 18) and since ζ p,q (t) ∈ Z, the minimum value of ζ p,q (t) can be obtained as which is a decreasing function with respect to ψ p0,q0 (t).Equation ( 17) includes additional increasing function, i.e., ψp 0 ,q 0 (t) 2πfc .By substituting ζ min p,q (t) into (17), it is obvious that the variation of ψ p0,q0 (t) ∈ [0, 2π] results in a small variation, less than 1 fc , in T upp De (t).Hence, we relax t)⌉, which turns (17) into an increasing function with respect to ψ p0,q0 (t).Accordingly, the closed-form solution for the RIS phase shifts are obtained as (10) by considering ψ p0,q0 (t) = 0 and substituting ζ R p,q (t) into (15).This closed-form solution is Pareto optimal based on Th. 4.2.1 in [21].Accordingly, (10) jointly optimizes (4) and ( 5), as the first priority order, and minimizes (16) as the second priority order.Another potential solution of OP is to reverse the priority order between OP 1 and OP 2 .This reversed priority approach leads to non-efficient solution that is presented later in Section IV.
Corollary 1.With this Pareto optimal solution, the Doppler spread is zero, the maximum value for the instantaneous cascade channel gain is achieved as T upp,min De (t) = max{τ p0,q0 (t), max fc }} − min p,q {τ p,q (t)}.
(19)
As can be seen in Fig. 2, all the circular paths of the RIS elements have the same center located exactly between the Tx and Rx.With this symmetrical feature, the proposed closedform solution works for all time slots due to the periodicity of the mobility patterns of the RIS elements.
IV. NUMERICAL EVALUATIONS
In this section, we evaluate the performance of our proposed RIS phase shift design in Section III.We assume a circular path with the origin (0, 0, 0) and the radius R 0 = 3 km parallel to xy-plane.The RIS dimensions are chosen in a way such that a = 20 × b, i.e., the length is much larger than the width.This is because the RIS is mounted below the HAPS wing, as in Fig. 1.The RIS element dimensions are chosen as d x = d y = λc 5 , where f c = 2 GHz, and hence, the total number of RIS elements can be obtained as HAPS altitude and velocity of 20 km and v = 110 km/h are used in our simulations, respectively.These parameters are inline with the specifications of one of the well-known aerodynamic HAPS, HAWK30, [7], [22].The terrestrial Tx and Rx coordinates in the scale of km are (x T , y T , z T ) = (−5, 0, 20) and (x R , y R , z R ) = (5,0,20), respectively.The planar antenna gain of RIS element (p, q) to S can be considered as g S p,q θ S p,q (t) , ϕ S p,q (t) = 4π λ 2 c d x d y cos θ S p,q (t) for θ S p,q (t) ∈ 0, π 2 and zero otherwise, [13].The gains of the transmit and receive antennas are g p,q S (t) = 1.As mentioned in Lemma 2, an alternative approach to our proposed method is optimization with reversed priority, i.e., reversing the order of OP 1 and OP 2 in the optimization process.Hence, in the following, we compare our proposed method with this alternative approach.In Fig. 4(a) and Fig. 4(b), the cascade channel gain and T upp De (t) of the proposed method and reversed approach are compared at a snapshot t 0 = 10 s.Using the result of Corollary 1, in Fig. 4(a), we plot the cascade channel gain versus RIS dimensions.As can be seen, in Fig. 4(a), the reversed approach leads to a poor performance compared to our proposed method.Exploiting the proposed method makes the cascade channel gain controllable and it can be constructively increased by increasing the RIS dimensions.In contrast, the cascade channel gain based on reversed approach is uncontrollable as the only controllable parameter, i.e., the RIS phase shifts are fixed.This is due to the fact, mentioning in Lemma 2, that ψ p,q (t) is considered zero ∀p, q, t to optimize T upp De (t) with the first priority order.By substituting t = t 0 s and zero phase shifts in (4), the cascade channel gain can be formulated as . The term exp(−j2πf c τ p,q (t 0 )) can negatively affect the cascade channel gain and makes it uncontrollable.Adopting the results of Corollary 1, in Fig. 4(b), we plot T upp De (t 0 ) versus RIS dimensions to compare the proposed method and the reversed one.It can be seen that the delay spread gap between our proposed method and the reversed priority is negligible.Furthermore, by extrapolating Fig. 4(b), we can see that for a = 80 m, T upp De (t 0 ) is around 8×10 −8 s.This is due to almost linear behavior of T upp De (t 0 ) as a function of a.Therefore, (10) can keep the delay spread upper bound controllable even for a large number of RIS elements.The claims for Fig. 4 5(a) shows that by increasing the value of a from 10 m to 20 m, the cascade channel gain can be increased by 27.7 dB.As can be seen, the fluctuation is less than 0.1 dB and can be ignored as it is negligible compared to the average value of the cascade channel gain.There is no significant benefit to consider the time-varying transmit signal to compensate this negligible fluctuation.In Fig. 5(b), we plot T upp De (t) versus time to compare our proposed method with the reversed approach.As can be seen, the gap is less than 5 × 10 −10 s and it is negligible.In addition, it is clear that (10) can make T upp De (t) controllable for different time slots.
V. CONCLUSION
In this letter, we proposed a new network architecture exploiting an aerodynamic HAPS-RIS to provide connection between the unconnected ground stations.We proposed a multi-objective optimization problem for designing the RIS phase shifts based on the predictable mobility of aerodynamic HAPS-RIS.We found a closed-form solution for the RIS phase shifts, adopting Pareto optimality, based on an RIS reference element.We maximized the channel gain, forced the Doppler spread to zero, and minimized the delay spread upper bound.By exploiting this closed-form Pareto optimal solution, we do not need to constantly track the channel variations and constantly update the RIS phase shifts by solving optimization problems.Finally, we showed the performance efficacy of our proposed closed-form solution through numerical simulation.
Fig. 2 .
Fig. 2. Geometrical mobility pattern of RIS elements based on the predictable mobility of the aerodynamic HAPS.
2 ,Fig. 4 .
and the delay spread upper bound is (a) Cascade channel gain versus RIS dimensions at t = t 0 .(b) Delay spread upper bound versus RIS dimensions at t = t 0 .
Fig. 5 .
Fig. 5. (a) Cascade channel gain versus time for different RIS dimensions.(b) Delay spread upper bound versus time for different RIS dimensions.
(a) and Fig. 4(b) are feasible for any t = t 0 based on the results presented in Figs.5(a) and 5(b).In Figs.5(a) and 5(b), we analyze the cascade channel gain and T upp De (t) versus time for different RIS dimensions, respectively.Fig. | 5,440.4 | 2023-01-25T00:00:00.000 | [
"Computer Science"
] |
High Genetic Diversity and Low Differentiation of Michelia coriacea (Magnoliaceae), a Critically Endangered Endemic in Southeast Yunnan, China
Michelia coriacea, a critically endangered tree, has a restricted and fragmented distribution in Southeast Yunnan Province, China. The genetic diversity, genetic structure and gene flow in the three extant populations of this species were detected by 10 inter-simple sequence repeat (ISSR) markers and 11 simple sequence repeat (SSR) markers. Examination of genetic diversity revealed that the species maintained a relatively high level of genetic diversity at the species level (percentage of polymorphic bands) PPB = 96.36% from ISSRs; PPL (percentage of polymorphic loci) = 95.56% from SSRs, despite several fragmental populations. Low levels of genetic differentiation among the populations of M. coriacea were detected by Nei’s Gst = 0.187 for ISSR and Wright’s Fst = 0.090 for SSR markers, which is further confirmed by Bayesian model-based STRUCTURE and PCoA analysis that could not reveal a clear separation between populations, although YKP was differentiated to other two populations by ISSR markers. Meanwhile, AMOVA analysis also indicated that 22.84% and 13.90% of genetic variation existed among populations for ISSRs and SSRs, respectively. The high level of genetic diversity, low genetic differentiation, and the population, structure imply that the fragmented habitat and the isolated population of M. coriacea may be due to recent over-exploitation. Conservation and management of M. coriacea should concentrate on maintaining the high level of genetic variability through both in and ex-situ conservation actions.
Introduction
The ultimate goal of conservation biology is to maintain the evolutionary potential of species by maintaining natural levels of genetic diversity [1][2][3][4]. To achieve this goal, understanding of the species' genetic diversity and genetic structure is necessary [5]. Within a species, genetic diversity of plant populations is largely determined by factors such as mating system, gene flow, genetic drift, evolutionary and life history [6][7][8]. Compared with widespread taxa, many rare and endangered species may become genetically depressed because of their small population size [9]. Habitat fragmentation exacerbates these problems, and has therefore been recognized as one of the greatest threats to the survival of many species in small and/or isolated populations [10]. Knowledge of genetic diversity and population age structure in rare plants not only enhances our understanding of population dynamics, adaptation and evolution, but also provides useful information for biological conservation [11,12].
The recently described Michelia coriacea (Magnoliaceae) is a critically endangered species, restricted to karst areas of Southeast Yunnan Province, China, bordering northern Vietnam [13][14][15]. This area has exceptionally high species diversity, and has been recognized as one of the most important biodiversity hotspots in China [16,17], but has been severely affected by habitat fragmentation due to human activities, causing populations of rare plants to become isolated from each other.
Currently, M. coriacea occurs as scattered individuals in evergreen woods on limestone slopes or roadsides at altitude of 1300-1600 m, and has a potential distribution range of approximate 4190 km 2 [18]. These individuals are distributed across only three extant populations: Daping and Shipen, Dongma and Tiechang, and Yang-Kai-Ping (defined as populations of DS, DT and YKP in this paper) (Figure 1). In DS population, some individuals of M. coriacea are over 100 years old, having been protected as symbols of -good fortune‖ by local villagers; however, they are surrounded and isolated by farmland and villages. In DT population, the maximum age of sampled trees was less than 30 years, and in YKP population the average tree age was 10-50 years [19]. All individuals in these populations are in habitats that have been degraded and fragmented due to heavy logging and vegetation destruction. Undoubtedly, an understanding of the genetic diversity of the critically endangered M. coriacea is urgently needed for planning its conservation action. The use of molecular analysis as an integral component in the conservation of rare and endangered species is becoming more widely used [5,11,20]. The molecular markers best suited for detecting genetic diversity should be relatively easy and inexpensive to detect and these markers should have evolved rapidly enough to be variable within populations. Inter-simple sequence repeats (ISSRs) and simple sequence repeats (SSRs) have been successfully employed and recognized as useful molecular markers in the analysis of a population's genetic variation, and for other purposes in various species [21,22]. In order to achieve valuable and comprehensive information on the genetic diversity and population differentiation of M. coriacea, we employed ISSR and SSR markers to address the key issue: what are the levels of genetic diversity and differentiation within, and among, populations of M. coriacea? In addition, implications for conserving this critically endangered species are discussed.
Genetic Diversity and Genetic Structure Investigated by ISSR Markers
A total of 110 bands were presented from the 10 selected primers among 58 individuals of the three populations. Of these, 106 (96.36%) were polymorphic (Table 1). At the population level, the percentage of polymorphic bands (PPB) within each population ranged from 60.00% (DS) to 81.82% (DT) with an average of 72.73% (Table 2). Populations YKP, DT and DS contained nine, eight and zero private bands respectively. Assuming Hardy-Weinberg equilibrium, the mean Nei's gene diversity (H) was estimated to be 0.226 within populations and 0.283 at the species level, and the average Shannon information index (H pop ) within populations and species levels was 0.345 and 0.436, respectively (Table 2). In the STRUCTURE analysis of ISSR data, the value of ΔK was 903.1 for K = 2, 412.9 and 233.5 for K = 3 and K = 4, respectively, and <155 for all values of K higher than 4. Therefore K = 2 best represents the data. Structure analysis run with K = 2 showed a clear separation between YKP and the other two populations ( Figure 2).
The two-dimensional PCoA plot ( Figure 3) shows that the first principal coordinate accounts for 39.59% of total variation and separates the YKP population from the DS and DT populations. The second principal coordinate (18.88% of total variation) separated most individuals from DS from those of DT, but there was some overlap, with certain individuals from DS grouping with those from DT. AMOVA analysis among populations showed that 77.16% variation occurred within populations (Table 3). Deduced from the G st value, the level of gene flow (N m ) was estimated at 1.086 (N m > 1), indicating no differentiation among populations.
Genetic Diversity and Genetic Divergence Based on SSR Markers
A total of 45 alleles at 11 SSR loci were revealed across 58 individuals of M. coriacea. The number of alleles per locus ranged from 2 at locus MC64 to 7 at locus MC49, with an average of 4.091 per locus (Table 4). At species level, the percentage of polymorphic loci was 95%. Values for observed heterozygosity (H o ) ranged from 0.172 at locus MC45 to 0.759 locus MC8 (average 0.412), and for expected heterozygosity (H e ) from 0.247 at locus MC45 to 0.782 at locus MC8 (average 0.505). PIC values ranged from 0.417 to 0.750, with an average value of 0.502 per locus (Table 4). Table 4. SSR primers used for DNA amplifications, locus, repeat motif, primer size (bp), PCR annealing temperature (T a ), number of alleles (A), observed heterozygosity (H o ), excepted heterozygosity (H e ) and polymorphism information content (PIC). At the population level, the percentage of polymorphic loci per population ranged from 72.73% (DS) to 91.90% (DT) with an average of 82.15%. The average value for H o was 0.425 (ranging from 0.335 to 0.533) and that for H e was 0.470 (ranging from 0.429 to 0.498). Unlike private bands detected from ISSRs, private SSR alleles were present in all three populations: DS, DT and YKP contained 2, 3, and 2 private alleles respectively ( Table 5). The estimated genetic differentiation among populations (F st ) was 0.090 and the estimated level of gene flow (N m ) among populations was 2.543. AMOVA analysis showed that 13.90% of the variance existed among populations (Table 3). In the STRUCTURE analysis of SSR data, the value of ΔK was 61.2 for K = 3, 59.3 for K = 4, and <51 for all values of other K. The scores for K = 3 and K = 4 were very similar; therefore we performed runs with each K value. However, results either from K = 3 or K = 4 failed to differentiate between populations (Figure 4a,b). Similarly, PCoA analysis did not clearly separate the populations. Individuals from DS tended to score lower for coordinate 1 (accounting for 25.57% of total variation), whereas individuals from DT and YKP tended to have higher and lower scores respectively for coordinate 2 (accounting for 19.84% of total variation). However, all three populations overlapped for both coordinates, meaning that discrimination based on this data and PCoA analysis of this data was not possible ( Figure 5).
High genetic diversity can be maintained in rare plants that are adapted to an existence comprising small isolated populations (i.e., are naturally rare; [26]). However, high genetic diversity within very small populations can also be observed if very recent population size reduction has occurred, especially where that reduction has occurred within a generation or two for the species concerned; in such cases the surviving individuals are effectively samples from the larger population that existed before [11,20,27,28]. In such cases, unlike naturally rare plants, a dramatic loss of diversity in future generations is to be expected via genetic drift.
Habitat destruction appears to be recent for M. coriacea, as evidenced both by witness accounts of local people and the literature record. The species was first discovered in 1986 and then described as a new species in 1987 [29]. Initially, it was known from four localities in SE Yunnan, i.e., DS, DT, YKP and a fourth, Guangnan [29]. Despite repeated searches of the Guangnan area between 2004 and 2006, we could not find any plants of M. coriacea, indicating its extinction from Guangnan within the last 30 years. However, there is no evidence for range contraction during the same period for the three extant populations of M. coriacea in this study. In addition, our molecular data clearly indicated apparently high levels of gene flow for M. coriacea (N m = 1.086 for ISSR data, and 2.543 for SSRs), which would account for low differentiation between populations (F st = 0.090 for SSRs and G st = 0.187 for ISSRs). It should be noted that indirect estimates of Nm values must be interpreted with caution [30,31], and this data therefore should be viewed as general indicators of the magnitude of genetic exchange.
Low genetic differentiation and high gene flow between populations can result from long-distance gene dispersal either by pollen or by seed [10]. Long range seed dispersal has been implicated in maintaining links between populations in the rare Michelia formosana [32]. M. coriacea also has animal-dispersed seeds, and insect pollination may also have contributed to gene flow over distance in this species. However, seed or pollen flow appears unlikely between the modern, isolated populations. Therefore, gene flow between plants at the three extant sites via seeds and/or pollen was probably extensive and relatively unhindered in the past, before the populations became isolated.
Variation Between Populations
Populations DT (PPB = 81.82%, PPL = 91.90%) and YKP (PPB = 76.36%, PPL = 81.82%) contained relatively higher genetic diversity than did population DS (PPB = 65%, PPL = 72.73%). Because trees at DS are usually >100 years old, whereas those at DT and YKP are < 50 years old [19], range contraction within the past 50 years can be eliminated as a cause for this difference. Instead, patterns of gene flow, migration and population structure prior to range contraction can impact on modern populations [10], and may explain differences in genetic diversity between these M. coriacea populations. The fact that heterozygosity for SSR alleles was slightly higher in DS than the other two populations indicates that lower diversity here had not yet brought about inbreeding when the current generation was founded. However, the absence of seedlings from all three populations has prevented us from assessing whether range contraction will lead to an immediate inbreeding effect in the next generation.
It should be noted that there are some discrepancies between the results (e.g., levels of examined population's heterozygosity, STRUCTURE and PCoA clusterings) from different markers, suggesting that the manner of polymorphism differs because of marker specificity. Contrasting patterns are commonly found when multiple markers are used to detect genetic structure, as for example in soybean [33], cashew [34], olive [35] and Ficus carica [36]. Such contrasts between markers could be due to any or all of the following causes: (1) these two marker systems sample genomic regions with different evolutionary modes; (2) ISSR is based on consensus markers, whereas the microsatellite markers used in this study were specifically developed for M. coriacea; and (3) ISSR and SSR have different modes of inheritance (i.e., dominant versus codominant). In addition, some marker loci could be under selection, or tightly linked to loci under selection, meaning that a small minority of markers for either system might not be truly neutral, although neutrality of both ISSR and SSR markers is often assumed. To test directly for these possibilities is beyond the scope of the current study, but using two types of marker, and interpreting results of each with a degree of caution, can give a reasonable assessment of the true genetic situation for the examined populations. In the current study, population YKP was clearly separable from the other populations based on ISSR, but not SSR, markers. Hence, YKP might be the most distinct population, but SSR data indicates that this conclusion must be treated with caution.
Conservation
The maintenance of a maximal amount of genetic diversity is one of the major objectives for conserving endangered and threatened species [11], and the loss of genetic variation may largely limit adaptability to changing environments [37,38]. In the case of M. coriacea, AMOVA results shows that each of the three examined populations currently maintains a high level of within-population genetic diversity. Therefore, the endangered status of this species currently reflects a small number of extant individuals and very poor natural regeneration. Conservation measures should therefore focus on establishing large numbers of seedlings, both in and ex situ, involving many different individuals as parents, to preserve as much as possible of the existing genetic diversity in subsequent generations. Given the current lack of recruitment, establishing seedlings in situ will require management and co-operation from local communities.
In addition, the observed high diversity among relatively few adult trees also places a conservation value on each individual adult tree as a mini-reservoir of genetic diversity, in a way that would not apply for genetically depauperate, relict populations. Hence, conservation efforts to preserve each existing tree would also have a value in maintaining genetic diversity.
Conservation strategies may differ between sites. In DS, the oldest trees are protected by villagers as good luck symbols [19], so encouraging the locals to grow seedlings from these trees to bring luck to future generations might be effective. This is, however, the least diverse population. Individuals from DT and YKP occur in secondary vegetation and enjoy no protection of any kind [19]. Of these two populations, YKP might be the most distinct and hence of higher conservation value deduced from ISSR results.
Within each population, however, SSR data also indicated clusters of individuals more similar to one another than they were to others. For example, individuals DS1 and DS2 were genetically highly distinct from others is DS, and likewise YKP3, YKP4 and YKP6 were genetically distinct from others in YKP. Therefore, seedlings with a higher degree of heterozygosity could be generated by crossing individuals that, according to our results, are genetically dissimilar.
In the short term, conservation priorities must be to preserve as many as possible of the surviving individuals. In the medium term, new generations must be ensured, and in the long term, genetic health must be maintained. Therefore, in-situ efforts to conserve remaining habitats need to be combined with ex-situ research on seed propagation, with a view to establishing a new generation of plants both in cultivation and the wild. Should in-situ conservation be more successful at one site than others, then seedlings from all three sites could be established at the protected site.
The Study Species and Sampling Procedures
Michelia coriacea is diploid (2n = 38) [39]. The mature trees can reach up to 20-30 m in height and 50-60 cm in diameter [18,19,40], and live for at least 100 years. The species bears scented whitish or creamy yellow flowers with 6-7 tepals which are in anthesis from February to April [13,14].
The plant sampling and material collection were carried out in March and July 2007, and in August 2008. Prior to this, comprehensive field surveys had located only 15, 33 and 53 individuals at DS, DT and YKP, respectively [19]. It should be noted that seedling regeneration is not good, because no seedlings were detected in DS, and only a few seedlings were found in the other two populations. Therefore, we sampled all 15 mature individuals in DS, whereas 21 and 22 mature individuals were sampled from DT and YKP respectively (Figure 1 and Table 6). Healthy, young leaves were collected from each sampled individual, and dried in silica gel for subsequent DNA extraction. Table 6. Known populations of Michelia coriacea examined for ISSR and SSR analyses with population code, locality, altitude, location coordinates, numbers of individuals sampled, sample code and voucher numbers.
DNA Extraction and PCR Amplification
Total DNA was extracted from the dried leaves following the modified CTAB method described by Doyle [41]. The purified total DNA was quantified by gel electrophoresis and its quality verified by spectrophotometry. DNA samples were stored at −20 °C.
One hundred ISSR primers from the University of British Columbia (UBC) were initially screened on eight randomly selected individuals for PCR. Of these, ten primers which consistently amplified well, and produced polymorphic bands, were selected for analyzing (Table 1). To ensure repeatability of amplification and scoring, all loci were amplified 2 times independently and run on separate gels. PCR reactions were performed in a reaction volume of 15 μL, containing 9.92 μL ddH 2 O, 0.4 μL formamide, 1.5 μL 10× PCR buffer (Mg 2+ Plus), 0.25 mM of each dNTP, 0.9 U Taq polymerase, 0.6 μM primers, 0.6 μL template DNA (approximate 50 ng). Cycling conditions consisted of an initial denaturation step at 97 °C for 4 min, followed by 35 cycles of denaturation at 94 °C for 1 min, annealing temperature for 1 min (Table 1), extension at 72 °C for 1.5 min, and finally 10 min at 72 °C for final extension. Amplification products were detected by using 1.6% agarose gel stained with 0.5 pg/mL ethidium bromide, and were electrophoresed in 1× Tris-borate-EDTA (TBE) buffer (pH 8.0) at 85 V for 2 h. The bands were visualized under UV light.
SSR genotyping was performed according to the methods described in Zhao et al. [40] (Table 2); then final extension at 72 °C for 7 min. The amplified product was then separated on 6% denaturing polyacrylamide sequencing gels using silver staining. A 20 bp DNA ladder standard (Fermentas) was used as standard for scoring.
ISSR Data
Amplified ISSR fragments showing consistent amplification were scored manually as present (1) or absent (0) for each sample to form a binary matrix. The POPGENE program Version 1.31 [42] was employed, assuming Hardy-Weinberg equilibrium, to obtain the genetic diversity parameters within each population: percentage of polymorphic bands (PPB), private bands (referring to bands found only within one population), observed allele number per locus (A o ), effective allele number per locus (A e ), Nei's gene diversity (H) and Shannon index of diversity (H pop ) [31]. Genetic diversity parameters were also calculated both at species and population levels.
Genetic differentiation among populations was analyzed using Nei's gene diversity statistics [27]. The gene flow was estimated by using the equation N m = (1 − G st )/4G st , where N m is the number of migrants per generation [43]. To infer population structure and assign individuals to populations, the program STRUCTURE version 2.3.1 was used [44], following the methods described by Ma et al. [45]. We adopted the admixture model with correlated allele frequencies. No prior knowledge of the species was included in the analyzed data set. To determine the optimal number of groups (K), we ran STRUCTURE with K varying from 1 to 10, with five runs for each K value. Previous studies have found that, in many cases, the posterior probability for a given K increases slightly, even after the real K is reached [46]. Therefore, we used an ad hoc statistic, ΔK, to determine the true value of K [47]. Our parameters were 100,000 burn-in periods and 10,000 MCMC repetitions after burn-in. Furthermore, principal co-ordinate analysis (PCoA) in GenAlEx 6.0 was employed to examine further the genetic relationships among detected populations on the basis of the same ISSR data [48].
The AMOVA (analysis of molecular variance) was performed through Arlequin 3.0 program [49] to describe variance components and their significance levels for variation among individuals within and among the populations.
SSR Data
To estimate overall genetic diversity, the following measures were calculated for each SSR primer locus and each population using POPGENE program Version 1.31 [42]: observed allele number per locus (A o ), effective allele number per locus (A e ), observed heterozygosity (H o ), expected heterozygosity (H e ), and the number of private loci, i.e., those found only in a single population. The same program was used to test for deviations from Hardy-Weinberg equilibrium (HWE) and pair-wise linkage disequilibrium (LD). To measure the marker polymorphism, the polymorphism information content (PIC) for each SSR Marker was calculated according to the formula PIC = 1 − ΣP i 2 , where P i is the frequency of the allele for each SSR marker locus [50].
Wright's F st was estimated by a weighted analysis of variance with the GENEPOP 4.0 program [51]. In addition, the amount of gene flow (N m ) among populations was estimated by an indirect genetic differentiation method based on F st value, N m = (1 − F st )/4F st . Partitioning of genetic variation within and among populations was further performed by analysis of molecular variance using the Arlequin 3.0 program [49]. For these SSR data, genetic relationships among three populations were also examined using STRUCTURE and GenAlEx 6.0 with the same methods described as ISSR markers above.
Conclusions
In summary, our results from both ISSR and SSR markers show that a relatively high level of genetic diversity and low levels of genetic differentiation among populations exist in the critically endangered plant Michelia coriacea. Furthermore, Bayesian model-based STRUCTURE and PCoA analysis could not reveal a clear separation between populations, although YKP was differentiated from the other two populations by ISSR markers. We therefore presume that the fragmented habitat and the isolated populations of M. coriacea may be due to very recent contraction of its range. Based on this data, conservation strategies for this critically endangered species were also proposed. | 5,402.6 | 2012-04-01T00:00:00.000 | [
"Biology"
] |
Measurability of phi, omega and rho mesons via di-electron decays in high-temperature states produced in heavy-ion collisions
We discuss measurability of phi, omega and rho mesons via di-electron decays in high-temperature states produced in heavy-ion collisions, equivalently at different pion multiplicities per heavy-ion collision dN_{pi^{0} + pi^{+-}}/dy = 1000 and 2700 intended for the most central Au+Au collisions at sqrt(s_{NN}) = 200 GeV (RHIC) and the most central Pb+Pb collisions at sqrt(s_{NN}) = 5.5 TeV (LHC), by evaluating the signal-to-background ratios and the statistical significance for the idealized detection system in the numerical simulation. The simulation study provides a guideline to be applicable to a concrete detector design by focusing on only the key experimental issues relevant to the measurement of di-electrons. The results suggest that there are realizable parameter ranges to measure light vector mesons via di-electrons with the reasonable significance level, even in the highest multiplicity case.
I. INTRODUCTION
High energy heavy-ion collisions provide a unique opportunity to liberate quarks and gluons from nucleons and cause the phase transition from nuclear to quarkgluon matter. Heavy-ion collisions have advantage of studying the QCD phase diagram especially in hightemperature and low-baryon density domains [1][2][3][4][5].
The mass modification [6][7][8][9][10] of light vector mesons such as φ, ω and ρ is an important signature of the QCD phase transition, because their masses are sensitive to chiral condensate qq . Chiral condensate is one of the most prominent order parameters characterizing the QCD phase structure. The behavior of the chiral condensates near the critical temperature is studied by several kinds of lattice QCD calculations [11][12][13][14][15][16][17]. The light vector mesons can be candles to study the properties of quark-gluon matter produced in heavy-ion collisions. The mass modification inside quark-gluon matter is potentially visible because their lifetimes are supposed to be comparable to duration of the thermal equilibrium state, unless the interactions with hadronic matter in the later stage dominate. In addition, electron-positron pairs, which are referred to as "di-electrons", decaying from light vector mesons are considered to be a clear probe, since charged leptons carry the original information in the early stage without perturbation by hadronic matter via strong coupling in the relatively later stage of the system evolution.
In low-temperature and high-baryon density domains, the symptoms of the mass modification are intensively discussed [18][19][20][21]. In high-temperature and low-baryon density domains, the enhancement of the di-electron yield is observed in the mass range below 0.7 GeV/c 2 in central Au+Au collisions at √ s N N = 200 GeV [1,22], and it indicates a deviation from the di-electron spectrum in p+p collisions [1,[22][23][24]. Therefore, quantitative studies of * Electronic address<EMAIL_ADDRESS>the mass spectra including the light vector mesons grow in importance to understand the phenomena in the high temperature domain. Furthermore, the measurement of the light vector mesons in higher-temperature states, at the LHC energy, would give a deeper insight into these phenomena.
The detection of di-electrons is, however, challenging from the experimental point of view, because a heavyion collision event produces a huge number of particles. The experiments at RHIC and LHC energies report that O (100-1000) particles are produced at midrapidity in a heavy-ion collision [25][26][27]. The produced particles are dominated by π ± and π 0 mesons. The production cross sections of pions are approximately hundred times larger than that of φ meson, and ten times larger than that of ω/ρ meson [81]. In addition, the branching ratio (BR) to a di-electron pair is on the order of 10 −4 for φ meson and 10 −5 for ω/ρ meson. Therefore, the measurement of the signal di-electrons requires state-of-the-art technology to identify electrons and positrons among a large amount of background hadrons by combining information from Ring Imaging Cherenkov counters [28], dE/dx [29,30], Time of Flight [31], most notably, the Hadron-Blind Detector [32] and so forth.
In addition to the small yield of di-electrons from the light vector mesons, the difficulty of the measurement is severely caused by background contaminations from processes other than the light vector meson decays. Dominant sources of such backgrounds are listed as follows, 1. Dalitz decays π 0 → γe + e − and η → γe + e − , 2. pair creations by decay photons from π 0 and η meson, 3. semi-leptonic decays from hadrons, 4. charged hadron contaminations by electron misidentification.
The key issues are in the increase of combinatorial pairs from different sources. The amount of these pairs has approximately quadratic dependence on multiplicity because combination between all possible electrons and positrons must be taken, while the number of true dielectron pairs from the light vector mesons approximately linearly scales with multiplicity [82]. Detector upgrades are planned in several experiments at RHIC and LHC. For instance, the ALICE experiment at LHC plans to upgrade the internal tracking system with the relatively low detector materials and enhance low-p T tracking capability [33]. These upgrades are expected to suppress the backgrounds from the photonconversion and Dalitz decay processes, and improve the Time-of-Flight measurement for the hadron identification. Moreover, the tracking system will improve the secondary vertex resolution. It allows to separate the backgrounds from the semi-leptonic decay of open charms. The key issues from (1) to (4) above are actually relevant to the items of the upgrade plan.
The measurement of light vector mesons is still important but challenging. The detector system applied to high-energy heavy ion collisions is, in general, multipurpose. Thus it is not necessarily optimized only for the measurement of light vector mesons. In this situation, unless the key issues above are simultaneously considered in advance of the other irrelevant design issues, for instance, only installing a state-of-the-art particle identifier might not be sufficient. In order to prove the performance of an upgraded detector system, a very detailed detector simulation is required. It is usually a time-consuming process to conclude whether a reasonable signal-to-background ratio is guaranteed with the upgraded complex detector system or not. Thus the applicability of such a detailed simulation is rather limited to a specific detector system. If one knows the key issues, however, considering only these issues in the idealized case would provide a more general guideline applicable to any complex detector system.
In this paper, therefore, we do not focus on details of individual detector designs. Instead, we investigate the necessary conditions which must be minimally satisfied in advance in order to gain reasonable signal-to-background ratios with idealized detector systems for given multiplicities, collision species and energies in heavy-ion collisions.
For this purpose, we first give the estimates on the production yields of light vector mesons and the other hadrons producing di-electrons in the final state. They are applied to p+p and A+A collisions at the RHIC and LHC energy regime, in concrete, dN π 0 +π ± /dy = 3, 6, 1000 and 2700 in p+p 200 GeV, p+p 7 TeV, Au+Au 200 GeV and Pb+Pb 5.5 TeV, respectively. We then implement the key physical background processes in the numerical simulation including the Dalitz decay and the photon conversion as well as the semi-leptonic decay of hadrons under the key experimental conditions: the amount of the detector material, the charged pion contamination, the geometrical acceptance, the electron tagging efficiency, the measurable momentum cutoff and the smearing effect by momentum resolution. These experimental conditions are parameterized and implemented into the simulation. The flowchart of the simulation is explained in Section II. Starting from a set of the idealized baseline parameters, we explore the expected signalto-background ratios and the statistical significance as a function of the individual experimental parameters for given pion multiplicities in high-temperature states. The results of the simulation study are provided in Section III. The non-trivial residual effects, though some of them depend on a specific detection system, are discussed in Section IV. In Section V, we conclude that we can find reasonable parameter ranges to measure the light vector mesons via di-electrons even at dN π 0 +π ± /dy = 2700.
II. NUMERICAL SIMULATION FOR IDEALIZED DETECTION SYSTEMS
The numerical simulation is developed to estimate the signal-to-background ratios and the statistical significance of light vector mesons via di-electron decays in heavy-ion collisions. Instead of directly simulating the multi-particle production with detailed dynamics in heavy-ion collisions, high multiplicity states are first represented by the form of the total pion multiplicity dN π 0 +π ± /dy. The production of the relevant particles other than pions is determined based on the individual production cross sections relative to that of pions as a function of the transverse momentum p T . They are evaluated by the measured data points, or the extrapolation via the proper scaling for missing data points. In addition, the key experimental parameters for the di-electron measurement are set as the inputs. As the idealized baseline parameters on the experimental conditions, we choose the following set of parameters: 1. photon conversion probability P cnv : 1 %, 2. rejection factor of charged pions R π ± , which is defined by the inverse of the probability that charged pions are identified as electrons: 500, Some of experimental parameters are correlated in fact, but these correlations are neglected in this simulation. Figure 1 shows the flowchart of the numerical simulation. The step (1) sets the input parameters above. In the step (2), primary particles are generated with the weights of the invariant p T spectra. The p T spectra are provided by the experimental data and the proper scaling for missing data. The details of the input p T spectra for each are explained later. Rapidity y of a particle is uniformly generated in |y| ≤ 0.5 [25,35]. Primary particles branch into subsequent decay processes according to their branching ratios. The branching ratios are summarized in Table II of C. φ, ω and ρ mesons decay into di-electrons through the two-body decay process in the step (3). The phase space of di-electrons from the light vector mesons is determined by the Gounaris-Sakurai model [36]. π 0 and η mesons branch into 2γ or the Dalitz decay process (γe + e − ) in the step (3) or (4). The decaying γ's are subsequently converted into di-electrons with the given photon-conversion probability in the step (5). Kinematics of di-electron in the photonconversion process, that is, energy and scattering angle, are simulated by the well-established GEANT algorithm [37][38][39]. All photon-conversion points are fixed to the primary vertex points [83]. The phase space of Dalitz decaying di-electrons is determined by the Kroll-Wada formula [40,41]. The detailed formula is expressed in D.
Charged kaons decay into electrons through the threebody decay process in the step (6). In the step (7), electrons and positrons from open charms are directly generated to be consistent with the input p T spectra of single electrons [84] as shown in Fig.2. They are randomly generated in azimuth with the branching ratio of 9.5 % [46]. The correlation of a di-electron pair originating from the open charm production is neglected in this simulation. The effect on the correlation is discussed in Section IV. Charged pions are identified as electrons with the given probability corresponding to the rejection factor of charge pions in the step (8). At the final stage of the simulation, final-state electrons are filtered by the geometrical acceptance, the electron tagging efficiency and p T threshold in the step (9). The algorithms of decaying processes mentioned above are developed based on the EXODUS simulator [47]. The dN π 0 +π ± /dy and the invariant p T spectra are applied to the simulation taking the given collision species and energies into account. The input dN π 0 +π ± /dy is estimated by the measured dN ch /dy [25][26][27]. dN π 0 +π ± /dy = 3, 6 and 1000 are set for the simulation of p+p 200 GeV, p+p 7 TeV and central Au+Au 200 GeV collisions, respectively. For the simulation of central Pb+Pb 5.5 TeV collisions, the dN π 0 +π ± /dy is estimated by the extrapolation of the scaling curve as a function of collision energy [27]. The extrapolated dN π 0 +π ± /dy corresponds to 2700.
The input p T spectra for the p+p 200 GeV simulation are determined by the measured data points with the fits in the panel (a) of Fig.2. The Tsallis function, whose properties are explained in A, is used as the fitting function.
The invariant p T spectra in central Au+Au collisions are well known to be suppressed and scaled by the number of participant nucleons N part in the high p T region. The scaling parameter N part is calculated by the Monte Carlo simulation based on the Glauber model [48,49]. The N part scaling helps to extend the p T spectra in p+p collisions to those in heavy-ion collisions, even if there is lack of data points. Therefore we can determine the input p T spectra in the wide range by combining the existing data points with the scaling properties. The panel (b) of Fig.2 shows the data points and the scaling curves in Au+Au 200 GeV at the centrality class of 0-10 % [85]. The dotted curves are obtained by the N part scaling. The spectrum shape is assumed to be the same as that of p+p 200 GeV. These curves are reasonably consistent with the data points. The solid curves on the data points of φ mesons, charged kaons and single electrons from heavy flavor decays are obtained by directly fitting with the Tsallis function. For charged kaons, the Tsallis parameter q is fixed because of the missing data in the high p T region.
The panel (c) of Fig.2 shows the invariant p T spectra in p+p 7 TeV. The p T spectra of π 0 , η, φ and single electrons are determined by the data points and the fits. The dotted curves show the p T spectra of the other hadrons. The spectrum shape is estimated by the m T scaling based on the π 0 data points, where m T = p 2 T + m 2 0 and m 0 is the rest mass of a particle. Their absolute production cross sections are estimated by the inclusive ratios between pions and the other hadrons in p+p 200 GeV. The invariant p T spectra in p+p 7 TeV are commonly used for the simulations of p+p 7 TeV and Pb+Pb 5.5 TeV, since the relative production cross sections between pions and the other hadrons are expected to be common for both collision systems, as long as the particle production between 7 TeV and 5.5 TeV has little dependence on the collision energy. Table I in B summarizes these production cross sections and inclusive yields over all p T ranges in p+p 200 GeV, p+p 7 TeV and Au+Au 200 GeV at the centrality class of 0-10 %. [50] and (π + + π − )/2 [51,52] are obtained by the simultaneous fitting. The curves on the data points of (K 54] are also obtained by the simultaneous fitting. The star symbols show η → γγ [55] and η → π 0 π + π − [56]. The open diamonds show ρ → π + π − [57]. The triangles show ω → e + e − , π 0 π + π − and π 0 γ [53]. The squares show φ → e + e − and K + K − [53]. The asterisks show single electrons from heavy flavor decays [46]. (b) The invariant pT spectra in Au +Au 200 GeV at the centrality class of 0-10%. The dotted curves are scaled by the Npart and assumed to be the same spectrum shape as that of p+p 200 GeV. The scaling curves are consistent with the data points of pions [58][59][60] , η [61] and ω [62], respectively. The solid curves are the fitting results to the data points of K ± [60], φ [63] and single electrons from heavy flavor decays [64]. For K ± , the Tsallis parameter q is fixed since there is no data point in the high pT region. (c) The differential cross section in p+p 7 TeV. The data points of π 0 /η [65], φ [66] and single electrons from heavy flavor decays [67] are shown in this figure. Tsallis fitting curves are depicted as the solid curves. The dotted curves of ρ, ω and K ± are obtained by assuming the same spectrum shape of π 0 and normalizing the individual production ratios with respect to pions in p+p 200 GeV. The invariant mass spectrum of photon-conversion pairs obeys dynamics of the pair-creation process in materials. The invariant mass spectrum of Dalitz decaying pairs has a character whose leading edge is the summation of masses of decay products and the distribution continues up to their parent masses. The mass spectrum of cc → e + e − is reconstructed by randomly pairing dielectrons in azimuth. The inclusive invariant mass spectra are shown in Fig.5. The component of signal pairs, combinatorial background pairs and all background pairs is superimposed in the figures. The fraction of individual components depends on the given multiplicities, collision species and energies. The peaks of the light vector mesons are clearly seen at the multiplicities in p+p collisions, but hardly seen above dN π 0 +π ± /dy = 1000, though the statistical significance is not necessarily small. The quantitative evaluations of the signal-to-background ratios and the statistical significance are discussed in the next section.
III. SIGNAL-TO-BACKGROUND RATIOS AND THE STATISTICAL SIGNIFICANCE
The feasibility to measure φ/ω/ρ → e + e − is evaluated by the signal-to-background ratios and the statistical significance in the signal mass region. The signal mass region for each meson is defined as the invariant mass range of M φ,ω,ρ ± 3 × Γ 2 φ,ω,ρ + σ 2 φ,ω,ρ , where M φ,ω,ρ is the mass center and Γ φ,ω,ρ is the decay width. M φ,ω,ρ and Γ φ,ω,ρ are cited from the particle data group [68]. The mass resolutions σ φ,ω,ρ are calculated by the single particle simulation [86] and result in 7.6, 5.7 and 5.6 MeV/c 2 for φ, ω and ρ mesons, respectively. Figure 6 and 7 show the signal-to-background ratios S/B as a function of the experimental parameters in central Au+Au collisions at √ s N N = 200 GeV (dN π 0 +π ± /dy = 1000) and central Pb+Pb collisions at √ s N N = 5.5 TeV (dN π 0 +π ± /dy = 2700), respectively. Only one parameter is changed by fixing the other parameters at the baseline values: P cnv = 1 %, R π ± = 500, ǫ acc = 100 %, ǫ tag = 100 %, p th T = 0.1 GeV/c and The top-left figure shows the S/B as a function of photon-conversion probability P cnv . The minimum amount of detector materials typically corresponds to P cnv = 1-2 %, because photon conversions from the beam pipe and the first layer of the innermost detector are unavoidable in any detector system, even though electron trajectories coming from the off-axis point are rejected by tracking algorithm.
Thus the tendency below P cnv = 10 % is important for the detector system with typical amount of the materials.
The dependence on the rejection factor of charged pions R π ± is shown in the top-right plot of Fig.6 and 7. Typical devices for the electron identification have the rejection factor of a few hundreds in the stand-alone operation [23,28,69,70], although it varies by the principle of detection. Therefore, the information in the range of R π ± = 100-1000 are useful. The S/B can be changed by a factor of 3-5 for φ/ω meson in this range.
The bottom-left figure shows the S/B as a function of the azimuthal acceptance ǫ acc . The S/B depends on decay kinematics of the signal particles and the backgrounds. Therefore the geometrical configuration in azimuthal coverage as well as the absolute acceptance in azimuth should be taken into account. Two types of geometrical configurations are considered in this simulation. Type I simply covers the azimuthal range of 0 ≤ φ ≤ φ 1 . Type II covers two separated domains which are symmetrically arranged in azimuth with respect to the collision point, that is, the coverage is set to 0 ≤ φ ≤ φ1 2 and π ≤ φ ≤ π + φ1 2 . Both of them have the same total acceptance in azimuth with different geometry. The difference between the two geometrical configurations increases in the case of the imperfect coverage. If ǫ acc is 40 %, for instance, the S/B differs by a factor of 3-4 for φ/ω meson depending on the detector geometry.
The statistical significance S/ √ S + B depends on the square root of the number of events, in other words, de-pends on available luminosity in experiments. Depending on the available statistics in the specific collision centrality and the detector conditions, we can evaluate whether a detection system is able to measure the light vector mesons with a reasonable statistical significance or not.
IV. RESIDUAL EFFECTS
The signal-to-background ratios and the statistical significance of the light vector mesons are evaluated with the idealized detection system so far. In this section, we discuss the non-trivial aspects originating from the real data analysis and the correlations in the open charm production. As the other issues beyond the scope of the numerical simulation, we mention the track reconstruction algorithm bias, the correlation between the electron identification and the rejection of charged hadrons, and the fiducial effect on the acceptance to charged particles in the magnetic field. The studies of this section are performed by simulating 5 M events in Au+Au 200 GeV and Pb+Pb 5.5 TeV with the baseline parameters set: P cnv = 1 %, R π ± = 500, ǫ acc = 100 %, ǫ tag = 100 %, p th T = 0.1 GeV/c and σ ref pT = (0.01 · p T ) 2 + (0.0056) 2 GeV/c. The numerical simulation so far is performed under the assumption that we know the exact number of signals and backgrounds. In the real data analysis, however, the source of any electron cannot be identified. Therefore all electrons and positrons are combined into pairs and reconstructed into the invariant mass. The mass distribution of pairs from one source ("true pairs") is extracted by subtracting that of pairs from different source ("combinatorial pairs") statistically. The mass shape of combinatorial pairs is estimated by mixing an electron in an event and a positron in another event ("event mixing") [71,72]. The mass distribution of event-mixing pairs is normalized by 2 N ++ N −− /N mix +− , where N ++ , N −− and N mix +− are the numbers of positron-positron pairs, electron-electron pairs and event-mixing electronpositron pairs, respectively. This normalization factor is valid as long as electrons and positrons are produced as pairs and they have the same acceptance [1]. The mass distribution of true pairs includes the light vector mesons and the other sources. The contributions from the light vector mesons and the background sources are separately estimated by the fits based on the linear combination between the Breit-Wigner function convoluted with the Gauss function and an empirical function. We apply a series of procedures used in the real data analysis to the simulated data and evaluate how much the signal-to-background ratios change by applying these procedures. The top plots of Fig.14 show the comparison of the invariant mass spectra between the combinatorial pairs and the event-mixing pairs. The comparison is performed in central Au+Au collisions at √ s N N = 200 GeV (dN π 0 +π ± /dy = 1000) and central Pb+Pb collisions at √ s N N = 5.5 TeV (dN π 0 +π ± /dy = 2700), respectively. The ratio between the number of combinatorial pairs and that of event-mixing ones is close to unity within a few % of statistical fluctuations below the mass of 1.0 GeV/c 2 in both collision systems as shown in the middle plots of Fig.14. Therefore the event-mixing pairs in the real data analysis can provide the reliable baseline representing the combinatorial pairs which is known only at the simula-tion study [87]. The bottom plots of Fig.14 show the invariant mass distribution after subtracting the eventmixing distribution. The linear combination between the Breit-Wigner function convoluted with the Gauss function and a first-order polynomial function is used as the fitting function. In the mass range of φ meson, we use In the mass range of ω/ρ meson, we use where N ω and N ρ are the inclusive yields of ω and ρ meson, respectively. The absolute values of the inclusive yields are fixed to the measured values listed in Table I where the mass center M φ,ω,ρ and the width Γ φ,ω,ρ of the light vector mesons are fixed to their intrinsic values [68], whereas the mass resolution σ is a free parameter. A, B and C in the equations are normalization factors. The fitting ranges are from 0.9 to 1.2 GeV/c 2 for φ meson and from 0.6 to 0.9 GeV/c 2 for ω/ρ meson. The number of the light vector mesons is counted by the integration of the convolution function over the signal mass region. The definition of the signal mass region is mentioned in Section III. The signal-to-background ratios estimated by fitting are 8.4 × 10 −2 , 2.0 × 10 −2 and 4.1 × 10 −4 for φ, ω and ρ mesons, respectively, in central Au+Au collisions at √ s N N = 200 GeV (dN π 0 +π ± /dy = 1000) with the baseline parameter set. The differences of the signal-to-background ratios are 4.9 % (φ), 7.4 % (ω) and 8.8 % (ρ) compared to those by the simple counting of the simulated true pairs. In central Pb+Pb collisions at √ s N N = 5.5 TeV (dN π 0 +π ± /dy = 2700), The signal-tobackground ratios are 1.7 × 10 −2 , 6.7 × 10 −3 and 1.7 × 10 −4 for φ, ω and ρ mesons, respectively. They correspond to 3.2 % (φ), 12.1 % (ω) and 14.3 % (ρ) differences with respect to the case of the simple counting of the simulated true pairs. The differences depend on the experimental parameters. It is unlikely for them to exceed 50 % at a realistic range of the experimental parameters [88].
Electrons and positrons from open charms are randomly generated and combined into pairs in this simulation. These pairs are, in fact, azimuthally correlated at mid-rapidity, because they originate from the jets due to the large mass of charm quarks. We assume the back-toback e + e − correlation in azimuth as the extreme case of the open charm production. Realistic correlations would exist between the random pairing case and the back-toback correlated case. The top plots in Fig.15 show the invariant mass spectra reconstructed from all true pairs, combinatorial pairs and cc → e + e − , respectively, in central Au+Au collisions at √ s N N = 200 GeV (dN π 0 +π ± /dy = 1000) on the left and in central Pb+Pb collisions at √ s N N = 5.5 TeV (dN π 0 +π ± /dy = 2700) on the right. The distributions of the random di-electron pairs and the back-to-back correlated ones in azimuth are superimposed in the same plot. The middle plots show the ratio of the number of cc → e + e − as a function of the invariant mass. The denominator is the number of di-electrons with random pairing and the numerator is the number of di-electrons with the back-to-back correlation. This ratio varies by a factor of 1.5 to 3 around the mass range of the light vector mesons. The ratio between the number of combinatorial pairs in the random pairing case and in the back-to-back correlated case is consistent within only a few % in both collision systems as shown in the bottom figures. Therefore the correlation of the cc production has little influence on the signal-to-background ratios of the light vector mesons.
In addition to above issues, the other residual effects, which are beyond the scope of this study, are listed below.
• Track reconstruction algorithms bias: Track reconstruction algorithms can bias momentum measurement of charged particles. For example, the algorithm based on the combinatorial Hough transform technique [73][74][75] reconstructs higher momentum than true one, especially for a charged particle producing from the off-axis point. The photonconversion electrons at the off-axis point contribute to the background shape in the relatively higher mass region. In addition, especially under a high multiplicity environment, fake tracks are reconstructed by chance depending on the algorithms. These tracks can contribute as the additional backgrounds.
• The correlation between the tagging efficiency of electrons and the rejection factor of charged hadrons: The correlation between the electron tagging efficiency and the rejection factor of charged hadrons depends on the method of particle identification. For instance, if particles are identified by dE/dx, the correlation has a trade-off relation. Another example is the degradation by the situation where a number of particles simultaneously pass through the detector. If a hadron and an electron enter the same area of the electron identification device, either or both of them can be wrongly identified. In more general case, complicated correlations may appear, since particles are identified with a combination of multiple devices.
• The fiducial effect in the magnetic field: This simulation considers the detector acceptance under the assumption that di-electron kinematics is completely reconstructed. In real experiments, charged particles are bent in the magnetic field and entered into the imperfect coverage of the detectors. The fiducial effect becomes apparent at the edge of the acceptance. Therefore the inefficiency of the electron detection should be taken into account as a function of the magnetic field, the detector positions from the collision point and the detector configurations.
Nevertheless, our simulation study would provide a useful guideline to evaluate the effect of the non-residual factors on measurability of the light vector mesons.
V. CONCLUSION
This paper provides a guideline to evaluate the minimum requirements in order for a given idealized detection system to achieve reasonable statistical significance for the measurement of light vector mesons via di-electron decays in different temperature states, dN π 0 +π ± /dy = 1000 and 2700 intended for the most central Au+Au collisions at √ s N N = 200 GeV and the most central Pb+Pb collisions at √ s N N = 5.5 TeV, respectively. The simulation codes used for this study are openly available [76].
The results suggest that parameter ranges for the measurement of φ and ω mesons are selectable in designing a detector system depending on the number of events at the highest centrality class, even if the residual effects caused by the procedure of the real data analysis and the correlations in the open charm production are considered. The statistical significance of ρ mesons is less than that of φ and ω mesons due to its broad mass shape and the limited mass resolution, even if sufficiently high luminosity is prospected. However, the mass spectrum of ρ mesons in the vacuum can be indirectly evaluated as long as φ and ω mesons spectra are accurately determined. This would provide the baseline to understand the properties of the low-mass di-electron continuum. The production cross sections and the inclusive yields over all pT ranges at midrapidity for different collision systems. The used data points to calculate the production cross sections and the inclusive yields are cited from the publications listed in the second column for each collision system. The production cross section of single electrons is obtained by the Tsallis fit to the measured data points and converted into the cc cross section with the branching ratio of 9.5 % [46]. The production cross sections of the other particles are obtained by fitting to the measured data points with the Tsallis function, or assuming the proper scaling for missing data points. The details are explained in Section II. The errors of the production cross sections and the inclusive yields are expected to be from 10 to 30 % depending on particles. These errors are neglected in the simulation. Masses and branching ratios are cited from the particle data group [68]. The branching ratio of c → e is assumed to be 9.5 % [46].
Appendix D: Dalitz decays of pseudo-scalar mesons
Pseudo-scalar mesons such as π 0 and η mesons mainly decay into two photons. The Dalitz decay corresponds to the case where photons become off-shell and subsequently decay into di-electrons. The relation between 2 γ decay process (P → γγ) and the Dalitz decay process (P → γe + e − ) is described by Kroll-Wada formula [40,41] where M e + e − is the invariant mass of di-electrons, m e is the rest mass of electron and m P is the rest mass of a parent meson. F P Q 2 is the electro-magnetic transition form factor. Q 2 is equivalent to the square of the virtual photon mass (i.e. Q = M e + e − ). The measurements of the form factor by the experiments [79,80] show Λ P ≃ M ρ , where M ρ is the rest mass of ρ meson. The Kroll-Wada formula determines the branching ratio and the phase space of Dalitz decaying di-electron. | 7,812 | 2013-09-23T00:00:00.000 | [
"Physics"
] |
The Role of Perceived Smart Tourism Technology Experience for Tourist Satisfaction, Happiness and Revisit Intention
: The rapid advancement of smart tourism technology brings new opportunities for tourism development. More travel destinations are relying on smart technology to attract more tourists to visit and enrich their travel experience. The main purpose of this study was to explore whether tourists are satisfied with their smart tourism technology experience (i.e., informativeness, accessibility, interactivity, personalization, and security). This study also investigated the impact of smart tourism technology experience on tourists’ happiness and revisit intention. This study used a structural equation method to find the relationship among smart tourism technology attributes, travel satisfaction, happiness, and revisit intention. Surveys of a total of 527 participants who traveled to Macau from Mainland China were used for the analysis. The results showed that accessibility is the most important factor a ff ecting the smart tourism technology experience and personalization the least. Smart tourism technology experience is shown to be significantly associated with travel experience satisfaction, and travel experience satisfaction has a positive e ff ect on both tourists’ happiness and revisit intention. Finally, tourist happiness is also shown to be positively associated with revisit intention. This study provides theoretical and practical significance for the development of smart tourism in the future.
Introduction
With the rapid development of information and communication technology (ICT), the traditional tourism industry has entered an era of smart tourism and smart technologies are now widely used in the tourism industry. Smart technologies explore innovative ways to create memorable experiences for tourists by extending destination co-creation space [1]. From the perspective of tourists, the position of smart technology in travel has become more important. In the initial stage, tourists mainly used ICT for travel information searching and decision-making [2]. With this trend, many tourism-related businesses have adopted various smart technologies for promoting and marketing their destinations. To develop a smart tourism destination, government and destination marketing organizations (DMOs) often establish an evaluation system according to the policy for smart cities [3]. However, the ultimate goal of smart tourism is to create a more convenient and enjoyable travel experience for tourists.
Nowadays, as an important element of experience, smart technologies play an irreplaceable role in travel. Most tourists use smart technologies such as location queries, local restaurant reviews, or mobile payments through smart phones during their travel. Smart technologies are used throughout the whole travel process, Sustainability 2020, 12, 6592 2 of 14 including DMOs websites, tourism apps, social media, and virtual reality for tourists to arrange and enrich their trips. Researchers have recognized the potential of smart technologies and predicted that the smart technologies used by tourists will become more diversified [4]. Especially with the popularity and development of smart phones, tourists can use travel-related apps to plan their travel anytime and anywhere [5].
Tourism development can improve quality of life and travel as a source of happiness [6][7][8]. The pursuit of novelty and high-quality tourism has become a new kind of life experience, often considered an important way to pursue happiness. Most studies on happiness and tourism focus on investigating the influence of destination value and tourist interaction on tourists' happiness [9][10][11][12][13]. A few scholars have explored the relationship between smart tourism experience and happiness. Lee, Lee, Chung, and Koo [14] found that tourists in South Korea are likely to place more value on what they perceive from their destination travel experiences than what they perceive from their experiences with smart tourism technology (STT) services when they evaluate their overall happiness. Kim and Hall [15] investigated the hedonic motivation adoption frameworks of virtual reality (VR) tourism and found that perceived enjoyment deeply affects subjective wellbeing.
Despite these findings, little is known about how smart tourism experiences boost happiness. To date, no studies have built a holistic conceptualization about perceived smart tourism experience and happiness in the Chinese population. In order to fill this gap, this paper aims to develop and investigate a conceptually comprehensive model on STT attributes, travel satisfaction, happiness, and revisit intention. Therefore, the main purpose of this paper is to explore tourists' experiences of smart tourism and then, to study whether the smart tourism experience can boost tourists' happiness.
Three aspects were identified as the objectives of this study: (1) To investigate the main attributes affecting the use of STT by Chinese tourists and the relative importance of these factors to the satisfaction of Chinese tourists' experience; (2) To examine whether travel satisfaction towards each STT attribute affects Chinese tourists' happiness; (3) To demonstrate the influence of Chinese tourists' perception of STT on their revisit intention through their satisfaction and happiness with the tourism experience.
Smart Tourism
Smart, as a new concept, often involves practical devices in the context of economic and social development, including smartphones, smart TVs, and smart cars. Smart here means intelligent, eco-friendly, sustainable, integrated, and ubiquitous [16]. The term of smart has been applied to tourism based on the way in which integrated technologies, real-time data, and physical infrastructure have been combined into a single complex environment much like a city, thus, making great achievements. The practical applications of smart tourism develop faster than academic work because they are initiated as marketing strategies and government projects. However, there is a lack of agreement in the literature about the definition of smart tourism; the term can variously refer to a type of management, a trend, or an information service.
Zhang, Li, and Liu [17] defined smart tourism as a systematic and intensive management transformation. They believed that smart tourism could lead to resource optimization and value co-creation between tourists and providers. Thus, they constructed a capabilities-attributes-applications framework to describe the principle of smart tourism. The framework concerns the application background of smart tourism, which provides market strategies for DMOs and private companies to achieve public welfare and profit. According to Gretzel, Sigala, Xiang, and Koo [18], smart tourism is a new trend in the tourism industry with three components and layers: smart destination, smart business ecosystem, and smart experience, all of which are based on data collection, exchange, and processing. Li, Hu, Huang, and Duan [19] intended that the essence of smart tourism is the ubiquitous tour with information service. They emphasized that the information service is everywhere and can exist in any part of the travel process, allowing tourists to access it freely. However, smart tourism is not only a simple application of ICT but Sustainability 2020, 12, 6592 3 of 14 also an ecosystem that enables tourists, DMOs, and other tourism stakeholder interactions, thus, creating more value, especially the co-creation value generated by tourists. It is a mobile information system that is combined with information and physical infrastructure to create a new experience for tourists [20].
Benefits of Smart Devices in Tourism
With the development of information technology (IT), all industries have inevitably embraced new technologies or experienced their benefits, and tourism is not an exception [21]. The application of smart devices in the context of the tourism industry is becoming increasingly extensive, which maximizes the value of tourism resources and produces enormous social and economic benefits. Examples of smart devices include wearable and portable devices-smartphones, smart glasses, smart watches. In addition, all venues and departments in the tourism industry tap into smart devices, such as self-service check-in kiosks in hotels, flight check-in service machines in airports, self-service ticket machines, and tour guide systems in travel attractions. Tourists benefit from convenient and efficient services by adopting these smart devices.
Due to the innovation and improvement of ICT and portability with practicability, wearable and portable devices are favored by tourists. It is worth mentioning that smartphones play key roles in the leisure experience [22]. Smartphones combined with mobile networks, the internet of things (IoT), and near field communication (NFC) technologies have generated various tourism-related applications, changing the whole industry. With smart devices, more tourists plan travel on their own instead of through third parties such as travel agencies. Smart technologies enable people to book airline tickets, hotels, and other tourism products on the platform of mobile sites [23] and easily obtain information on destination transportation, accommodation, and attractions on their smartphones when they need it. More specifically, tourists use smart phones to browse websites, social networks, and service platforms, which not only supply the updates and real-time information on the destination but also directly communicate with other tourists and tourism marketers to make better travel decisions [24]. Moreover, tourists can connect to WiFi services and make mobile payments (such as for bus tickets) by scanning a QR code at the destination. Smart devices with new technologies bring new development opportunities for tourism.
Smart Tourism and Perceived STTs
ICT is the key factor, both the carrier and manifestation, of smart tourism. STTs include not only smart devices but also, for instance, social platforms, cloud computing, big data, IoT, artificial intelligence (AI), virtual reality (VR), augmented reality (AR), mixed reality, NFC, and radio-frequency identification (RFID), which are related to tourism activities. Especially, VR and AR are emerging STTs. These technologies have become popular in recent years in the context of tourism. Park and Stangl [25] investigated the AR experience from the perspective of sensation-seeking and identified experience-seeking and boredom-susceptibility as two key elements in the AR experience. STTs rely more on the value created by tourists than on the technology itself. The research on STTs can be divided into two themes: traditional online information channels and other new technologies. Online information is generated by tourists, and social media is one popular platform for seeking travel information. No and Kim [26] identified four types of online tourism information sources: blogs, public websites, company websites, and social media websites. No and Kim [26] also identified five features of online information: accessibility, security, information-trust, interaction, and personalization. Their results showed that security is the dominant attribute for public websites. Huang, Goo, Nam, and Yoo [27] summarized the attributes of STT as informativeness, accessibility, interactivity, and personalization.
Informativeness
Informativeness represents a combination of the quality, credibility, and accuracy of information received from STTs at tourism destinations [27]. Informativeness is important to STTs and can directly Sustainability 2020, 12, 6592 4 of 14 influence tourists' attitudes toward them. When STTs provide relevant, sufficient, and accurate information on activities, accommodation, and transportation, the time and effort in searching the information is reduced, and tourists are satisfied with their experience. Informativeness stimulates tourists' rational judgement about the destination and helps them make efficient decisions.
Accessibility
Accessibility represents the extent to which travelers can easily access and use the information offered at the destination by using different types of STTs [27]. Accessibility determines the usability of STTs at the destination. Individuals tend to explore more information about the destination when STTs are highly accessible.
Interactivity
Interactivity is defined as a facilitator that promotes travelers' real-time feedback and active communications when using STTs [27]. This affects tourists' responses to STTs. In social media services, when tourists perceive a high level of interactivity, they tend to adopt the service and communicate more with tourism suppliers through purchasing behavior, commenting, and feedback [28].
Personalization
Personalization refers to the ability of a traveler to obtain specific information to suit his or her personal trip planning needs by using various types of STTs [4,26]. According to their previous purchasing behavior, personality, and preference, tourists can receive suitable recommendations through big data or cloud computing.
Security
Security is defined as the safety of personal information while using various types of STTs [27]. Tourists tend to use STTs at the destination when they feel their personal information is safe. Many previous studies consider security as a core attribute of perceived STTs [26,27].
Happiness and Tourists' Travel Satisfaction
Happiness is usually interpreted as a quality of life or level of hedonic happiness [6,8]. Subjective wellbeing or life satisfaction can be identified as an indicator of happiness. Empirical studies have shown that tourism or travel is a process of seeking hedonic experience, and tourists' happiness varies according to their personality, destination types, and types of travel activities [13,29,30]. Travel prolongs happiness by reducing hedonic adaptation, especially in terms of expectation and serendipity [31]. A positive experience during a trip can increase people's overall happiness, and interaction can be identified as one of the most important factors that enhances happiness [32]. Lee et al. [14] investigated the tourists' value-seeking processes and concluded that tourists' happiness can be increased through travel experience satisfaction and service experience satisfaction.
Smart tourism involves all aspects of tourism, including transportation, accommodation, and attractions. When tourists have positive emotions and attitudes toward STTs, their experience in the destination will be satisfied. As a result, travel satisfaction produces tourist happiness.
This study focuses on perceived STTs in travel satisfaction; therefore, the following hypotheses are proposed:
Tourists' Travel Satisfaction and Revisit Intention
In the field of tourism studies, tourists' satisfaction plays an essential role in predicating behavioral intention. Behavioral intention, also known as loyalty, refers to recommendation intention and revisit intention toward the destination. Tourists' revisit intention reflects the degree of the willingness of tourists to revisit the destination. Tourists' satisfactory experiences produce intention to revisit the destination [33]. Meng and Han [34] found that working-holiday tourism satisfaction with the destination can positively and significantly influence intention to revisit and word-of-mouth intention.
In summary, many scholars have attempted to create constructs that can increase tourists' satisfaction and revisit intention. Previous studies have found that satisfaction has a positive association with revisit intention [35][36][37]. Therefore, this study proposes the following hypotheses:
Hypothesis 3 (H3).
Tourist travel satisfaction is positively associated with tourist revisit intention.
Research Architecture
The main purpose of this study is to understand Chinese tourists' perceived STT experience and how perceived STT affects overall travel experience satisfaction. Therefore, this study investigated how tourists' travel satisfaction affects happiness and revisit intention to the destination, based on five perceived STT attributes. For this purpose, we selected tourists who traveled Macau since Macau local government has announced smart tourism as an official development strategy. In addition, Macau is a popular cultural and leisure destination attracting many tourists from China.
Research Hypotheses
Tourists' perceived STT was selected and classified according to the literature review, which identified five attributes: informativeness, accessibility, interactivity, personalization, and security. The perceived STT experience of Chinese tourists was assumed to have an impact on tourists' travel satisfaction and tourist happiness from travel experience satisfaction. Finally, revisit intention was posited to identify the relationships between travel experience satisfaction and tourist happiness. The research hypothesis model is shown in Figure 1.
Questionnaire Design
The measurement items were adopted from the previous literature and modified for this study. The measures of the eight constructs consist of perceived STT experience, travel satisfaction, happiness, and revisit intention. We reconstructed perceived STT experience for our study based on informativeness, accessibility, interactivity, personalization, and security. The five items related to informativeness were adapted from Luo [38], No and Kim [26], Lee et al. [14], and Yoo et al. [20]. The five items related to accessibility were adapted from No and Kim [26] and Lee et al. [14]. The four items related to interactivity were adapted from No and Kim [26], Yoo et al. [20], and Lee et al. [14]. The four items related to personalization were adapted from No and Kim [26], Huang et al. [27], and Lee et al. [14]. Finally, the five items related to security were adapted from Mills and Morrison [39], No and Kim [26], and Huang et al. [27]. For the construct of overall travel experience satisfaction, six items were adapted from Neal, Sirgy, and Uysal [40], Lee et al. [14], Kim, Woo, and Uysal [41], and Su, Huang, and Chen [42]. The four items related to happiness were adapted from Neal et al. [40], Su et al. [42], and Lee et al. [14], and the four items related to revisit intention were adapted from Kim et al. [41] and Kim, Lee, Uysal, Kim, and Ahn [43].
Tourists' perceived STT was selected and classified according to the literature review, which identified five attributes: informativeness, accessibility, interactivity, personalization, and security. The perceived STT experience of Chinese tourists was assumed to have an impact on tourists' travel satisfaction and tourist happiness from travel experience satisfaction. Finally, revisit intention was posited to identify the relationships between travel experience satisfaction and tourist happiness. The research hypothesis model is shown in Figure 1. The multi-measurement items were used to prevent measurement errors. All items in this study were measured on a seven-point Likert scale, ranging from strongly disagree (1) to strongly agree (7). The survey was written in English and translated into Chinese. A pilot test was conducted to check face validity. When the study questionnaires were distributed, respondents were asked to read an introductory paper on STT before filing out the questionnaires.
The questionnaires were divided into two parts, with a seven-point Likert scale used in the first part. All questions were adopted from tourism and technology studies and modified for the context of STT. The second part collected the demographic information of the respondents, including gender, age, educational background, occupational background, income level, city of residence, frequency of travel, and length of time the respondents had been using STTs.
Sample Collection
To select an appropriate sample, we asked the screening question: "Have you ever used smart tourism technologies in Macau?" Those who answered that they had not used smart tourism technologies in Macau were excluded.
A total of 150 pretest questionnaires were distributed, and 127 valid copies were collected. The statistics indicated that the α value of each item was between 0.802 and 0.92, which was greater than 0.8. Therefore, the questionnaire was considered highly reliable. In addition, after the distribution and communication process of the pretest, the wording of some questions was adjusted to avoid vague sentences before the final questionnaires were officially released. The official distribution sites of the questionnaires were tourist attractions in Macau, and all respondents were Chinese tourists visiting Macau. A total of 587 copies were distributed by a simple random sampling method, and the valid response rate (N = 527) was high, at 89.77%.
Descriptive Analysis
According to the descriptive analysis of demographic data, 52.8% of the subjects were women aged between 21 and 30 years (47.1%), and 49.7% of the total subjects had a university education background and a monthly salary between 40,001 and 80,000 RMB (34.9%). Among all the respondents, 129 worked in the service industry, and most came from mainland China. A total of 310 (58.8%) had been involved in leisure and travel activities once or twice per year. Regarding their history of smart tourism technologies usage, 43.1% had been using such technologies for three to four years, and 41.1% for more than four years. The demographic information of the sample is presented in Table 1.
Reliability Analysis
Reliability is an important factor in testing whether the questionnaire results have internal consistency [44]. According to Koufteros [45], Cronbach's alpha and composite reliability (CR) are two common methods to test the reliability level. Cronbach [46] proposed the reference criteria: when α is less than 0.6, it reflects low reliability; when the α coefficient is between 0.6 and 0.8, it indicates that the reliability is acceptable; when the α coefficient is greater than 0.8, it indicates that the reliability is excellent. Nunnally [47] also suggested that when α is greater than 0.7, reliability is considered high. In addition, the recommended value of composite reliability (CR) should exceed 0.6, and the higher the value, the better the reliability [48,49].
Therefore, this study used Smart PLS 3.0 to calculate Cronbach's alpha and CR. The results showed that the α values of the eight variables in this study were between 0.847 and 0.920, while the CR values were between 0.897 and 0.940. In other words, the α and CR values reached the standard value requirements.
Validity Test
Fornell and Larcker [48] suggested that the validity test should consist of convergent validity (CV) and discriminant validity (DV) to reflect the authenticity and accuracy of the questionnaire. CV measures the correlation of different measurements of the same variable, while DV measures the non-correlation between items with different variables. According to Anderson and Gerbing [50], to determine convergent validity, the first step is to compute the standardized load on each variable. If the load coefficient is greater than 0.7, this indicates that the validity of each construct is excellent. The second step is to calculate the average variance extracted (AVE). This should generally be greater than 0.5. DV can be measured by comparing the degree of correlation between the square root value of AVE and the latent variables [48]. When the measurement results show that the square root value of AVE of each variable is greater than that of the correlation coefficient and the value at least 0.5, the questionnaires have high DV.
The load coefficients of all the questions in the questionnaires were between 0.751 and 0.904, which is greater than the suggested value of 0.7, and the AVE of each variable was between 0.687 and Sustainability 2020, 12, 6592 8 of 14 0.795, which is greater than the suggested value of 0.5. In conclusion, all variables in this research model featured high CV. Table 2 presents the results of the load factor and AVE. In Table 2, the square root of the AVE value on the diagonal is larger than the correlation coefficient value at the lower left corner of the diagonal. This indicates adequate discriminant validity between Sustainability 2020, 12, 6592 9 of 14 the latent variables. In conclusion, the analytical results of CV and DV confirm that this questionnaire was satisfactory. Table 3 shows analysis of discriminant validity and all diagonal values exceed the inter-construct correlations with acceptable level.
Structural Model and Hypotheses Test
There are eight variables in this study, including five first-order variables used as indicators to create a second-order variable, perceived STT experience. This study first analyzed whether the first-order variables are related to the second-order variable (perceived STT experience) then tested the hypotheses model by using Smart PLS 3.0. First, a bootstrapping technique was used to determine the path estimates and t-statistics for the relative importance of the five first-order variables to perceived STT experience. All five variables were significantly associated with perceived STT experience. Among these five paths, accessibility was the most significant variable (path coefficient is 0.285, T value is 35.093), followed by informativeness (path coefficient is 0.254, T value is 31.044), security (path coefficient is 0.239, T value is 30.062), interactivity (path coefficient is 0.212, T value is 36.293), and personalization (path coefficient is 0.207, T value is 35.359).
Then, the proposed hypotheses were analyzed with SEM, adapting the bootstrapping technique with a sample size of 5000. In addition, in order to examine the explanatory power and predictive relevance of the variables in the research model, the explanatory variance R 2 value was calculated using the PLS algorithm to measure the explanatory power, and the predictive relevance was calculated using the blindfolding method. When the Q 2 value of the variables is greater than 0, it indicates that the model has predictive relevance [51] (pp. 193-221). Table 4 and Figure 2 shows that each path coefficient is greater than 0.2, the T value is greater than 3.29, and the P value is less than 0.001, which means these paths are significant. The results indicate that perceived STTs are positively associated with tourists' travel satisfaction, which supports H1. The study also found that tourists' travel satisfaction was significantly associated with tourist happiness, supporting H2. Regarding the relationship with tourist revisit intention, there was a positive relationship between tourists' travel satisfaction and happiness, which supports H3 and H4 (See Table 5 and Figure 3). Moreover, the total effect (0.737) of tourists' travel satisfaction on revisit intention is greater than the value of path coefficient (0.356), indicating that as an intermediary variable, tourists' happiness weakened the effect of revisit intention. In conclusion, all hypotheses were supported in this research model. supports H1. The study also found that tourists' travel satisfaction was significantly associated with tourist happiness, supporting H2. Regarding the relationship with tourist revisit intention, there was a positive relationship between tourists' travel satisfaction and happiness, which supports H3 and H4 (See Table 5 and Figure 3). Moreover, the total effect (0.737) of tourists' travel satisfaction on revisit intention is greater than the value of path coefficient (0.356), indicating that as an intermediary variable, tourists' happiness weakened the effect of revisit intention. In conclusion, all hypotheses were supported in this research model.
Conclusions
The results of this study enriched the theoretical implications of smart tourism. The study
Conclusions
The results of this study enriched the theoretical implications of smart tourism. The study adopted the attributes of STT proposed by Huang et al. [27] and added a new attribute, security. In other words, this study transformed smart tourism into a measurable model and identified the importance of these five attributes (informativeness, accessibility, interactivity, personalization, and security). The results showed that accessibility was the most significant contributor to tourists' perceived STT experience. The reasons may be that tourists can easily use STTs at the destination at any time when they are highly accessible. With easy access to STT, tourists spend less time and effort investigating how to use these technologies, which enables them to enjoy technology-based travel experiences at the destination. Informativeness was another highly influential contributor to perceived STT experience followed by accessibility. When embracing STTs at destinations, tourists can find information on food and transportation at the destination. STTs enable tourists to have more opportunities to engage in a wide range of activities and events. Moreover, tourists displayed relatively low satisfaction with personalization in the context of STT experience. Most of the Chinese tourist participants were familiar with STTs and had used them for more than three years. Therefore, for Chinese tourists, the ordinary technology used in tourism might not be affected since they pursued unique and novelty technology-based travel experiences. In this regard, STT may ignore their personal requirements.
Theoretical Implications
Based on these results, several important theoretical contributions of this research were found. First, the findings of this study provided a deeper understanding of the relationship between two concepts (STT experience and tourist happiness) and developed a research model for the relationship among perceived STT experience, travel experience satisfaction, tourist happiness, and revisit intention. Based on the empirical analysis results, this study revealed that tourists intend to revisit the destination when they are satisfied with the smart tourism experience. Second, this study emphasized the relationship between STT experience and tourist happiness. High satisfaction with STT experience can create high travel experience satisfaction, thus, improving happiness.
Practical Implications, Limitations, and Future Research
This study offers practical implications for DMOs. It revealed that most tourists have a positive intention to use STTs, and DMOs can create specific activities and experiences for tourists by developing STTs, especially in terms of personalization. For instance, when tourists want to find a restaurant, technology can recommend the nearest restaurant according to their preferences and guide them on a suitable route to the restaurant.
Although the study has many useful theoretical and practical implications, there are also some limitations. First, the sample is limited and may not be representative of the whole population. Although this study did not aim at young adults, targeted samples were mostly under 50 years old. In addition, since many older adults have difficulties with smart technologies, these age groups need to be explored with future research. Second, this study was conducted in Macau, which may have a unique tourist type and city environment. The research framework may not be applicable to other destinations, and the results arising from it may be different. Thus, an extended comparative study of multiple cities is needed. Third, this study added security as an attribute to measure perceived STT experience, and five attributes in total were examined. The applicability of these indicators remains to be investigated. Future studies can investigate whether there are any other factors affecting the STT experience for better understanding of current STT. In addition, to generalize the research, more diverse samples from other cities or countries are needed since this study was only conducted on Chinese tourists who visited Macau. | 6,580 | 2020-08-14T00:00:00.000 | [
"Business",
"Computer Science"
] |
The Application of "Sets" of Discrete Mathematics everyday in life
Mathematics is related to something that can be calculated or something expressed in terms of quantity (number). There are so many economic variables (concepts) that are quantified, such as the price of goods, the amount of goods demanded and offered, the money supply, the level of profit sharing margin, national income, investment level, and so on. Mathematics does not only play a role in quantifying economic variables, but also explores the relationship between economic variables. The relationship of an economic variable with other economic variables is often expressed in the form of an economic model. Because economic variables can be quantified, these economic models can be expressed in the form of mathematical symbols / models.
Background
Mathematics is a lesson that has a lot of formulas and also people who learn it must memorize a lot. Memorizing is not enough, because we must try to apply it in everyday life. Mathematics is also a medium that is able to solve problems, from simple problems to even complex problems.
Often we ask ourselves, "what are the benefits of learning mathematics? Is mathematics used in everyday life? What are the benefits of the set? " Surely we often ask this question to ourselves, to our friends, or to our math teacher. This question occurs because we are already upset or give up to learn a lesson that they think is very boring and unnecessary.
"Set". Some people don't know what the meaning of the set is so that sometimes people misinterpret it. Actually the word set is related to grouping. There are some who have known the association of associations with groupings, finally they conclude themselves even though they have not been able to describe it clearly.
Set is commonly used in mathematics and daily life. In everyday life we find that understanding as in the Amikom Student Association, a collection of books, stamp collections, study groups, and others. The set and collection words are used in the same definition, even though both have the same meaning. Thus the word set and collection.
The benefit of the association is to help people who work to think rationally, critically, straightly, steadily, orderly, methodical and coherent, improve their ability to think abstractly, meticulously, and 154. CSRID Journal, Vol. 10 No. 3 Juni 2018, Hal. 153-161 ISSN: 2085-1367https://www.doi.org/10.22303/csrid.10.3.2018.153-161 eISSN :2460 objectively, improve intelligence and improve abilities sharply and independently, with and encourage people to think for themselves by using systematic principles, increasing love for truth and avoiding thinking errors, errors and errors, are able to analyze events.
A. Definition of Sets
The set was introduced by George Cantor (1845 -1918), a German mathematician. He is the Father theory of Sets, because he was the first to develop the branch of mathematics. He said that the sets is a collection of objects. These items can be abstract or concrete objects. Basically, objects in a set do not have to have the same character/character or set is a collection of clearly defined objects or objects.
Sets are The set is a collection of objects or objects or symbols that have a meaning that can be clearly defined which are members of the set and which are not members of the set. In our daily lives, we often talk about discrete objects, such as books, pencils, classes, computers, students, and so on. Or we can tell, The set is a collection of objects or symbol that have a meaning that can be clearly defined which are members of set and which are not the members of set.
Let's look at the objects around us, for example a group of students B, a group of students studying in class Y, etc. if we observe all the objects around us that are used as examples above can be clearly defined and which members can be distinguished and which are not. Objects contained in the set are called elements, members, and unsure.
Delicious food set, beautiful girl set and beautiful flower set are examples of sets that cannot be clearly defined. Why? Because of the delicious food, the beauty of the girl and the beauty of flowers for some people is very relative. The beauty of flowers for someone is not necessarily beautiful for others. So it's relative for everyone. Object or objects included in the set are called members or elements of the set. Generally the writing of sets uses capital letters A, B, C and so on, and the set members are written in lowercase letters.
Intuitively the set is a collection of objects that have certain properties. The objects in the set are called members (elements) of the set. The particular nature of the members of the set is called the nature of the set. The nature of the set is clearly defined.
B. Type of Sets
There are a lot of type of Sets : 1. Subset Set A is said to be a Subset of Set B written A ⊂ B , if each member A is member of B. they have some Requirements : If A ={1,2,3,4,5} and B={2,4} , then B ⊂ A Because every elements of B is set member of A. Explanation : From the definition above, the set of parts must have an element of set A also an element of set B. The meaning of the two sets must be interrelated.
Nullset
Nullset is a set that does not have the same member element at all. Nullset have some Requirements : Blank set = A or {} Blank set is single. An empty set is a subset of each set. NOTE : an empty set cannot be declared {0} because {0} ≠ {} Explanation : from the definition above the blank set is a set that does not have a single member and usually an empty set is denoted by the Greek Letter ø (phi).
Universes set
The set of universes is usually denoted by "U" or "S" (Universum) which means a set that contains all the members discussed or in the other words the set of obejcts being discussed.
Equal set
If each member of set A is also a member of set B, and vice versa. Denote by A= B Requirements : two sets of members must be the same. Example : A = {q,w,e} B = {q,w,e} then A = B Equal set or set is the same, has two sets whose members are the same member of set A {q,w,e} then set B will also have members namely {q,w,e}. CSRID Journal, Vol. 10 No. 3 Juni 2018, Hal. 153-161 ISSN: 2085-1367https://www.doi.org/10.22303/csrid.10.3.2018.153-161 eISSN :2460 7. Equivalent set The equivalent set is a set whose members are as many as other sets. They have some requirements : Cardinal numbers are expressed by notation n(A) A≈B, said to be aquivalent to set B, Example : 2. State the set by mentioning or registering its members, this method is also called Description. Namely by the way the members of the set are written in curly brackets and between members with one another are separated by commas.
Disjoint set
Example : A = {apple, watermelon, guava, orange, mango} This is for groups with few or limited members B = {Jogjakarta, Semarang, Palembang, Padang, … , Aceh} This is for groups with many but limited members C = {2,3,4,5,6,10,11,…} This is for groups with a large number of members 3. State the set with the set-up notation, by writing down general characteristics or general characteristics (roles) of its members. The way to declare a set with a set-up notation is to follow the following rules : a. The object or object is represented by a variable (a,b,c,…,z) b. Writhe down the terms of membership behind the sign '|' Example : Read : the set of each x such that x is less than 7 and x is the real number B = {(x,y) | y + x = 7, x and y is real number} Read : a pairs set of x and y such that y + x is equal to 7 for x and y is the real number.
4. The set can also be presented graphically (Venn Diagram) Presentation of setd with Venn Diagrams was discovered by a British Mathematician named John Venn in 1881. The set of universes by rectangles and other sets with circles in the quadrilateral.
E. The Law of Algebra in the Sets
The laws of the set are called set algebra laws. There are many laws are contained in the set algebra, but here only spelled put 1-1. Some of these laws are similar to algebraic laws on real number systems such as a ( b + c ) = ab + ac ,namely distributive law.
F. The benefits of Sets
By studying the Sets, it is expected that the logical ability will be increasingly honed and will spur us so that we are able to think logically, because in life logic has an important role because logic is related to reason. There are many logical uses include : 1. It can be helping everyone who learns logic to think rationally, critically, straightly, permanently, orderly, methodically, and coherently. 2. It can be improve the ability to think abstractly, carefully, and objectively. 3. It can be increase intelligence and improve the ability to think sharply and independently. 4. It can be forching and encouraging people to think for themselves by using systematic principels. 5. It can be increase love of truth and avoid mistakes in thingking, errors and error 6. It can be able to analyze an event There are some example of sets :
G. The practice a Sets in daily Life
A sets is some collection that is considered as a unit that is combined in a circle. There are so many implementations that we can see in everyday life. Some examples that are often used by teachers and lecturers are examples of hobbies, because many hobbies in life are the same. For example, there are hobbies who play soccer, while those who like playing basketball. But as for those who like both. In this case we can use the set to know it. Example questions There are 20 children in the class and 5 people who like soccer, 5 people like basketball, 10 people like both. And the results will be like this. There are many things that we can see in everyday life about the set. Where we are grouped every time in making a big project. For sure, every project like that needs grouping. There was a team that made about this and the tone of the team that made it. Some people help both because of experts in both. Like a set curve in a work environment.
RESEARCH METHODS
Member of research, the name of sets mostly using capital letters like A,B,C, and X. while members of sets are usually denoted by lowercase letters such as a,b,c,x,and y. for example H is the sets of all vowels in the latin alphabet so the objects included in the set H are a,i,u,e, and o. the objects that enter in a set are referred to as members of the set. The notation for declaring members of a set is "" while the notation for non-members is "". Therefore a H, iH, u H, e H, and o H while b H, c H and d H. the term member used above can be replaced with the term elements or element. The special symbols used in set teory are : | 2,707.6 | 2021-03-03T00:00:00.000 | [
"Economics"
] |
Generalized Moment Correction for Long-Ranged Electrostatics
Describing long-ranged electrostatics using short-ranged pair potentials is appealing because the computational complexity scales linearly with the number of particles. The foundation of the approach presented here is to mimic the long-ranged medium response by cancelling electric multipoles within a small cutoff sphere. We propose a rigorous and formally exact new method that cancels up to infinitely many multipole moments and is free of operational damping parameters often required in existing theories. Using molecular dynamics simulations of water with and without added salt, we discuss radial distribution functions, Kirkwood–Buff integrals, dielectrics, diffusion coefficients, and angular correlations in relation to existing electrostatic models. We find that the proposed method is an efficient and accurate alternative for handling long-ranged electrostatics as compared to Ewald summation schemes. The methodology and proposed parameterization are applicable also for dipole–dipole interactions.
S1 Derivation of image charges
By using r p = c p r andẑ p = z p /z it is possible to transform Eq. 7 (main text) into Eq. S1.
The solution 1 to Eq. S1, by using c p = q −p , iŝ In the bottom product of Eq. S2 we factor out q −p , and then split all products into cases when i < p and i > p. These modifications are shown in Eq. S3.
Further modification by variable substitution (i = i−p in the top and bottom right products, and i = −i + p in the bottom left product) gives Eq. S4.
From Eq. S4 and onward we will re-letter the new index symbol i with the old i when using variable substitution, i.e. i → i in this case. This to avoid multiple indexes and thus confusions with other entities. Further simplification of the two left products (by factoring out q −i from the top one) gives Eq. S5.
We now factor out q −i−p from the top product and q −i from the bottom product giving Eq. S6, where we also note the cancellation of the (−1) P −p factors.
Simplification of the left products yield By making the variable substitution i = P − i − p + 1 in the top product we get Eq. S8 where we note that the products together are equal to the q-binomial coefficient 2 P Thus we now have arrived atẑ (S9)
S2 Self-energy
Starting from Eq. 7 (main text) we note that if a particle is positioned in the origin, i.e. r = 0, then the equation becomes (S10) Here we have indexed the image charges with primes as to distinguish them from the charges when we calculate the potential from a particle at position r > 0. Note that in Eq. S10 there is only a charge present due to the centered particle and no higher order moments. Assuming that the image particles needed to cancel this charge (and all higher order moments generated by themselves in the process) are positioned at r p = c p r where r > 0 is any point, then Eq. S10 converts to Eq. S11 which has its solution 1 shown in Eq. S12 where we have used By condensing these products into one, and splitting the result as to give products for i < p and i > p, we get . (S13) Variable substitution using i = i − p in the right product giveŝ and by factoring out q −p+i from the left product we get .
(S15) Using these moments, the self-energy becomes Reshuffling the terms in Eq. S16 gives Eq. S17.
The denominators in Eq. S17 are polynomials with only non-negative powers. Thus, if q → 0 only the constant term 1 will be none-vanishing. In the same limit the numerator will be zero for every p > 1 and thus the entire far-right sum will equal one in the limit q → 0, which stems from the p = 1 term. The final expression (independent of P ) for the far right sum in the limit q → 0 is thus 1 as is shown in Eq. S18. The choice of q seems somewhat arbitrary however our choice of q → 0 comes from the following arguments: In the original derivation for the potential we chose to mirror the particle position in the cut-off as to get the image particle positions. However, this is not possible to do when the particle is in the origin (since then we would have to divide by zero). Thus, we choose to mirror an identical particle infinitesimally close (q → 0) to the origin.
S3 Dielectric constant
The dielectric constant, ε r , has been derived within a known theoretical framework 3 where the key equation is Here M 2 are the fluctuations of the dipole moment M = N i=1 µ i , k B is the Boltzmann constant, and V the volume of the unit cell. Different values ofT (0) is used depending on the method. In the following derivations we want to stress that q = q(r). In order to get first higher order interactions, we write where ∇ is the gradient operator. Further second higher order interactions then are obtained as Note that S(q) is not angle-dependent and thus However, by further apply ∇ T to this we get The total expression for ∇ T ∇ (S(q)/r) can be parted like In order to get the proper evaluation of the dielectric constant we have to evaluate the and For k = 0, i.e. evaluation of the static dielectric constant, the spherical Bessel functions becomes j 0 (0) = 1 and j 2 (0) = 0. Therefore A(0) has the trivial solution zero (since the singularity of a(r) in r = 0 has been explicitly dealt with 3 ) and we now only have to evaluate B(0). Thus, we have where the limit ∞ has changed to R c due to the fact that we use S(q) ≡ 0 for q > 1, that is r > R c . Integration by parts gives which is true for all the tested pair potentials except q-potential using P = 1 where B(0) = 0.
S4 Larger systems
The density, dielectric constant, Kirkwood factor G K , standard deviation of the total energy, and diffusion coefficient, for the different potentials are presented in Table S1. Here * indicate a large system, i.e. a system where R c is less than a fourth of the cubic system side-length.
In this case the number of water molecules was N = 5000. The three consistent differences we see are a larger diffusion coefficient, G K , and σ E in a larger system. System dependencies of the first are well-known 4 and our results are consistent with this observation. Increases of the second are small and could fall under the uncertainty given by the standard deviation.
Yet the increase of the standard deviation of the energy is noticeable for all q-potentials and Ewald. However both SP1 and SP3 are consistently low in this regard. Potential R c = 1.28 nm, * R c = 1.28 nm R c = 1.60 nm ρ ε r G K σ E D ρ ε r G K σ E D ρ ε r G K σ E D q(P = 1) 1099 -3.5 --1106 -3. | 1,708 | 2019-04-23T00:00:00.000 | [
"Physics"
] |
Experimental Implementation of the Method of Generation of a Sequence of Ultrashort Gigawatt Cherenkov Superradiance Pulses with a Nanosecond Repetition Period
A periodic sequence of ultrashort superradiance pulses during the current pulse of an electron beam with a duration of about 40 ns has been generated in an experiment with a relativistic backward-wave oscillator with wave reflectors at the edges of the electron–wave interaction region. The pulse repetition period has been specified by the electron–wave feedback time and is 5.9 ns, which corresponds to the repetition frequency of 170 MHz at a FWHM duration of about 0.8 ns. The frequency of microwave oscillations is 10 GHz. The peak power of pulses is 0.8–1.3 GW. The corresponding conversion coefficients defined as the ratio of the peak power of the ultrashort microwave pulse to the power of the electron beam are 0.7–1.2.
INTRODUCTION
Interest in sources of intense microwave radiation is due significantly to studies of the electromagnetic compatibility of electronic devices and to developments for electronic warfare. Pulse-periodic relativistic microwave oscillators and amplifiers operating either in the quasistationary generation regime (the duration of a microwave pulse is close to the duration of the feeding electron beam and is about 100 oscillation cycles) or in the regime of ultrashort (few-cycle) pulses (USPs) [1][2][3] are primarily used as such sources. In the latter regime, the peak power of USPs can exceed the power of the electron beam. In both regimes, the maximum repetition frequency of microwave pulses is determined by the capabilities of a source of high-voltage nanosecond pulses applied to the vacuum diode of the microwave oscillator, where a high-current electron beam is generated, and is usually limited to a value of about 1 kHz. Approaches and schemes were proposed in theoretical works [4][5][6] to significantly increase the repetition frequency of USPs. In particular, combinations of relativistic backward-wave and traveling-wave oscillators, one of which was used as an active element (amplifier) and the other served as a nonlinear saturated absorber in the feedback circuit, were analyzed in [4,5]. This scheme was experimentally confirmed in [7]. Another approach [6] is based on the partial reflection of USPs from the output of the relativistic backward-wave oscillator in the superradiance regime. The numerical simulation showed that the repetition frequency of USPs in such devices is determined by the characteristic time in the "wave pulse-electron beam" feedback circuit and reaches hundreds of megahertz, and the peak power of USPs exceeds the power of the feeding electron beam. To estimate the efficiency of such regimes, the conversion coefficient K that is the ratio of the peak power of USPs to the power of the electron beam is used [2].
In this work, we report the results of the experiment on the generation of the periodic sequence of USPs conducted with a relativistic superradiant backwardwave oscillator with wave reflectors at the edges of the interaction region described in [6]. The preliminary numerical experiment showed that a periodic sequence of USPs with a repetition frequency of about 200 MHz and is formed during the current beam pulse.
EXPERIMENTAL SETUP AND THE SYSTEM OF DETECTION OF MICROWAVE SIGNALS
The layout of the experimental setup with the microwave detection system is shown in Fig. 1. Highvoltage pulses were obtained from a SINUS high-current pulsed generator [8] with a triple forming line (1),
OPTICS AND LASER PHYSICS
which provides voltage pulses with an amplitude up to 330 kV and a FWHM duration of 36 ns. The tubular relativistic electron beam was generated in a coaxial diode with magnetic insulation and an edge explosive emission cathode 35 mm in diameter. A pulsed solenoid (2) with the length of the uniform magnetic field segment of about 600 mm was used to transport the beam through the electrodynamic system of the microwave oscillator. The operation of the relativistic backward-wave oscillator in the superradiance regime is based on the cumulative transfer of the energy from electrons to an ultrashort electromagnetic pulse propagating against the electron beam [2,3]. To fabricate the electrodynamic system under study (Fig. 2), we performed the numerical simulation using the axisymmetric 2.5-D version of the completely electromagnetic particle-in-cell code KARAT [9]. The system had two reflectors, one of which was placed at the input of the slow-wave structure from the side of the cathode unit and ensured the total reflection of the incident microwave wave. The second reflector, placed at the output of the generator from the side of the electron collector, returned about 5% of power to the slow-wave structure. Because of the presence of this feedback circuit, each formed USP initiates the next pulse. The slow-wave structure of the generator consisted of 45 corrugations with a period of 12 mm and an average diameter of approximately 1.3λ, where λ is the radiation wavelength. The depth of the corrugation increased smoothly from the cathode edge of the system to the collector edge. The last seven corrugations had the same amplitude and composed a uniform segment, in which each USP was formed. A directional coupler (3) based on a circular waveguide placed at the output of the microwave oscillator was used, together with lamp detector no. 1, to detect the amplitude and shape of USPs in the output waveguide. The transient attenuation of the coupler measured using an Agilent 8719 ET (50 MHz-12.5 GHz) network analyzer in the frequency range of 9-12 GHz was 69-71 dB. A conical horn (4) with an output window about 200 mm in diameter was used to extract radiation. Receiving antennas in the form of the open end of the 23 × 10-mm rectangular waveguide were placed at a distance of 4.0 m from the aperture of the emitting horn and were used to detect the amplitude and shape of USPs in open space (antenna 5 together with lamp detector no. 2) and for spectral measurements (antenna 6). The radiation spectrum was determined by the heterodyne method (generator 7 and mixer 8 in Fig. 1) by the fast Fourier transform of the intermediate-frequency signal from a Tektronix MSO 64 (6 GHz, 25 GSa/s) oscilloscope. The antenna measurement region was separated from the environment by microwave absorbers. To visually control the spatial distribution of the microwave power flux density, we
RESULTS OF THE EXPERIMENT
At a voltage amplitude of 270 kV across the vacuum diode of the electron accelerator, a beam current of 4.0 kA, and a guiding magnetic field of 2.2 T, a periodic sequence of USPs (Fig. 3) with a FWHM duration of approximately 0.8 ns and a repetition frequency of 170 MHz (a repetition period of 5.9 ns) was generated. This pulse repetition period corresponds to the feedback period in the generator estimated as , where is the velocity of electrons in the beam, is the group velocity of the opposite electromagnetic wave, and L is the length of the (Fig. 2). In a series of 20 pulses, the maximum spread of the amplitudes of USPs was no more than 15% (Fig. 4). The frequency of oscillations in each USP was about 10 GHz (Fig. 5).
The peak power in the first USP, which was obtained by integrating the spatial distribution of the power flux density (corresponding to the TM 01 wave), was P 1 = (1.3 ± 0.2) GW. Knowing the peak amplitudes of other USPs and power-voltage characteristic of the lamp detector, we estimated their peak powers as P 2 = (1.0 ± 0.2) GW, P 3 = (1.1 ± 0.2) GW, P 4 = (1.0 ± 0.2) GW, P 5 = (0.8 ± 0.1) GW, and P 6 = (0.2 ± 0.03) GW. The conversion coefficients determined as the ratio of the peak power of each USP to the power of the electron beam at the time marked by the triangle in Fig. 3 are K 1 = 1.2 ± 0.2, K 2 = 0.9 ± 0.2, K 3 = 1.0 ± 0.2, K 4 = 0.9 ± 0.2, K 5 = 0.7 ± 0.1, and K 6 = 0.2 ± 0.03. The luminescence of the neon lamp panel under microwave irradiation had a ring shape.
Simultaneously, the radiation power was independently measured using the directional waveguide coupler placed in the output waveguide of the generator; a signal from the coupler was guided to lamp detector no. 1 and was processed taking into account the power-voltage characteristic of this detector. This measurement gave the estimates P 1 = (1.4 ± 0.2) GW, P 2 = (1.0 ± 0.2) GW, P 3 = (1.1 ± 0.2) GW, P 4 = (1.1 ± 0.2) GW, P 5 = (0.8 ± 0.1) GW, and P 6 = (0.3 ± 0.04) GW for the peak powers of the six USPs.
In a separate series of experiments involving the detection of signals using the coupler, the energy of microwave radiation was measured with the vacuum calorimeter [10], which was placed immediately behind the coupler. Several measurements performed at the parameters of the generator indicated above showed that the microwave energy per measurement is in the range of 8.5-9.6 J. The analysis of the corresponding oscillograms under the assumption that "noisy" generation at frequencies near 10 GHz primarily occurs in intervals between single USPs gives an estimate of 1.4-1.5 GW for the peak power of the first USP. Suggesting that the total duration of microwave generation is about 30 ns (Fig. 3), the average microwave power during the electron beam pulse can be estimated as = 8.5-9.6 J/30 ns = 0.28-0.32 GW. The energy efficiency of the generator defined as the ratio of the microwave energy to the energy of the electron beam is about 23%.
The maximum peak power of the first USP and its reduction for the next pulses should be attributed primarily to the gradual deviation of the parameters of the generator from the optimal parameters during the electron beam pulse because of the expansion of the cathode plasma. This expansion is indicated by the monotonic decrease in the voltage in the vacuum diode during the pulse (Figs. 3, 4). The accumulation of decelerated electrons in the device, which are produced in a significant quantity in each event of generation of USPs, probably also makes a contribution. During the interval between two USPs, such electrons do not leave the slow-wave structure, slowly drifting toward the collector and penetrating into the vacuum diode (see Fig. 3 in [6]). Numerous strongly decelerated electrons also affect the formation of the electron beam and the electron-wave interaction and change the conditions of generation of all USPs except for the first. At the same time, this negative effect in the considered system with end reflectors is weaker than that in the single-pass generator without reflectors, where the power of the second USP was several times lower than the power of the first USP.
CONCLUSIONS
The experimental results have demonstrated the efficiency of the proposed generation scheme of the periodic sequence of USPs and have confirmed that the theoretical model of the device is applicable. At the same time, the peak powers of USPs and the corresponding conversion coefficients obtained in the experiment proved to be approximately half the values obtained in the axisymmetric particle-in-cell simulation. The most probable reason for this relation is the diocotron instability in an initially azimuthally homogeneous electron beam, which is developed over the entire length of its transport and results in the azimuthal filamentation and radial spread of the beam. Comparison of the signatures of the beam on polycaprolactam targets placed at the input and output of the slow-wave structure of the device in the regime without microwave generation has shown that the thickness of the wall of the tubular beam increases from 0.8 mm near the edge of the cathode to 3 mm in the region with the maximum depth of corrugation. As a result, in this region, where USPs are formed, the estimated coupling impedance of the TM 01 operating wave with the electron beam is approximately half the value in the absence of instability. The azimuthal inhomogeneity of the beam can promote the development of asymmetric oscillations, but the generated microwave spectrum and power flux density distribution obtained in the described experiments indicate the selective excitation of the TM 01 mode. GHz is the intermediate frequency. | 2,846.2 | 2022-04-01T00:00:00.000 | [
"Physics"
] |
Uncertainty assessment applied to marine subsurface datasets
A recently released voxel model quantifying aggregate resources of the Belgian part of the North Sea includes lithological properties of all Quaternary sediments and modelling-related uncertainty. As the underlying borehole data come from various sources and cover a long time-span, data-related uncertainties should be accounted for as well. Applying a tiered data-uncertainty assessment to a composite lithology dataset with uniform, standardized lithological descriptions and rigorously completed metadata fields, uncertainties were qualified and quantified for positioning, sampling and vintage. The uncertainty on horizontal positioning combines navigational errors, on-board and off-deck offsets and underwater drift. Sampling-gear uncertainty evaluates the suitability of each instrument in terms of its efficiency of sediment yield per lithological class. Vintage uncertainty provides a likelihood of temporal change since the moment of sampling, using the mobility of fine-scale bedforms as an indicator. For each uncertainty component, quality flags from 1 (very uncertain) to 5 (very certain) were defined and converted into corresponding uncertainty percentages meeting the input requirements of the voxel model. Obviously, an uncertainty-based data selection procedure, aimed at improving the confidence of data products, reduces data density. Whether or not this density reduction is detrimental to the spatial coverage of data products, will depend on their intended use. At the very least, demonstrable reductions in spatial coverage will help to highlight the need for future data acquisition and to optimize survey plans. By opening up our subsurface model with associated data uncertainties in a public decision support application, policy makers and other end users are better able to visualize overall confidence and identify areas with insufficient coverage meeting their needs. Having to work with a borehole dataset that is increasingly limited with depth below the seabed, engineering geologists and geospatial analysts in particular will profit from a better visualization of data-related uncertainty. Thematic collection: This article is part of the Mapping the Geology and Topography of the European Seas (EMODnet) collection available at: https://www.lyellcollection.org/cc/EMODnet
Contributing towards a more sustainable society, pan-European data initiatives in the field of geology are on the rise. In order to streamline access to the diverse databases and services involved, the umbrella organization of all geological surveys in Europe, EuroGeoSurveys, piloted the European Geological Data Infrastructure (EGDI). EU co-funded projects include EMODnet (the European Marine Observation and Data network; Martín Míguez et al. 2019) and GeoERA (Establishing the European Geological Surveys Research Area to deliver a Geological Service for Europe; Vidovic et al. 2020).
For the marine realm, high-quality substrate and habitat maps are generated from the resulting databases, underpinning Europe's Blue Growth strategy and its Marine Strategy Framework Directive (MSFD), supporting sustainable growth in the marine and maritime sectors. A better management of the seabed and its subsurface is needed, as the pressures from human activities intensify (Halpern et al. 2008). Seabed-sediment maps of EMODnet Geology, for example, are instrumental in assessing the status of the seabed from a transnational habitat-mapping and MSFD perspective. Each European marine data initiative has the potential to enhance the effectiveness of marine spatial plans covering aggregate extraction, dredging and disposal of sediment, fisheries and windfarm development. Such plans are needed to optimize the assignment of specific zones for each activity and to designate marine protected areas at the most suitable locations (Douvere 2008;Douvere and Ehler 2011). Belgium, pioneer in science-based spatial planning, is at the forefront of integrating socio-economic, ecological and institutional aspects of human activities at sea (Compendium for Coast and Sea; Devriese et al. 2018).
In all of these initiatives, data and datasets from different origins, time periods and owners are harmonized and merged, but the quality of the supporting data is quantified seldomly. However, the applied value of scientific findings on environmental status and seabed-habitat changes may be limited by uncertainties related to metadata and the quality of the underlying geological data (van Heteren and Van Lancker 2015). Traditionally, data uncertainties were neglected or at least left unquantified in seabed-substrate and -habitat maps (e.g. 1:250 000 series of geological maps of the UK continental shelf areas; British Geological Survey 1977Survey -2000. In the latest EMODnet Geology data products, data density is not considered, nor data quality. Instead, a highest confidence score is assigned when sediment sampling as well as remote sensing are used to create a seabed-sediment map (Kaskela et al. 2019). Generally, data are not discarded, even when old or of poor quality, since data are usually in short supply.
Dealing with uncertainty is an inherent element of the geological interpretation (Bond 2015;Pérez-Díaz et al. 2020) and therefore quantification of the full spectrum of data-related uncertainties requires some additional steps. Quality flagging is the most basic approach to quantifying uncertainty within a dataset and is done by assessing metadata fields. It can be limited to indicating the presence or absence of data, expressed in only a few categories (e.g. 1 to 5, or low to high), or be very complex with a full range of quantitative error ranges (Bárdossy and Fodor 2001). Modern data products come with indicative measures of confidence (e.g. a combination of methods; Kaskela et al. 2019), or some actions to improve confidence (e.g. the usage of historical data; Stephens et al. 2011). McBreen et al. (2011) combined measurements of uncertainty with information about data quality to produce a confidence map for the seabed-habitat map of the UK. They took into account factors such as age, data density and data-collection techniques. Garlan et al. (2018) took confidence a step further by considering not only these previous factors, but also data consistency, map scale and positioning precision. In this light, it is no surprise that automated procedures, although helpful in assigning data quality, will always be far from perfect.
Uncertainties in 3D models are even more complicated than those of 2D maps, but can be incorporated into the final data products more easily, as a parameter that can be visualized separately. Interpolation (e.g. Kriging) and simulation (e.g. stochastic) techniques create 'modelling uncertainty' that can easily be calculated but may have many different components (Wellmann et al. 2011). Entropy, an overall measure of modelling uncertainty based on probability distributions and calculations (Shannon 1948), is increasingly provided as a model parameter (Stafleu et al. 2011;Lindsay et al. 2012;Wellmann and Regenauer-Lieb 2012;Hademenos et al. 2018).
Particularly challenging for both data and model uncertainties is their effective implementation in user-specific applications (e.g. aggregate-resource quantification, assessments of environmental status and habitat change). Intuitively, end users have confidence in colourful models, whether their reliability is credible or notoriously overrated (e.g. Cowan 2017). Communicating the logic and relevance of uncertainty assessments to end users will remain difficult until convincing evidence can be presented that risks can be reduced, or money saved by taking uncertainty into account during decision making. This paper presents a uniform step-by-step approach enabling consistent assessment of data uncertainty for a borehole dataset concerning the Quaternary of the Belgian Continental Shelf. Originally, the dataset was used for the creation of a voxel-based aggregate resource model (TILES consortium 2018a;Van Lancker et al. 2019). Here, we emphasize the methodology of the uncertainty assessment and the creation of confidence maps. By including data uncertainties in any 2D or 3D model, it is possible to visualize the influence of both, data-related as well as model-related, uncertainties and to compare calculations made using subsets of data meeting different quality criteria. These visualizations and comparisons can be queried in an associated decision-support tool and are key elements of data-gap analyses, a starting point for further optimization of the proposed workflow.
Study area
The Belgian part of the North Sea (BPNS), only 3455 km 2 and having a 65 km-long coastline, has the ideal size and borehole-data volume to test methodologies assessing data uncertainty within a composite marine geological dataset. Its shallow-marine environment reaches depths up to 45 m LAT (Lowest Astronomical Tide) and is dominated by several groups of mostly stable sand banks and associated swales (Van Cauwenberghe 1971;Lanckneus and De Moor 1991). Offshore, these large morphological entities are mostly covered with amalgamating sand waves and megaripples of different size. Nearshore, some isolated sand-wave patches occur.
In the southern Bight of the North Sea, sand waves show typically oscillatory migration at rates up to 10 m a −1 offshore and up to 20 m a −1 near the coast (Lanckneus et al. 2001;van Dijk and Kleinhans 2005).
Fine sand occurs predominantly in the nearshore, with extensive mud (clay and silt) fields towards the east, whilst medium to coarse sand is most abundant farther offshore (Verfaillie et al. 2006;Van Lancker et al. 2007). Gravel beds are limited to offshore swales, where the Quaternary cover is thinnest (Le Bot et al. 2005;Van Lancker et al. 2007). Paleogene clay crops out in this same area, where the Quaternary is absent (Mathys 2009). Information on seabed sediments and its subsurface is now available in a subsurface model of the entire Quaternary (Hademenos et al. 2018; TILES consortium 2018a) (Fig. 1).
In the Belgian marine realm, the number of activities affecting the seabed is substantial. Aquaculture, coastal protection, dredging and dumping, fisheries, military use, nature conservation, offshore energy, power and telecommunication cables, sand and gravel extraction and ports have different impacts to different depths, both separately and cumulatively (Compendium for Coast and Sea; Devriese et al. 2018). Various stakeholders are involved, including those related to shipping, tourism, cultural heritage and scientific research, all ensuring that tests on data uncertainty can be evaluated by decision makers that will profit directly from better tools for marine spatial planning.
Methodology
Assessing data uncertainty of geological datasets is complex and requires a tiered approach with a multiple-step workflow (Fig. 2). Following compilation of a standardized and harmonized marine subsurface dataset and corresponding metadata, data uncertainty was scored for horizontal positioning, sampling and vintage. Next, each uncertainty parameter was mapped individually along with measured average data density. This step was repeated for various uncertainty filters, each reducing the number of contributing data points but lowering the uncertainty and thus optimizing the maps for areas with a high-enough data density. Data uncertainty was incorporated into a voxel model for the subsurface, using ordinary kriging. Finally, all uncertainties were made available for querying in a decision support system (DSS; TILES consortium 2018b) so that different combinations of uncertainty could be visualized according to user needs (De Tré et al. 2018).
Geological datasets and their metadata
In the framework of the TILES project (Van Lancker et al. 2019), a lithology dataset was created containing geological descriptions of 1491 sediment cores, 348 grab samples and 30 drillings taken from the Belgian seabed (SediLITHO@SEA; . It complements the sediment-related datasets for grain-size parameters (SediSURF@SEA; Van Lancker et al. 2007) and full particle-size distribution curves (SediCURVE@SEA; Van Lancker et al. 2012). The assembled information merges contributions of science institutes, national geological surveys and universities with a common interest in marine sediments, as well as descriptions from project-based sampling campaigns commissioned by authorities and partly owned by private companies.
Lithological data and associated metadata were harmonized and standardized to facilitate the generation of seamless seabed maps (Van Lancker and van Heteren 2013) following internationally proposed or agreed guidelines (e.g. Geo-Seas for geological and geophysical data (van Heteren 2010), SeaDataNet for oceanographic data, and INSPIRE for spatial information). To ensure machine-readability, interoperability and compatibility of the data, lithological descriptions available as text were transferred to code. Main lithology was classified according to the Wentworth (1922) scheme; the full lithology including admixtures according to the Folk (1954) classification. Other lithological descriptors in the coded dataset are grain-size range with related mean and median; compositional percentages of clay, silt, mud (all fractions finer than sand), sand, gravel and shell matter; and minor constituents like organic matter and glauconite. Colours were converted into Munsell code listing hue, value and chroma. Details on the coding process are provided in .
Metadata were quality-controlled and completed for borehole identifier; coordinates with geodetic reference datum and type of navigation system; data originator; subcontractor and laboratory; ship or platform; borehole age (or vintage); penetration depth; sampling equipment; and analytical method. The date and time of sampling were traced back from on-board documents and included in Coordinated Universal Time (UTC), a common international standard. Seabed depth was converted to metres below mean sealevel (MSL), as the subsurface voxel model of the BPNS is vertically referenced to that datum (Hademenos et al. 2018).
Although not a perfectly uniform reference level, it serves the need for a unified system between Belgium and the Netherlands.
FAIR principles (findability, accessibility, interoperability and reusability) are guiding in creating datasets with complete metadata using controlled vocabularies and universal standards (developed by the Open Geospatial Consortium). The lithology dataset complies with the ISO 19115-1:2014 standard, which defines the schema required for describing information and services by means of metadata, and with the GeoSciML standard, a collaborative OGC-CGI product for geological data transfer. Models and digital maps made from the lithological data are visualized in web services (e.g. WGS, WMS).
Data uncertainty
By completing, harmonizing and standardizing borehole data and metadata, and by translating text fields into code, the assignment of uncertainty values to different attributes could be semi-automated in a spreadsheet. Uncertainty attributes were added to the dataset and associated qualitative or quantitative values were filled in either for entire boreholes or for each interval described. Scores between '1' and '5' were manually tabulated and cover the full range from very uncertain to very certain information. Lost or incomplete metadata were flagged with a '0'. Assigning scores was done on the basis of reviewed literature, estimated or measured errors, expert knowledge or the usage of external data from the environmental setting.
The uncertainty on the horizontal positioning of boreholes and grab samples concerns navigational accuracy (instrumental error), on-board and off-deck offsets (human error) and underwater drift of used gear (environmental error). The on-board offset is determined by the lengthways and crossways distances between the radio beacon or GPS receiver near the bridge and the location of instrument deployment on deck. This offset, a function of vessel orientation during drilling, is not always reported, incorporated or measured accurately. An extra offset should be included for the outside (safety) operating distances of instruments behind or beside the vessel. Underwater drift is an estimate between the deployment position of gear and its sampling position on the seabed. Lightweight gear is particularly susceptible. Heavy coring equipment can be positioned more accurately and its horizontal offset to the point of deployment is small. Ideally, all of these offsets should be reported and corrected for. It is impossible, however, to perfectly reconstruct offsets for vintage datasets. To obtain an indicative value, the uncertainty of horizontal positioning is estimated from maximum metric errors as (a) reported in literature on the accuracy of the navigation systems, (b) derived from image analysis of known vessels (for the on-board and off-deck offsets) and (c) calculated from underwater drift (a function of gear characteristics, local maximum current velocities and free-fall velocity in seawater).
Sampling uncertainty reflects the efficiency of each gear type in relation to the seabed substrate that was sampled, as derived from an extensive literature review supplemented by collaborative knowledge. Multiple sources were consulted to provide the best possible information on the advantages and disadvantages of each sampling device. Equipment includes surficial grab samplers (Hamon, Shipek, Van Veen) and subsurface sediment corers (box corer, flush corer, gravity corer, piston corer, vibrocorer and rotary drill). The lithological property used to determine the efficiency of sampling devices combines Wentworth (1922) and Folk (1954) characteristics. The BPNS substrate consists of various amounts of clay, silt, sand, gravel and shell hash (Houbolt 1968;Verfaillie et al. 2006;Kaskela et al. 2019); hence, sampling uncertainty is highly variable.
Assigning uncertainty to vintage or the timestamp of the sample required a dedicated approach and is not simply related to its age. Lithologies of older borehole samples, for example, may have been described with more care and in more detail than those of more recent samples. The time elapsed since sampling is more critical in areas with large and highly dynamic bedforms than in stable flat areas. In typical sandy shelf environments, erosion and deposition vary over time. Where bedforms, especially sand waves, are highly mobile and show large sedimentological differences from crest to trough (Lanckneus et al. 2001), they introduce uncertainty that impacts sample representativeness. In extreme cases, samples taken in the past may not even be suitable to map today's seabed. To estimate the degree of vintage uncertainty, sample locations were first classified according to a geomorphologically relevant benthic position index (BPI) (Fig. 3). Depending on the bathymetric position of a sample relative to the surrounding seabed, it was assigned to a crest, slope, flat or depression. These four categories were interpreted in terms of seabed stability. BPI was calculated following the approach of Verfaillie et al. (2009), but using a more recent 20 × 20 m digital terrain model available from Flemish Hydrography and an optimized, more detailed parameterization (Kint et al. 2019). The same bathymetry model was used as top surface of the voxel model of the Belgian Continental Shelf (Hademenos et al. 2018). In the context of uncertainty assessments, a fine-scale BPI turned out to be most meaningful as it accounts for the most relevant bedforms (sand waves).
Mapping uncertainty parameters
To highlight areas with the highest uncertainties, uncertainty parameters ( positioning, sampling and vintage) need to be mapped separately. Four steps are best taken: determination of the average data density to provide insight into how many data points contributed to each grid cell of a data product, providing information on lateral and depth variability; direct mapping of measured or categorized errors and accuracies; transformation of the measured values or categorical quality flags into uncertainty percentages, thus obtaining continuous variables suitable for 3D interpolation; and a selection of data subsets based on the uncertainty maps themselves. Repeating these steps is necessary to strike an optimal balance between map quality and coverage. The geographic information system QGIS, a Free and Open Source Software (FOSS) package that supports viewing, editing and analysis of geospatial data, served as a working platform.
Ordinary block kriging with logarithmic transformation was used as a 2D interpolation technique. A block size of 80 km, overlapping the BPNS, and a cell size of 200 m, corresponding to the horizontal grid size of the voxel model (see below), were chosen. A maximum search distance of 5000 m was needed to find 1 to 10 nearest data points. Neighbouring boreholes from the Netherlands, the UK and France were used to reduce edge effects along the BPNS border.
Simple subsets of the lithology dataset were selected to obtain data products with reduced data uncertainty while maintaining acceptable levels of data density so that map coverage was not reduced significantly. The number of boreholes and the average borehole density in the BPNS were quantified for each of the data selections. Within these constraints, examples involved removal of samples with a positioning error of more than 200 m and elimination of boreholes with a penetration depth less than 1 m, both equivalent to the TILES voxel dimensions of 200 × 200 × 1 m. Two-dimensional mapping is only done for positioning accuracy in metres and not for the quality flagging of sampling and vintage.
For the transformation of metric positioning errors into uncertainty percentages, minimum and maximum thresholds were set. Corresponding to acceptable positioning limits for the voxel model, the best accuracy of 0 m was translated into a value of 100%, whilst the worst accuracy, set at 1000 m (5 voxels) or more, was translated into a value of 0%. Intermediate accuracies were assigned a percentage value in between (e.g. 75% for 250 m accuracy).
Incorporating uncertainty percentages in 3D geological models
In the Netherlands, Sequential Indicator Simulation (SIS; Goovaerts 1997; Chiles and Delfiner 2012) has been used to obtain 100 statistically equally probable simulations of the distribution of lithological classes in subsurface voxel models (Stafleu et al. 2011). Hademenos et al. (2018) applied this method to the BPNS marine geological dataset, profiting from abundant seismic profiles to constrain bounding surfaces delineating the different lithostratigraphic units. They used co-kriging or block kriging for the geostatistical interpolation of lithology-and stratigraphy-related attributes. The grid resolution (i.e. the size of a single voxel), set to 200 × 200 × 1 m (x; y; z) and adopted in the present study, was chosen on the basis of data density, scale of the observed geological features, and computing time (speed of interpolation). The modelling provided three measures quantifying uncertainty: probabilities of each simulated lithological class (lithoclass), modelling-related uncertainty, and the kriging error in the modelled stratigraphy (Hademenos et al. 2018). Isatis® (Geovariances 2011), a geostatistical modelling software package, was used to perform the simulations.
Data uncertainty for positioning, sampling and vintage has been incorporated in the voxelization process. Three-dimensional modelling of data-uncertainty percentages was done using the ordinary kriging method. Although kriging is a method designed to interpolate measurements of natural phenomena, modelling has been applied successfully to datasets with non-natural parameters such as uncertainty (Silva and Costa 2016;Samsonova et al. 2018). As such, the TILES subsurface model now includes not only the lithoclass probabilities (for clay, silt, fine-medium-coarse sand and gravel) and the modelling-related uncertainty (entropy), but also the series of data uncertainties (for positioning, sampling and vintage).
Using the uncertainty assessment in the DSS
In principle, all uncertainties could be summed up in a standard way. However, combining all percentages of the uncertainty parameters into one overall uncertainty percentage is neither straightforward nor always valuable, as it masks the origin of the predominant uncertainty component. Additionally, data products serve multiple end users, and each of them may assign different weights to each uncertainty factor depending on the intended objective. Therefore, it was decided to make all uncertainties queryable in a custom-made decision support application that addresses the entire voxel model and allows exports as ASCII XYZ files.
In the DSS, policy makers and other end users have the possibility to produce suitability maps (plan view) and profile plots (crosssections) of a specific research location in the BPNS. Queries can be made on lithology (most likely lithoclass, associated probabilities and average percentages for all lithoclasses), lithostratigraphy, heterogeneity, data density, modelling-related uncertainty (entropy) and data uncertainties ( positioning, sampling and vintage). Key to an optimized, informed use is the translation of data-uncertainty percentages into understandable terminology (very unreliable to near perfect). The DSS is very versatile, offering the decision maker a lot of flexibility, enabling a comparison of scenarios as well as effects of applying quality filters in science-based decision making (Van Lancker et al. 2017De Tré et al. 2018).
Uncertainty parameterization
The main factor in horizontal positioning uncertainty, the navigation system (Table 1), was translated into a coded quality flagging as a function of spatial accuracy (Table 2). Boreholes with older navigational information from before the 1990s (903 boreholes) are slightly more common than recent boreholes with high positioning accuracy (739 boreholes). The other offset attributes are supplemented for this uncertainty assessment, raising the spatial accuracy to the voxel resolution limit of 200 m. These latter errors are not yet used for uncertainty calculation and visualization in the DSS.
Expert judgment was used to assign a relative scale for the sampling uncertainty that ranges from 1 (very uncertain) to 5 (very certain) to the various devices used (Table 3). The score of a device depends on the type of sediment being sampled, as derived from the data fields on main and secondary lithology (Table 4).
Quality flagging of vintage uncertainty was based on relating each sampling location to a fine-scale BPI (distinguishing crests, slopes, flats or broad swales, and local depressions) and translating these indices into scores from 1 (high seabed dynamics and low certainty) to 5 (low seabed dynamics and high certainty). The highest certainty corresponds to sand banks and swales, the lowest certainty to crests or slopes of migrating sand waves. Intermediate values were assigned to small depressions and intermediate flats.
Data selection for uncertainty mapping v. data density
To visualize the effects of data selections intended to improve the confidence of data products on overall quality and coverage, uncertainty maps were created. Figure 4 shows how data subsets with the most accurate ( positioning error σ ≤ 200 m) and Nautical navigation instruments that measure the vertical angle between a celestial body and the horizon. Using these historical devices an accuracy of 200 m could be achieved under clear weather conditions and up to 3 km in more challenging situations (Eaton 1972). Decca Navigator System (DNS) First-generation, hyperbolic radio-navigation system for ships. Radio signals are transmitted from fixed land-based navigational beacons (1 master station and 3 secondary or slave stations: red, green, purple) organized into chains and using phase comparison of low frequencies: 70-129 kHz (Blanchard 2014). DNS performance was dependent on weather and day/night regime. If time was recorded, instrumental error can be calculated (Decca Navigator Company 1976; Kubicki and Diesing 2006). Near the stations and under ideal conditions the accuracy was in the order of 25-50 m, decreasing to 200-250 m during summer nights or at great distances from the coast (during full daylight coverage), and to 700-1000 m during winter nights or under bad weather conditions (Eaton 1972;Heyse 1975;Last 1992;Fisher 1993;Specht et al. 2016). Decca Hi-Fix/6 Position Fixing System A second-generation radio-navigation system emerging in the 1960s and 1970s with booming offshore exploration for oil and gas in the North Sea works on the same basic principle as the DNS. A given chain comprises 6 stations (1 master and up to 5 secondary radio beacons) and employs radiated frequencies in the band 1.6-5 MHz (Powell 2015). By using a higher radio frequency, the accuracy improved, yet at the expense of the range. The Decca Hi-fix/6 Positioning Fixing System provided an accuracy up to 10-15 m during the day and at best 40-50 m by night (Bradley 1971;Eaton 1972;Hovland and Indreeide 1980). Sea-fix was a derivative of Hi-fix and was similar in its operating principles.
Trisponder Positioning System (TPS)
A line-of-sight range-range radar-positioning system operating in the X-band range of frequencies with an accuracy of 5-7 m within a line-of-sight range of 20 km (Eaton 1972;Mortimer 1972).
Racal Hyperfix
Decca became part of Racal, which introduced the third-generation radio-navigation systems. A land-based short-range radionavigation system operating in the frequency band 1.6-3.4 MHz in three ways: in hyperbolic (see DNS), circular (i.e. a rangerange operation with a minimum of two shore-based stations) and combined mode. Although the accuracy remained in the same order, 10 m by day (Gerwick 2007) and 40-50 m by night (Gillissen 1990;Gillissen and Elema 1996), it offered a better range and was designed as a highly flexible system, meeting the needs of a wide variety of users. The positioning error by the Baltic Sea Chain varied between 5 and 20 m. Syledis A medium-range radio-navigation system employing a spread spectrum pulse-correlation technique, which allows it to recover accurate range information (5-10 m) from relatively long, low-power modulated pulses (Janes et al. 1985;Denduyver and Van Cauwenberghe 1994;Specht et al. 2016).
Global Positioning System (GPS)
A space-based radio-navigation system with up to 31 medium Earth-orbiting satellites providing location and time information from the late 1980s onwards. It replaced the Decca radio-navigation systems. An initial accuracy of 20 m was achieved with an increasing number of satellites (Husti and Plugers 1988
Incorporating data uncertainty into 3D geological models
Another subset of the lithology dataset was selected by Hademenos et al. (2018) based on the availability of seismic data. Figure 5 visualizes two types of data uncertainty impacting the subsurface voxel model of the BPNS. Overall, the positioning accuracy is very high. Only far offshore and near the French coast is the accuracy significantly lower. Nearshore and around several offshore sand A box that, owing to its weight, enters the seabed through its free fall and is shut by sliding the cutting edge of the spade-closing lever arm, up to the point where the spade completely covers the bottom of the box (e.g. Reineck box corer). An undisturbed block sample with little distortion is retrieved. For those surface samples varying strongly in grain size and porosity (Santschi et al. 2001), a loss of sediments is unavoidable. Box corers are generally used for sampling cohesive clay or soft sandy sediments (Rumohr 1999;Taft and Jones 2001;IAEA 2003). In silty sediments, a box corer might penetrate beyond its own size (Rumohr 1999). Strong currents may cause the box to penetrate at an angle or to be pulled from the sediment in an upright manner, resulting in a disturbed sample. Unsuitable for gravel sampling. Geodoff corer The Geodoff can be used as a vibrocorer (see below) taking little-disturbed sediment samples up to 7 m long, or as an airlift counter-flush system collecting completely mixed sediment samples up to 12 m long (Oele et al. 1983), typically in depth intervals of 0.5 or 1 m. Grab sampler Jaws or buckets shut upon impact on the seabed. Standard grabs (variable weights) are suitable for sampling clayey to sandy sediments (Rumohr 1999). For hard and sandy seabed surfaces long-armed grabs are recommended (Kingston 1988). Less efficient for gravel because of a possible outwash of fine material during retrieval, especially when coarse particles prevent the jaws from shutting completely. Like box corers, they tend to land unevenly on the bottom in rough waters, resulting in a smaller or even no (only water) sample (Smith and McIntyre 1954). Hamon grabs are suitable for a large range of sediment substrates, especially unconsolidated and poorly-sorted sediments, i.e. coarse gravelly sediments and gravels (Oele et al. 1983;Guerra and Freitas 2012;Eleftheriou and Moore 2013). Shipek grabs are erratic in clayey and silty environments and disturbance is considerable (Taft and Jones 2001;CEFAS 2002). The Van Veen grab is a sampling technique for finegrained to sandy firm and soft material, and unsuitable for sediments coarser than medium sand (de Groot et al. 1982;Rumohr 1999;CEFAS 2002;IAEA 2003;Guerra and Freitas 2012). Gravity corer A simple, open sampling tube with a weight of 350 to 1000 kg at the top, which falls freely onto the seabed. Restricted to soft and fine-grained unconsolidated sediments; mud and (firm) clayey seabed surfaces (Oele et al. 1983;IAEA 2003). Unsuitable for sandy or gravelly sediments. Problems arise with sands becoming firmer upon impact by force, resulting in minimal penetration or even blockage when material is too coarse. Emery and Dietz (1941), Hvorslev and Stetson (1946), Emery and Hülsemann (1964) and Lebel et al. (1982) noted a considerable 'shortening' of the retrieved sediment column in open-barrel cores. The 'coupe Gilson' is a historical, small-scale gravitational coring device. Piston corer A gravity corer with an additional internal piston, which is positioned just above the water-sediment interface. A 'counterweight' ensures that the core barrel penetrates the sediment through a fall from a fixed height above the seabed, so that the cored material cannot flow out of the long and heavy tube. Same issues as with the gravity corer. Common vertical disturbances by fine-grained flow-ins (Hvorslev and Stetson 1946;Ericson and Wollin 1953;Kullenberg 1955;Richards 1961;Ross and Riedel 1967;Chmelik et al. 1968;McCoy and von Herzen 1971;Stow and Aksu 1978;Buckley et al. 1994). Pulse drill A cased drilling system in which the bailer moves up and down collecting the loose material. The pulse, a tube with cutting edge and horizontal flap, is attached to a winch and removes the sediment. A valve mechanism ensures that the bored material does not fall back into the borehole when the bailer is raised.
Rotary drill
Drill usable in soft sediments as well as rock (clay, sand, claystone, sandstone, chalk, marl). The sample is taken by means of a destructive drill head that penetrates the sediment or rock by rotational force and brought up by drilling fluid. It is 'flushed out' as a mixed and disturbed sediment sample. The different fractions are separated, and the water is pumped back into the borehole for re-use. A guided pneumatic hammer can be used to take undisturbed and continuous samples at any depth after each drilling phase. Vibrocorer A vibrocorer (e.g. Geodoff I, Geodoff MK II, Trilflip Zenkovitch) is equipped with a vibrator, which is driven either electrically or by compressed air (vibrohammer) (Oele et al. 1983). The vibration force liquefies the substrate at the core cutter, enabling the vibrocorer to penetrate the seabed, aided further by the weight of the vibrator. Typically, vibrocorers are used in firm sandy sediments and gravels. Relatively undisturbed samples are taken, although soft sediment deformation may result from the liquefaction.
banks, sampling uncertainty is limited. In most areas further offshore, high-quality sampling is missing. In the well-sampled windfarm area near the border with the Netherlands, data selection (on the basis of quality criteria) or data weighting can be solutions to optimize the model. Data gaps (white patches) represent here areas for which an uncertainty parameter cannot be modelled.
Integration of data uncertainty in the DSS Figure 6 illustrates the sandbank architecture of the well-investigated Middelkerke Bank (De Moor and Lanckneus 1993; Heyse and De Moor 1996), west of the port of Zeebrugge. Two parallel transects are drawn following a sequence of boreholes. The respective crosssections show a fine-grained sand bank with medium sand on its top and scattered at depth, and a clayey base layer. Positioning data are near perfect. The sampling uncertainty differs between the two cross-sections. Cross-section 1 is based on little-disturbed vibrocores, whilst cross-section 2 relies on mixed borehole samples obtained by counter-flushing. Vintage uncertainty is much higher, reflecting the presence and potential migration of dynamic sand waves on the crest and slope of the sand bank. Overall, the voxel modelling results become increasingly unreliable where the mean borehole penetration is reached and exceeded.
Discussion
Towards a flexible approach of data-uncertainty quantification and visualization Any parameter of geological information stored in a database can be a source of uncertainty (Bárdossy and Fodor 2001). Whether it is the precise tracing of sample locations from the past or reconstructing which definitions of sand-size fractions were used in legacy borehole descriptions, correcting for all errors will generally be impossible. Crucial metadata may be missing and known sources of error, such as marine (weather) conditions, may have had nonsystematic effects. Even universally automated corrections for anomalies in modern-day borehole data and metadata, made on board at the time of sampling, will be imperfect. As not all sources of error will impact the uncertainty of a data product equally, and because the degree of impact also differs per end user, the selection of relevant data uncertainties in a DSS should be adaptable to best fit decision-making, mapping purpose or research objective. For instance, although the accuracy of the navigation systems is an order of magnitude better than the resolution of the current 200 m voxel model, it will not be a limiting factor in quantifying the spatial variability of aggregate resources (e.g. Hademenos et al. 2018). However, positional anomalies will become more important when assessing local sediment or habitat changes using models with much smaller cell size (e.g. Cooper et al. 2007;Montereale-Gavazzi et al. 2018). Ideally, an uncertainty framework should be defined and regularly updated, focusing on minimum and maximum threshold of acceptability. Aside from data optimization and informed data elimination when needed, assigning uncertainty-based weights per data point or borehole interval will be an essential future endeavour. By implementing data weighting in the interpolation process, the vast majority of data can contribute to each data product, with weight dependent on data quality (low-quality data will receive smaller weights, whilst high-quality data will obtain more decisive weights). Weighting is of particular interest when combining visual borehole descriptions and laboratory measurements, which both have their advantages and disadvantages (van Heteren and Van Lancker 2015). Striking an optimal balance between data reduction and data weighting will be an iterative process aimed at optimal data coverage and minimal data uncertainty.
In this paper, uncertainties were quantified on the field acquisition of lithology data, not on the quality of lithological descriptions, laboratory measurements or sediment-classification systems. A useful next step in data-uncertainty quantification concerns automated quality flagging of these descriptions for each borehole interval. A possible approach, implemented for the dataset of the Dutch subsurface, links quality to the number of key features described. Quality flags for laboratory results such as particle-size and loss-on-ignition analyses can be based on the suitability of devices used to measure different sediment types and fractions. Similar to sampling gear, each analytical technique (laser: Coulter counter, Malvern Instruments; X-ray: sedigraphs; sieving; and settling tubes) has a unique set of benefits and drawbacks. Misalignment of sediment-classification systems or granularities, both between datasets and in relation to intended end use, also needs to be tackled. Apart from Wentworth (1922) and Folk (1954), the most common classification schemes in geology, some original data entries followed industrial norms or national standards (such as BSI (British Standards Institution), NEN (NEderlandse Norm) and ISO (International Organization for Standardization)). Harmonization and standardization efforts introduce additional data uncertainty that should be quantified.
Uncertainty products meeting present-day user needs
As multiple data products can be generated from the same dataset by including data uncertainty, clear communication on the map or model making and on implemented thresholds of data uncertainty is indispensable. End users, and particularly decision makers, need a tool that is both intuitive and well-documented. Summing up all uncertainty percentages is the most straightforward, but lacks the flexibility needed by each user to generate output matching their purpose and to trace back the predominant uncertainty component. To verify or critically examine the DSS outcome, end users can make use of a user-friendly national data portal (TILES consortium 2018c) that holds for each borehole or sample: (a) original documents with lithological descriptions and metadata; (b) laboratory results with grain-size data and information on composition; (c) standardized and coded sheets from the originals with added data-quality flags indicating the level of uncertainty on location, gear and vintage; and (d) photographic material of cores and samples. As upcoming updates of standard GIS software will include the possibility of analysing voxel models, our voxel-based uncertainty approach can soon be adopted by offshore engineers and environmental scientists.
Marine habitat mappers are an important user group that will profit from quantified uncertainty assessments. They use sediment type of the upper voxel in the subsurface model for the BPNS (voxels representing the upper 1 m of the seabed) in the context of the European MSFD, which requires monitoring of environmental status and habitat change over a six-yearly evaluation cycle to achieve a good environmental status (GES; e.g. Korpinen et al. 2013). The assessed broad-scale habitats relate directly to the distribution of mud, sand, coarse and mixed substrates (e.g. 1:250 000 seabed substrate map of Europe; European Commission 2019). For Belgian waters, no transitions are allowed from one habitat into another (Belgian State 2012), and ongoing seabed-change assessments focus primarily on this requirement . The incorporation of data uncertainty assists in distinguishing 'real' changes of sediment type compared to apparent or statistically insignificant changes caused by positioning-, sampling-, description-and interpretation-related inconsistencies or other sources of error. In order to ensure the protection of marine biodiversity in gravel-rich areas (Houziaux et al. 2008;Montereale-Gavazzi et al. 2018), it is particularly important to be aware of inadequate or insufficient sampling of the gravel beds.
Engineers stand to profit particularly from the quantification of uncertainty. The design of wind turbine foundations, cable and pipeline infrastructure and radar masts, for example, requires reliable, well-constrained values of geological and geotechnical properties (Hoek 1999;Gkoumas 2010) and thus careful data selection or weighting. When selecting stable repository sites for dumping of dredged material or identifying viable sand and gravel reserves, it is necessary to minimize geological risk (e.g. Hack et al. 2006). Kruiver et al. (2017) showed how a voxel model of the shallow subsurface above the Groningen gas field could be used to provide information for seismic hazard and risk analysis. In attributing the voxel model with shear wave velocity, the uncertainty of the velocity measurements was taken into account. In addition, efforts were made to mitigate the recognized data and model uncertainty. The pioneering study highlights the added value of novel uncertainty assessments that account for geological variability and data uncertainty. Such quantification requires close co-operation between data holders, geologists and geotechnical engineers, combining expert subsurface knowledge and a practical perspective.
Finally, any geospatial analyst, marine or terrestrial, benefits from combining newly created mapping products (2D or 3D) with confidence assessments. The relevance of instrumentation and gear accuracy and precision has long been recognized in satellite remote sensing (e.g. confidence maps in Torbick et al. 2016;Martos et al. 2017), with significant advances being made on the quantification of uncertainty factors, jointly forming uncertainty budgets (Ruddick et al. 2019). Uncertainty flags and percentages are equally suited to combined uncertainty analyses in budgets, and thus show great potential in an increasingly rational future use of marine subsurface datasets.
Conclusion
Harmonized, standardized and coded borehole data and metadata make it possible to automate the assignment of uncertainty values to relevant attributes. Quality flags for positioning, sampling and vintage can easily be converted into corresponding uncertainty percentages meeting the input requirements of existing 3D subsurface models.
Application of uncertainty filters reduces data density, impacting the degree of spatial coverage. Optimization of maps and models is only possible where data density is high enough. Any particular density reduction is not equally detrimental to all intended uses. To balance coverage and map quality, four steps are best taken in an iterative process: determination of the average data density; direct mapping of data quality; transformation of quality information into uncertainty percentages suitable for 3D interpolation; and optimizing the selection of data subsets on the basis of uncertainty maps.
A subsurface model with associated data uncertainties is most powerful when embedded in a decision support system (DSS) with understandable terminology, enabling policy makers and other end users to compare scenarios, visualize overall confidence and provide feedback needed to finetune the model. Summing all uncertainty percentages, although straightforward, is not recommended as it precludes end users from generating dedicated output and from identifying the predominant uncertainty component.
Marine habitat mappers are an important user group that will profit from an intuitive and well-documented decision tool. In Marine Strategy Framework Directive (MSFD)-related monitoring of environmental status and habitat change, uncertainty quantification may help establish the statistical significance of observed seabed-sediment changes. Marine engineers can use data-uncertainty filters to optimize construction and infrastructure designs, and to reduce risk. Reproducible confidence maps of the presented uncertainty indicators will support geospatial analysts in their interpretative findings.
Including the full suite of data uncertainties in subsurface models is a work in progress. Loss of information can be minimized by weighting rather than eliminating data, which is of particular interest when working with visual borehole descriptions as well as laboratory measurements. Automated quality flagging of such uncertainty components is another future challenge. | 9,986.2 | 2020-10-07T00:00:00.000 | [
"Geology",
"Environmental Science"
] |
A Simulation Tool for Interference Analysis in MIMO Wavelength Division LiFi Indoor Networks
In this paper we propose a novel simulation tool for indoor Light Fidelity (LiFi) networks based on Wavelength Division (WD) with real optical filters characteristics. Firstly we present the measured passband spectra of optical filters, along with a system model validation relying on such acquired spectra. Secondly, we propose a simulation tool developed to extend the work of adaptive wavelength division multiple access to the multiple-input multiple-output case, suitable for conducting Monte Carlo simulations. Then, we validate such tool by considering an example scenario with fixed positions and orientations, including increasing number of users in an indoor LiFi network using WD. In order to better clarify the interference contributions to the quality of service provided, we consider the first user as reference, and evaluate how the presence of progressively higher number of users in its vicinity impacts the interference that the main user is experiencing. We then analyse how the signal-to-interference-plus-noise ratio, interference-to-noise ratio and signal-to-interference ratio figures of the main user change depending on how many interfering users are included in the considered scenario.
I. INTRODUCTION
As reported, the global demand for high speed wireless data keeps increasing and forecasts show that there will be 5.7 billions of connected mobile devices by 2023 [1].The radio frequency (RF) technology is commonly used to satisfy these demands, but it may not be sufficient anymore in the near future.On the other hand, Optical Wireless Communications (OWC) and Light Fidelity (LiFi) [2], which is its networked counterpart, are now mature enough technologies to be seen as a viable complement to RF.This is because part of the data traffic can be securely offloaded to LiFi networks, which uses the visible light and near infrared spectra and thus avoids all interference with common RF technologies [3].Thanks to the improved capacity introduced, and the absence of interference with RF, LiFi also gives space to other technologies like the Internet of Things (IoT) [4]- [10] which require an even higher number of connected devices.In OWC a light source is used to transmit a signal in the optical domain, by modulating its intensity according to the employed modulation scheme.At the receiver side, a photodetector (PD) is acting as a transducer from the optical to the electrical domain.In fact, these devices (i.e.PIN photodiodes, avalanche photodiodes, single photon avalanche diodes) output a current flow which is proportional to the amount of light impinging on the PD.This current can then be amplified and converted in voltage with a transimpedance amplifier (TIA), and the original signal can be reconstructed and demodulated [11].
In a system based on wavelength division (WD), a thinfilm optical filter is mounted in front of the PD to improve channel separation.In fact, Red-Green-Blue-Amber (RGBA) Light Emitting Diodes (LEDs) are used in WD systems in place of regular phosphor-coated white lighting fixtures.This has two notable advantages: firstly, as the use of phosphors introduces a response delay in regular lighting fixtures, RGBA LEDs can achieve a much higher data rate.Secondly, since each individual LED inside the RGBA fixture can be addressed individually, they form independent parallel channels.Combining both these advantages, WD systems have the potential of achieving very high transmission speeds with respect to regular single colour networks [12].One of the challenges of WD systems comes with user mobility.The work in [13] tackles user mobility in LiFi.In fact, the authors have conducted an experiment to sample the orientation of real users, and the polar angle (the inclination with respect to the floor) at which their mobile devices were being held.This data has then been well fitted with a Gaussian distribution for the polar angle and a uniform distribution for the orientation, in the case of walking users.As a direct consequence, it is unlikely that the light from a fixture acting as an Access Point (AP) in a LiFi network will always arrive at the PD of a mobile device with an angle of incidence (AoI) of 0 • .Thus, the spectral characteristic (i.e., the transmissivity curve) of the optical filter will suffer from two concurrent effects as the AoI increases: the central wavelength (CWL) of the optical filter will shift towards shorter wavelengths, and the shape of the filter's passband will degrade, lowering both the width and the peak of the transmissivity curve.The work in [14] describes both effects.However, a mathematical equation is only given for the first effect, the passband shift.It allows, given a starting CWL for the optical filter's passband, to calculate what the shifted CWL will be at any given AoI.With this framework in mind, the work in [15] proposes a new scheduling scheme (i.e. a resource allocation scheme) that is able to limit the interference in an indoor WD-based LiFi network while accounting for and adapting to user mobility.This is referred to as Adaptive wavelength division multiple access (WDMA), and is compared to a fixed scheduling scheme (thereby referred to as Classic WDMA).Additionally, the work in [16] provides a mathematical framework and system design insights for such networks.In [17], the authors present a simulation tool for a WiFi/LiFi hybrid network solution.However, a TDMA multiuser access implementation is used for LiFi, which considerably reduces network performances with respect to WDMA because the time resource is shared among all users.To the best of the authors' knowledge, the contributions of this paper are hereby summarised.Firstly, we characterise 4 optical filters at increasing AoIs, to investigate their spectral degradation.Then we validate the model used throughout this work by comparing theoretical and measured power reception.Secondly, we extend the scheduling scheme in [15] to the Multiple-Input Multiple-Output (MIMO) scenario, and develop a simulation tool that can be used to carry out estimations with the Monte Carlo method.Network data rate, both aggregate and per user, and connection loss probability for a chosen number of users can be estimated in this way.This tool employs the measured optical filters spectra for increased accuracy with respect to ideal optical filters.Finally, we analyse the interference of a single user in a specific scenario with an increasing number of interfering users, until the interference is maximised (i.e.all APs in the field of view of the main user are transmitting to other users).
The rest of the paper is organised as follows: in Section II, the related system model is introduced.In Section III, the LiFi indoor scenario adopted is discussed.In Section IV, all results are presented and discussed.Finally, conclusions are drawn in Section V.
II. SYSTEM MODEL
We use the widely adopted equation proposed in [18] for the free-space optical path loss of a Lambertian source: where m is the Lambertian emission order of the emitted light beam, d is the Euclidean distance between the transmitter and the receiver, ϕ and ψ are the transmitter emission angle and receiver AoI respectively, and A(ψ) eff is the effective active area of the receiver.It is possible to define the Lambertian emission order as: where ϕ 1/2 is the half emission angle of the transmitter.The receiver effective area can be defined as: where A det is the detector area, G OC is the gain of an optical concentrator and rect(ψ) is a rectangular function that allows to factor the effect of its Field of View (FOV), defined as: where ψ FOV is the concentrator FOV.At this point, it is possible to write the equation for the average optical power that hits the external layer of the optical filter put in front of the receiver: where P opt tx is the average transmitted optical power.From here, we formulate the expression for electrical current generated after the PD as: where S tx (λ) is the normalised transmitter emission spectrum, T OF (λ) is the transmission characteristic of the optical filter before the PD, R(λ) is the responsivity of the PD, and α and β are respectively the lower and upper limit of the considered wavelength range (set as α = 400 nm and β = 700 nm in this paper).It also has to be noted that each PD is simultaneously hit by the light beams coming from every active transmitter in range other than the one delivering the useful signal, resulting in an interference component in the generated electrical signal.If I sig is the average electrical current generated by the desired signal component, I int the one generated from the interference, and I noise the one generated as a result of noise contributions (thermal and background light), it is possible to write the expression of the electrical signal-to-interferenceplus-noise ratio (SINR) measured after the TIA: where G TIA is the gain of the TIA.It is then possible to define the signal-to-noise ratio (SNR), the interference-to-noise ratio (INR) and the signal-to-interference ratio (SIR) in the same manner: and Finally, as shown in [14], the shifted CWL of an optical filter, λ OF (ψ), can be formulated as: where ψ is the AoI of the impinging light, λ OF (ψ = 0) is the CWL of the considered transmission spectrum when the light hits the receiver perpendicularly, and n e is the effective refractive index of the specific optical filter employed.
A. Extension to the MIMO case
In this work we refer to the scheduling scheme presented in [15], in which each user is assigned 1 WD channel based on highest SNR.This proved to substantially increase the network's average data rate with respect to a fixed allocation scheme, while also lowering all users' connection loss probability.However, while these benefits cannot be denied, large portions of the network's capabilities still remain untapped especially when user density is low.This is a fundamental limitation that can be overcome by adding an additional step after every user has been served with at least 1 channel.In this step, channels continue to be allocated with highest SNR, until either every user has been served with the maximum amount of channels allowed by their front end (4 in the case of RGBA), or every transmitter in each user's sight has already been allocated.In this way the network will be able to better serve users with more favourable conditions, while still being able to provide basic services to users with low spatial diversity between them.In this work, we have developed a simulation tool (based on the provided system model) able to estimate the results in terms of network maximum, average and per user data rate, and connection loss probability.This tool can simulate a scenario with variable room dimensions and AP arrangements, and is based on the Monte Carlo method.Furthermore, we use this tool to examine a particular case (described in the next subsection) with an increasing number of users.
B. Example scenario
This paper considers a square room that is 6 m long, 6 m wide, and 3 m high.fitted with 25 RGBA LiFi APs on the ceiling.Thus, each AP is made up of 4 coloured LEDs, each emitting up to 3.2 W of optical power.In this way, considering a common efficiency value for LEDs η = 90 lm/W, the room also benefits from adequate illumination for most tasks.The room size in this example has been chosen to represent an arbitrarily large space accommodating a high number of users.The number of APs has been chosen so that they are close enough to provide uniform illumination and replicate common situations regarding offices and meeting rooms.If other arrangements are desirable, even with rectangular rooms rather than square, all these parameters can be changed in the simulator.In this room, 5 concurrent users are introduced gradually and placed at specific positions and orientations.Additionally, their mobile devices are being held at specific inclinations.Table I and Fig. 1 give a summary of the geometric positioning for each user.These positions and orientations have been specifically chosen to represent an indoor scenario that is both likely to happen in a real use case, and compatible with the user mobility statistics reported in [13].User 1 will be referred to as the "main" user, and we will observe how the interference experienced by said user grows and evolves as the room is filled with other users (n. 2 -5), referred to as "interfering" users.Each mobile device is equipped with 4 independent LiFi receivers, each of those fitted with a differently coloured optical filter to comply with the WD paradigm.
As more users are added to the room, the main user will experience interference from all APs in his field of view.In addition, he will experience cross-talk interference from the channels assigned to himself, due to spectral leakage.Relevant system parameters are summarised in Table II.
A. Optical Filters characterisation
In this section we present the results of the experimental characterisation of 4 optical filters made by Thorlabs, whose model are: Blue, FB450-40; Green, FB550-40; Amber, FB590-10; Red, FB650-40.The objective is to investigate their transmission characteristics with an increasing AoI of the impinging light.Such curves are also used in the simulation tool, and were obtained experimentally with a spectrometer, which captures the wavelength-dependent distribution of emitted power from an optical source (i.e. its emissivity).Additionally, once the emissivity has been acquired, it is possible to obtain the transmissivity of an optical filter interposed between the source and the spectrometer, by means of a subtraction algorithm included in the spectrometer software.The spectral characteristics have been acquired in a dark environment (to avoid interference from background illumination) at increasing AoI, starting at 0 • up to 40 • , with an acquisition made every 5 • .In this experiment, only the optical filters were rotated of the intended quantity while leaving the spectrometer at an AoI = 0 • .In this way, the optical path loss remains unchanged throughout the whole experiment.Fig. 2 shows the results of the characterisation, plotted with the relevant LED normalised spectrum as a reference.The most important thing to note is that, especially at higher AoIs, there is a nonnegligible difference both with respect of an ideal rectangular shape usually assumed for optical filters, but also of curves found in datasheets.This means that, wherever possible, a similar characterisation of the employed optical filters should be conducted even in works entirely based on simulations, so that the end results are closer to reality.This is still an open challenge, as past literature gives precise indications only regarding what is to be expected in terms of CWL shift.On the contrary, given an optical filter, it is not easy to foresee what the spectral degradation will be with respect to the AoI, as detailed information on the internal structure of the individual filter would be required.Obtaining such knowledge about optical filters is a challenge for two reasons.
Firstly, details about construction of the optical filters are part of the manufacturers' intellectual property, and are not easily accessible in normal circumstances.Secondly, a spectral degradation model derived in this way would only be specific to the particular optical filter model considered, and only if a large number of samples of the same optical filter are tested in order to rule out the impact of high variability between samples.In the next section of this work, the measured transmissivity curves have been applied to all users.In order to validate the system model adopted in the simulations, we have used a similar experimental setup, using a power meter instead of a spectrometer.A RGBA LED optical source is Fig. 2. Optical filters comparison.Each row contains ideal, datasheet and measured spectra for each colour.At higher AoIs, the optical filters' characteristics show a shift towards shorter wavelengths.However, only measured spectra also report substantial spectral degradation.Conversely, ideal and datasheet curves have no reliable information on this effect.
placed at a fixed distance from the power meter, and an optical filter is interposed between the two.In order to rule out crosstalk between colours due to spectral leakage, only one of the coloured LEDs is active at any given time.The optical filter (of the same colour as the LED source) can be rotated as before, and the received power has been acquired at AoI = 0 • , 10 • , 20 • and 30 • .These acquisitions were made in a dark environment, and have been compared with the theoretical values, using the same measured optical filters characteristics.
A close match between experimental and theoretical data can be appreciated in Fig. 3 in the next page.It is important to note that, given all earlier considerations, using ideal characteristics for optical filters (or even the ones taken from the datasheet) would have yielded inaccurate results compared with actual devices.
B. Interference Analysis
This analysis is carried out with respect to a specific scenario as outlined in Section IIIb, with users taking a fixed position and orientation in space, which is compatible with user mobility statistics.It should be noted that observing a single fixed scenario does not provide enough insight to draw general conclusions.Nonetheless, we carry it out as a validation for the tool detailed in Section IIIa, so that future works can be conducted with confidence that its results are trustworthy.Given the aforementioned location-related and Fig. 3. Measured VS Theoretical received power.A close match between experimental and theoretical is obtained by using measured optical filters spectra, which also account for the spectral degradation at higher AoIs.geometric parameters for every user, the adaptive allocation procedure yields the following result.User 1 (the main user) is allocated the Blue and Amber channels from AP 2, while Green and Red from AP 7. User 2 is allocated the Blue and Amber channels from AP 1, and Green and Red from AP 2. User 3 is allocated all channels from AP 3. User 4 is allocated the Blue and Amber channels from AP 7, and Green and Red channels from AP 1. Finally, user 5 is allocated all channels from AP 8.It is important to note that in this configuration, this allocation for the main user does not change as more users are introduced.For this scenario, the SINR, INR and SIR values for each receiver of the main user, with increasing numbers of interfering users, are shown in Fig. 4, 5 and 6 respectively.In Fig. 4, we can see how the contribution to the interference introduced by the first interfering user causes a substantial drop in SINR with respect to a scenario in which no other users than the main one are present.Notably, the Amber receiver is the most robust to these changes, with SINR dropping by only 10.66 dB when the interference is at its maximum.This is because the pass band spectrum for the Amber optical filter is only 10 nm wide, and therefore more robust to interference.This is opposed to a 26.66 dB and 26.77 dB drop for the Blue and Green receivers respectively, and 32.06 dB drop for the Red receiver.Fig. 5 in the next page confirms these findings by showing how the interference grows along with the presence of interfering users.Here we can note how the Blue and Red receivers have a negative INR without other users, meaning that there is no cross-talk interference coming from other channels allocated to the main user.Conversely, both the Green and Amber receivers have positive INR even with no interfering users, because in both cases the other three active channels allocated to the main user are providing cross-talk interference.Finally, Fig. 6 in the next page shows how the signal becomes progressively weaker when compared to the interference (which, with 4 interfering users, is at its maximum for the main user).It can be inferred that for this particular case, the first interfering user (that is, user 2) is introducing the most extensive contribution to the main user's interference.This is mainly due to its position and orientation, as AP 2 (which is partly allocated to user 2), was a viable allocating option for user 1 as well.Other users are introducing a much lower contribution to the interference figure for the main user as the APs allocated to them, while still in the range of user 1, would not constitute a good allocation option for that user because of distance and AoI.Despite this, compared to using a fixed scheme based only on AP closest to the user (such as the one compared in [15]), the users have the flexibility of using more than one AP thus maximising opportunities to employ all 4 channels available to them.This is especially important for users located closer to the edge of a cell.
V. CONCLUSIONS
In this paper, we have presented the results from a characterisation of 4 optical filters, with the objective of investigating the spectral degradation generated by the light impinging at an AoI different than 0 • .Results show that such degradation becomes more significant as the AoI increases.Moreover, the spectral characteristics from all filters result substantially different from the ideal rectangular shape that is usually assumed in literature.For these particular filters there is also a remarkable difference with the curves given in datasheets, and additionally, no spectral degradation information is made available.The only way of obtaining such information is by conducting a full characterisation of the specific models that are being considered, and this makes it challenging to readily obtain accurate and trustworthy results when investigating LiFi Networks that use WDMA in a user mobility context, as the AoI of the impinging light can vary greatly.Exploring the spectral degradation for other models and other manufacturers of optical filters would be an interesting option for future works, as this particular aspect can greatly impact the performances of such systems.We have also developed and validated a simulation framework that is able to leverage the measured optical filters spectra to provide substantially more accurate results with respect to using ideal curves, or ones taken from datasheets.This tool is based on an existing scheduling scheme, which we have extended to the MIMO scenario.In fact, it was previously unable to allocate more than 1 channel per user, leaving many untapped resources in the network.To validate this tool, we have also conducted an interference analysis of a WD-based system, where 5 concurrent users were gradually added to the room and the interference for the first one monitored by means of SINR, INR and SIR curves.In future works, this framework will be used within the Monte Carlo method to evaluate network performances with an increasing numbers of users.
Fig. 1 .
Fig. 1.Indoor LiFi Network Scenario.The room dimensions are 6x6x3 m.It includes 5 users, where user 1 (main) is a blue square while the other users (interfering) are depicted as gray circles.Each user has a solid black arrow showing the orientation of the device they are holding, while the dashed and coloured arrows (blue and grey) indicate the users' orientation in space.While walking, users are facing their devices in order to see the screens.The receivers in turn, being fixed on the screen, are facing the opposite direction.
Fig. 4 .
Fig. 4. SINR of the main user for increasing interfering users.
Fig. 5 .
Fig. 5. INR of the main user for increasing interfering users.
Fig. 6 .
Fig.6.SIR of the main user for increasing interfering users. | 5,325.6 | 2023-05-28T00:00:00.000 | [
"Engineering",
"Physics",
"Computer Science"
] |
Proteomic Signatures Reveal Differences in Stress Response, Antioxidant Defense and Proteasomal Activity in Fertile Men with High Seminal ROS Levels
Elevated levels of reactive oxygen species (ROS) are a major cause of male infertility. However, some men with high seminal ROS levels are still fertile. The main objective of this study was to understand the molecular mechanism(s) responsible for the preservation of fertility in those men. Semen samples from fertile men were divided into two groups: control (n = 10, ROS < 102.2 RLU/s/106 sperm) and ROS+ (n = 10, ROS > 102.2 RLU/s/106 sperm). Proteomic analysis of seminal plasma and spermatozoa was used to identify the differentially expressed proteins (DEPs) between the experimental groups, from which some proteins were validated by Western blot (WB). A total of 44 and 371 DEPs were identified between the study groups in the seminal plasma and spermatozoa, respectively. The identified DEPs were primarily involved in oxidoreductase, endopeptidase inhibitor, and antioxidant activities. We validated by WB the underexpression of NADH:ubiquinone oxidoreductase core subunit S1 (p = 0.01), as well as the overexpression of superoxide dismutase 1 (p = 0.03) and peroxiredoxin 4 (p = 0.04) in spermatozoa of ROS+ group. Our data suggest that fertile men with high ROS levels possess an effective antioxidant defense system that protects sperm proteins, as well as an active proteasomal system for degradation of defective proteins.
Introduction
A common end to numerous pathways that lead to defective sperm function is the increase in reactive oxygen species (ROS) levels in semen [1,2]. Physiological levels of ROS in the semen are essential for an optimal sperm function and fertilization, as they participate in motility acquisition, capacitation, and acrosome reaction [3,4]. However, when the rate of ROS generation exceeds the cells' antioxidant defense capacity, it leads to oxidative stress (OS), which may damage sperm DNA, lipids and proteins, thus compromising sperm fertilizing potential [3]. Spermatozoa possess a limited intrinsic antioxidant machinery that make them dependent on seminal plasma defense system [5]. This characteristic increases the interest regarding the clinical utility of seminal OS testing in infertility clinics [6,7].
Besides routine semen analysis, advanced sperm function tests for the assessment of ROS levels, total antioxidant capacity, sperm DNA fragmentation and compaction, as well as genetic testing are currently used for the evaluation of male fertility status [8]. Nevertheless, these tests are unable to establish the etiology of infertility, leading to the classification of many cases as idiopathic [8]. Even though the chances of conception are increased by assisted reproductive technology (ART) in these patients, the genomic stability of the embryo is not guaranteed [9]. OS-induced sperm DNA damage is the cause of infertility in many men [10,11]. In fact, many infertile men with high ROS levels show sperm DNA fragmentation and poor chromatin packaging [9]. This is associated with lower fertilization and pregnancy rates in ART, impaired embryo development and quality; and increased risk of spontaneous abortions, birth defects and childhood diseases such as cancer [10][11][12]. In recent years, proteomic analysis of the semen has helped in understanding the biological pathways associated with male infertility [13]. Our group has extensively studied the proteomic profile of both seminal plasma and spermatozoa from men with different fertility-related conditions, giving attention to ROS levels [14][15][16]. During these investigations, we noticed that some healthy men who presented high ROS levels in their ejaculates were able to father children. The cutoff to classify a semen sample as containing high ROS levels was 102.2 relative light units per second per million of spermatozoa (RLU/s/10 6 sperm), as previously established [17]. Therefore, we decided to explore the molecular mechanisms by which these men preserve their fertility. The goal of this study was to compare the proteome of seminal plasma and spermatozoa from fertile men with high ROS levels with that of fertile men with physiological ROS levels. We aimed to identify possible alterations in the expression levels of key antioxidant proteins, as well as the underlying pathways responsible for the protection of spermatozoa from ROS attack.
Semen Analysis and ROS Levels
All samples in both the groups were normozoospermic according to World Health Organization (WHO) 2010 criteria [18] (Table 1). There were no significant differences in semen parameters between the control and the ROS+ groups. ROS levels were higher (p = 0.0001) in ROS+ group compared to the control group (Table 1).
Global Proteomic Profile of Seminal Plasma and Spermatozoa
Proteomic analysis of seminal plasma resulted in the identification of 351 proteins in the control group and 344 proteins in ROS+ group. From a total of 377 proteins in both groups, 44 were differentially expressed proteins (DEPs) (Figure 1a). One of the seminal plasma DEPs was unique to the control group (2%), while 29 were overexpressed (66%), and 14 underexpressed (32%) in ROS+ group (Figure 1b). Proteomic analysis of seminal plasma resulted in the identification of 351 proteins in the control group and 344 proteins in ROS+ group. From a total of 377 proteins in both groups, 44 were differentially expressed proteins (DEPs) (Figure 1a). One of the seminal plasma DEPs was unique to the control group (2%), while 29 were overexpressed (66%), and 14 underexpressed (32%) in ROS+ group (Figure 1b). In spermatozoa, 885 and 567 proteins were identified in the control and ROS+ groups, respectively. A total of 1144 proteins where identified after the comparison between both groups, from which 371 proteins were differentially expressed (Figure 1a). The majority (45%) of the spermatozoa DEPs were unique to the control group (168 proteins), while only 16 proteins were unique to the ROS+ group (4%). Besides, 95 DEPs were underexpressed (26%) and 92 overexpressed (25%) in ROS+ group (Figure 1c).
Functional Annotations and Pathway Analysis
Protein annotations revealed that the DEPs identified in seminal plasma belong to exosomes, different vesicles, secretory granules, and extracellular proteins (Figure 2a). However, membranebound organelle proteins were also detected in seminal plasma (Figure 2a). In spermatozoa, the identified DEPs belong to various subcellular locations such as mitochondria and flagellum cytoskeleton (Figure 2b). In spermatozoa, 885 and 567 proteins were identified in the control and ROS+ groups, respectively. A total of 1144 proteins where identified after the comparison between both groups, from which 371 proteins were differentially expressed ( Figure 1a). The majority (45%) of the spermatozoa DEPs were unique to the control group (168 proteins), while only 16 proteins were unique to the ROS+ group (4%). Besides, 95 DEPs were underexpressed (26%) and 92 overexpressed (25%) in ROS+ group (Figure 1c).
Functional Annotations and Pathway Analysis
Protein annotations revealed that the DEPs identified in seminal plasma belong to exosomes, different vesicles, secretory granules, and extracellular proteins ( Figure 2a). However, membrane-bound organelle proteins were also detected in seminal plasma (Figure 2a). In spermatozoa, the identified DEPs belong to various subcellular locations such as mitochondria and flagellum cytoskeleton (Figure 2b). Functional enrichment analysis of seminal plasma DEPs using STRING online software showed the biological processes and molecular functions in which they were involved. According to the biological processes, 4 DEPs were involved in acute phase response, 6 in protein folding and 18 in regulation of biological quality. Regarding the molecular functions, 4 DEPs were associated with antioxidant activity and 7 with endopeptidase inhibitor activity. Haptoglobin (HP), peroxiredoxin 4 (PRDX4) and S100 calcium-binding protein A9 (S100A9) were the main proteins involved in antioxidant activity, while serpin B6 (SERPINB6) and complement C3 (C3) were among the proteins involved in endopeptidase inhibitor activity. According to the Ingenuity Pathway Analysis (IPA) semenogelins I (SEMG1) and II (SEMG2) were in the top list of downregulated proteins in seminal plasma with a higher fold change between the groups. On the other hand, HP and C3 were among the top list of upregulated proteins with a higher fold change in ROS+ relative to control group. These two proteins were also classified as positive acute phase response proteins, which was one of the toxicity functions identified by the Tox lists tool (Supplementary Figure S1a). PRDX4 and S100A9 were associated with OS as identified by the IPA Tox lists tool (Supplementary Figure S1a). These seven DEPs were selected for validation by Western blot (WB) and compared with the results obtained by the proteomic results (Table 2). In spermatozoa, the functional enrichment analysis of Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) software showed that, among the biological processes, 76 proteins were Functional enrichment analysis of seminal plasma DEPs using STRING online software showed the biological processes and molecular functions in which they were involved. According to the biological processes, 4 DEPs were involved in acute phase response, 6 in protein folding and 18 in regulation of biological quality. Regarding the molecular functions, 4 DEPs were associated with antioxidant activity and 7 with endopeptidase inhibitor activity. Haptoglobin (HP), peroxiredoxin 4 (PRDX4) and S100 calcium-binding protein A9 (S100A9) were the main proteins involved in antioxidant activity, while serpin B6 (SERPINB6) and complement C3 (C3) were among the proteins involved in endopeptidase inhibitor activity. According to the Ingenuity Pathway Analysis (IPA) semenogelins I (SEMG1) and II (SEMG2) were in the top list of downregulated proteins in seminal plasma with a higher fold change between the groups. On the other hand, HP and C3 were among the top list of upregulated proteins with a higher fold change in ROS+ relative to control group. These two proteins were also classified as positive acute phase response proteins, which was one of the toxicity functions identified by the Tox lists tool (Supplementary Figure S1a). PRDX4 and S100A9 were associated with OS as identified by the IPA Tox lists tool (Supplementary Figure S1a). These seven DEPs were selected for validation by Western blot (WB) and compared with the results obtained by the proteomic results ( Table 2). In spermatozoa, the functional enrichment analysis of Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) software showed that, among the biological processes, 76 proteins were associated with response to stress, 19 with protein folding, 37 were involved in oxidation-reduction processes, and 42 in the regulation of response to stress. Regarding the molecular functions, 11 proteins presented antioxidant activity, including superoxide dismutase 1 (SOD1), PRDX4, thioredoxin reductase 1 and 2 (TXNRD1 and TXNRD2). Moreover, 28 proteins were associated with oxidoreductase activity, comprising NADH-ubiquinone oxidoreductase core subunit S1 (NDUFS1), TXNRD2, SOD1 and PRDX4. After performing the IPA analysis, similar results were observed by the IPA Tox lists tool (Supplementary Figure S1b). PRDX4, SOD1 and TXNRD2 were associated with OS, while NDUFS1 and TXNRD2 were related to mitochondrial dysfunction. Besides, SOD1 and TXNRD1 were also associated with NRF2-mediated OS response. 5 proteins were selected for validation by WB and compared with the results obtained by the proteomic analysis (Table 3).
Upstream Regulators
Using the upstream analysis tool of IPA, several cytokines were predicted to be responsible for the altered expression levels of seminal plasma proteins in the dataset. Interleukin-1 alpha and beta (IL1A and IL1B), interleukin-6 (IL6), Interleukin-22 (IL22), and tumor necrosis factor (TNF) were predicted to be activated, explaining the overexpression of DEPs such as S100A9, C3 and HP. They may also be responsible for the underexpression of prostate-specific antigen (KLK3), lipoprotein lipase (LPL) and chaperone heat shock protein HSP 90-beta (HSP90AB1) (Supplementary Figure S2).
In spermatozoa, two upstream regulators were predicted to be activated in this dataset: nuclear factor erythroid 2-related factor 2 (NFE2L2) and TNF. The transcription regulator NFE2L2 was shown to regulate the overexpression of proteins involved in oxidation-reduction processes, such as SOD1, SOD2 and 6-phosphogluconate dehydrogenase, decarboxylating (PGD) (Supplementary Figure S3). Its activation may also explain the overexpression of some proteasomes (PSMB2 and PSMB5). The cytokine TNF was also predicted to be activated and regulate the overexpression of SOD2, fibronectin (FN1), ion-binding proteins (GPD2, HSPG2, LCN2), as well as the underexpression of prohibitin (PHB) ( Figure S3).
Western Blot
All the selected seminal plasma proteins (SEMG1, SEMG2, HP, SERPINB6 and PRDX4) were identified by WB, however, there were no significant alterations in their expression levels between the control and the ROS+ groups (Figure 3a). In sperm proteins, there was a decrease in NDUFS1 (p = 0.01) protein expression levels in the ROS+ group relative to the control (Figure 3b). An overexpression of PRDX4 (p = 0.04) and SOD1 (p = 0.03) was observed in ROS+ group when compared to the control group. There were no significant alterations in the protein expression of TXNRD1 and TXNRD2 (Figure 3b).
Discussion
High seminal ROS levels have been widely debated as a major cause of male infertility [19,20]. Nevertheless, the role of ROS at physiological concentrations in regulation of sperm function cannot be ignored [3,21]. In the present study, we report a comparative proteomic analysis of seminal plasma and spermatozoa from fertile men exhibiting higher ROS levels than the pre-established reference level with respect to fertile men with basal ROS levels. This is important to gain a better insight into the role of ROS in sperm function in general and to understand sperm dysfunction under pathophysiological conditions with elevated ROS level.
In semen, the principal source of ROS are morphologically abnormal, immature spermatozoa, and leukocytes [22]. As both groups were negative for leukocytes (Endtz negative), the elevated ROS generation may be attributed to the presence of immature cells in these samples. Recently, we have reported the presence of immature cells with different proteome profile in the ejaculated semen of fertile men [23]. Therefore, the difference in the proteome profile of spermatozoa in the control and ROS+ groups may be due to the presence of comparatively more number of immature spermatozoa in the latter group. This was corroborated by our proteomic results that showed an underexpression of sperm surface protein Sp17 (SPA17) in ROS+ group. This protein is weakly expressed in spermatocytes, while a high expression was reported in early and late spermatids, which suggests In sperm proteins, there was a decrease in NDUFS1 (p = 0.01) protein expression levels in the ROS+ group relative to the control (Figure 3b). An overexpression of PRDX4 (p = 0.04) and SOD1 (p = 0.03) was observed in ROS+ group when compared to the control group. There were no significant alterations in the protein expression of TXNRD1 and TXNRD2 (Figure 3b).
Discussion
High seminal ROS levels have been widely debated as a major cause of male infertility [19,20]. Nevertheless, the role of ROS at physiological concentrations in regulation of sperm function cannot be ignored [3,21]. In the present study, we report a comparative proteomic analysis of seminal plasma and spermatozoa from fertile men exhibiting higher ROS levels than the pre-established reference level with respect to fertile men with basal ROS levels. This is important to gain a better insight into the role of ROS in sperm function in general and to understand sperm dysfunction under pathophysiological conditions with elevated ROS level.
In semen, the principal source of ROS are morphologically abnormal, immature spermatozoa, and leukocytes [22]. As both groups were negative for leukocytes (Endtz negative), the elevated ROS generation may be attributed to the presence of immature cells in these samples. Recently, we have reported the presence of immature cells with different proteome profile in the ejaculated semen of fertile men [23]. Therefore, the difference in the proteome profile of spermatozoa in the control and ROS+ groups may be due to the presence of comparatively more number of immature spermatozoa in the latter group. This was corroborated by our proteomic results that showed an underexpression of sperm surface protein Sp17 (SPA17) in ROS+ group. This protein is weakly expressed in spermatocytes, while a high expression was reported in early and late spermatids, which suggests that most of the ejaculated spermatozoa express SPA17 protein. This also supports its role in the sperm differentiation [24,25]. Similarly, underexpression of annexins (1-6) points towards failure of apoptosis in these samples, resulting in the increase in immature/or undifferentiated spermatozoa.
After bioinformatic analysis of the seminal plasma DEPs, we focused on SEMG1, SEMG2, SERPINB6, HP, PRDX4, S100A9 and C3. SEMGI and SEMGII are highly abundant in seminal plasma and are responsible for the formation of the characteristic gel-like coagulum after ejaculation [26]. They play an important role in protecting the spermatozoa and in the fertilization process [27]. The underexpression of SEMG1 and SEMG2 in ROS+ men was accompanied by the underexpression of KLK3, which is one of the trypsin-like serine proteases responsible for semenogelins digestion to attain semen liquefaction [28]. Moreover, an overexpression of SERPINB6 was observed in ROS+ men. This protein is a member of the serpins protein family that is involved in the regulation of trypsin-like serine proteases activity [29]. The alterations in the expression profile of these proteins resulted in normal liquefaction of semen samples in ROS+ group, an important factor for the preservation of sperm fertilizing potential.
HP, PRDX4 and S100A9 were identified as the main seminal plasma proteins involved in antioxidant activity, which were overexpressed in ROS+ samples. HP in human fluids binds to hemoglobin to inhibit its oxidative potential as a free molecule [30]. In the presence of hydrogen peroxide (H 2 O 2 ), one of the main ROS in semen, hemoglobin can act as a peroxidase [31], thus generating more ROS. Overexpression of HP in the seminal plasma of ROS+ men can prevent an oxidative chain reaction. PRDX4 belongs to the family of peroxiredoxins, which are major players of the antioxidant defense system in semen. This protein was previously identified in both seminal plasma and spermatozoa of human semen samples [32]. PRDX4 contain two cysteine residues in its active site, which are major targets for ROS [33]. As ROS are neutralized after binding to PRDX4, the overexpression of this protein in the seminal plasma of ROS+ men confers higher protection against increased ROS levels. S100A9 is a calcium-and zinc-binding protein associated with stress response [34]. It is considered a danger-or damage-associated molecular pattern (DAMP) molecule, as, in response to various stimuli, it can bind to pro-inflammatory receptors and initiate an inflammatory reaction [35]. In this particular study, the stimuli for the overexpression of this protein was the high ROS levels in semen of ROS+ men. In fact, there is a direct link between high ROS levels and inflammation [36]. A previous proteomic study also identified the overexpression of S100A9 in the seminal plasma of smoking men [37], which also reflects an environment with high ROS levels. Overexpression of S100A9 was associated with the activation of NADPH oxidase [38], which may be one of the reasons for the accumulation of ROS in semen. S100A9 pro-inflammatory activity starts with the activation of the nuclear factor-kappa B (NF-κB), which consequently induces cytokine secretion [38]. This may explain why many interleukins were predicted to be active in the seminal plasma of ROS+ men, including IL1A, IL1B, IL6, IL22, and TNF (Supplementary Figure S2). These inflammatory factors were identified as the upstream regulators of many proteins in the dataset and are implicated in the regulation of sperm fertilization processes during sperm transit through the female reproductive tract [39]. Accordingly, Tox lists showed that many positive acute phase response proteins were upregulated in ROS+ men. This may also be related to the observed overexpression of protein C3, which is a mediator of local inflammatory processes and immune responses [40]. For instance, it has been demonstrated that cytokines IL1A, IL1B, IL6 and TNF can lead to increased C3 secretion [41]. In human seminal plasma, C3 complement system is regulated by complement-inhibiting factors to protect spermatozoa from damage by chronic inflammation [42]. Although all the selected proteins were identified by WB, the results were not concordant with the proteomic data (Figure 3a).
Spermatozoa proteomic data showed 371 DEPs, from which 5 were selected for validation by WB: NDUFS1, SOD1, PRDX4, TXNRD1, and TXNRD2. NDUFS1 is one of the subunits of the mitochondrial complex I, which is the starting point of oxidative phosphorylation (OXPHOS). Complex I is responsible for NADH oxidation, thus providing electrons for the respiratory chain [43]. Mitochondrial function is crucial for sperm fertilization, not only for ATP production to obtain energy, but also for the physiological production of ROS. NDUFS1 is the largest subunit of complex I and is essential for the proper assembly of the complex required for its function [44]. The underexpression of NDUFS1 in the spermatozoa of ROS+ men may impair complex I assembly and result in its dysfunction, which is one of the most common mitochondrial dysfunctions observed in humans [44]. Moreover, subunits of complex IV (COX4I1 and COX5A) and complex V (ATP5H) were also underexpressed in ROS+ group. These alterations contribute to the higher production of ROS levels in this group. We were able to validate the underexpression of NDUFS1 by WB. Mitochondrial dysfunction in mature spermatozoa may contribute to the high ROS levels in ROS+ group.
The preponderance for OS in spermatozoa of ROS+ group is counteracted by the increased antioxidant defense. Both cytosolic and mitochondrial superoxide dismutase (SOD1 and SOD2, respectively), mitochondrial thioredoxin reductase 2 (TXNRD2), and PRDX4 were overexpressed in spermatozoa of ROS+ group. Moreover, cytosolic thioredoxin reductase 1 (TXNRD1) was uniquely expressed in ROS+ group providing additional defense. SOD1 belongs to the superoxide dismutase family and is one of the first line of antioxidant defense enzymes against ROS attack in spermatozoa [45]. The overexpression of SOD1, which was further confirmed by the WB analysis (Figure 3b), may explain the higher antioxidant protection in spermatozoa of ROS+ men. This protein provides protection against the attack from superoxide anion radicals. SOD1 and SOD2 increased activity was predicted to be regulated by NFE2L2 and TNF, which were identified as their activated upstream regulators (Supplementary Figure S3). These transcription factors were described as important regulators of antioxidant responses [46].
PRDX4 is one of the main proteins responsible for reduction of peroxides in spermatozoa [33]. It can be found in sperm plasma membrane, acrosome, nucleus, and cytosol [33]. The binding of ROS to the active site of PRDX4 leads to the oxidation of its cysteine residues and the enzyme becomes inactive [47]. Without an active thioredoxins system, PRDX4 would remain permanently inactive in an environment with high ROS levels, thus being unable to scavenge other forms of ROS. The thioredoxin system is constituted by thioredoxins, thioredoxins reductases and NADPH [48]. Thioredoxins reductases, including TXNRD1 (cytosolic) and TXNRD2 (mitochondrial), play a key role in maintaining the cyclicity of this system; they are responsible for maintaining thioredoxins in their reduced (active) state in a NADPH-dependent manner [33]. Subsequently, thioredoxins act as electron donors for peroxiredoxins, facilitating their reduction and reactivation [47]. Based on our proteomic data, PRDX4 and TXNRD2 were overexpressed, while TXNRD1 was unique in the spermatozoa of ROS+ men. This indicates that this ROS-scavenging system is highly enhanced and responsible for the redox homeostasis in fertile men. In fact, lower levels of peroxiredoxins have been reported in the spermatozoa of infertile men [49]. Through WB, we were able to validate the overexpression of PRDX4 in ROS+ men (Figure 3b), although no differences were found for TXNRD1 and TXNRD2 between the experimental groups.
ROS can also cause oxidative modification of proteins leading to loss of structure and function or gain in undesirable function. These proteins result in structural changes by oxidative modification, and expose the hydrophobic interior of the protein, which is recognized by 20S proteasome for its effective clearance [50]. IPA pathway analysis of DEPs identified the overexpression of 11 proteasome subunits, namely, PSMA1, PSMA2, PSMA3, PSMA4, PSMA5, PSMA6, PSMA7, PSMB1, PSMB2, PSMB3, PSMB5 in ROS+ group, which indicate an efficient regulation of the protein turnover [51]. Future studies need to be done to validate the proteasomal pathway in fertile ROS+ men.
The discrepancies between the proteomic and WB results may be related to the differences in the specificity and sensitivity of the two techniques. In shotgun proteomics, liquid chromatography-tandem mass spectrometry (LC-MS/MS) data recognizes a protein when at least two peptide fragments are detected for the protein of interest. However, in WB, the detection of protein is based on the epitope against which the primary antibody is generated. As in LC-MS/MS only tryptic digestion is considered, it was easy to match the peptide sequence and identify this from the database. In the case of seminal plasma, various mucolytic and proteolytic enzymes often cleave the matrix proteins to release the spermatozoa after liquefaction. In our study, we used completely liquefied semen samples, therefore, the peptide fragments may acquire different molecular masses than the predicted ones, making the detection by WB difficult. For example, semenogelins, which are highly abundant proteins in seminal plasma, are cleaved into smaller peptides during the process of liquefaction and show multiple bands in WB. This makes the quantitation at a specific molecular weight unpractical. A limitation of this study was the small sample size due to the difficulty to enroll sufficient number of men who are fertile and positive for ROS and willing to participate in a study.
This study represents an important step towards the understanding of the molecular dynamics of sperm and seminal plasma involved in fertility preservation. We confirmed our hypothesis by demonstrating the overexpression of several antioxidant proteins in both seminal plasma and spermatozoa of proven fertile men with high ROS levels. These results indicate that in an environment of higher ROS production, some men possess the molecular machinery essential to modulate the expression of several seminal proteins to control ROS deleterious effects. Our findings suggest that the DEPs involved in proteasomal pathway and antioxidant defense may be targeted for development of new antioxidant therapies for infertile men with high seminal ROS levels.
Ethical Approval
This study was conducted after approval by the Institutional Review Board (IRB) from the Cleveland Clinic.
Semen Analysis
A total of 20 semen samples from healthy volunteers with proven fertility were used in this study after informed written consent. The inclusion criteria were: normozoospermic men according to the WHO 2010 guidelines [18], who fathered a child in the last two years. Semen samples were collected by masturbation into a sterile container after 2-5 days of sexual abstinence and immediately incubated at 37 • C for 30 min to allow liquefaction. After complete liquefaction, the volume, pH, viscosity and color were evaluated. For hyperviscous samples, the viscosity was broken down by repeated pipetting to avoid interference of proteolytic enzymes in proteomic analysis [5]. Microscopic evaluation of the samples including sperm motility, concentration, and presence of round cells was performed using a disposable Leja counting chamber (Spectrum Technologies, Healdsburg, CA). Endtz test [52] was performed for samples with round cells >1 × 10 6 /mL and samples with leukocytospermia were excluded.
Protein Extraction and Quantification
Spermatozoa were separated from the seminal plasma by centrifugation at 400× g for 20 min, washed 3 times in phosphate buffer saline (PBS) and finally re-suspended in radio-immunoprecipitation assay buffer (RIPA) supplemented with EDTA-free protease inhibitor cocktail (cOmplete ULTRA Tablets; Roche, Indianapolis, IN, USA) and digested overnight at 4 • C. The sperm lysates were centrifuged at 14,000× g for 30 min at 4 • C and the supernatant was taken for the experiments. Seminal plasma was further centrifuged at 10,000× g for 10 min to eliminate possible remaining cells or debris, checked under microscope for presence of spermatozoa, if any, and centrifuged again to get clear seminal plasma devoid of spermatozoa. PBS supplemented with protease inhibitor was added to seminal plasma and it was again centrifuged at 10,000× g for 10 min. Total protein content of both the fractions i.e., seminal plasma and spermatozoa were estimated by bicinchoninic acid method using Pierce BCA Protein Assay kit (Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's instructions.
Quantitative Proteomic Analysis
From the 20 semen samples collected, ten were used for the quantitative proteomic analysis. Five protein samples of seminal plasma and spermatozoa were randomly selected from experimental group (control and ROS+) to maintain the biological variability. After extraction of proteins, the proteomic analysis of seminal plasma and spermatozoa fractions was carried out by LC-MS/MS. Four pooled samples were prepared: (i) spermatozoa proteins (n = 5) from control group; (ii) seminal plasma proteins (n = 5) from control group; (iii) spermatozoa proteins (n = 5) from ROS+ group; and (iv) seminal plasma proteins (n = 5) from ROS+ group. Each pool was regarded as an individual sample for the proteomic analysis. To maintain the technical variability, each of these four pooled samples were run in triplicate during LC-MS/MS analysis. Proteins were analyzed in a Finnigan LTQ-Obitrap Elite hybrid mass spectrometer system using the previously described conditions [4,54]. The resulting spectra were analyzed by the Proteome Discoverer (Thermo Fisher Scientific, Waltham, MA, USA; version 1.4.1.288) software. Database-searching algorithms from Mascot, SEQUEST and X!Tandem software were used to identify peptides/proteins from the mass spectra. The search was defined to the human protein reference database. Search results were then uploaded into the program Scaffold (Proteome Software Inc., Portland, OR, USA; version 4.0.6.1), which uses probability and statistical methods for label-free quantitation and identification of DEPs. Only protein identifications with a 99.0% probability to achieve a false discovery rate less than 1.0% and containing at least two identified peptides were considered. The abundance of each protein (very low, low, medium or high) was determined by the spectral counts. The expression profile of the DEPs between the experimental groups is based on the normalized spectral abundance factor (NSAF) ratio, which allows the identification of the proteins that are unique, underexpressed or overexpressed. The categorization of overall abundance and the identification of DEPs between the experimental groups was performed with the previously described criteria [54].
Bioinformatic Analysis
Publicly available bioinformatics annotation tools and databases such as GO Term Finder, GO Term Mapper, UniProt, and Software Tools for Researching Annotations of Proteins (STRAP) were used for functional annotation and enrichment analysis [55,56]. For the large list of proteins derived from proteomic study, Database for Annotation, Visualization and Integrated Discovery (DAVID) (http://david.niaid.nih.gov), and proprietary software package such as IPA from Ingenuity ® Systems were used to obtain consensus based, comprehensive functional context, and to conduct Tox lists and upstream analysis related to the identified DEPs. Tox lists provide a list of processes that may be affected by the altered proteomic profile, while upstream analysis tool allows the identification of the upstream regulators that may be responsible for the expression changes observed in the dataset. STRING (https://string-db.org/) was used for protein-protein interaction analysis. Based on the bioinformatic analysis, key proteins were selected for validation by WB for both seminal plasma and spermatozoa. The proteins were selected based on their involvement in ROS-related mechanisms, including in the antioxidant defense system and mitochondrial function. Besides, we focused on proteins already described in the literature as important for spermatozoa or seminal plasma functions.
Western Blot
The remaining 10 semen samples were used for validation of proteomic data by WB. Five protein samples from each experimental group (control and ROS+) were used individually to validate the selected proteins of seminal plasma (n = 5) and spermatozoa (n = 5). 25 µgof each spermatozoa protein sample and 50 µg of each seminal plasma protein sample were mixed with 4× Laemmli sample buffer (BioRad, Hercules, CA, USA) in a ratio 1:3 and completed up to 25 µL with PBS. Polyvinylidene difluoride (PVDF) membranes were incubated overnight (4 • C) with specific primary antibodies followed by the respective secondary antibodies at room temperature, for 90 min (Supplementary Table S1). Membranes were reacted with enhanced chemiluminescence (ECL) reagent (GE Healthcare, Marlborough, MA, USA) for 5 min and read with the ChemiDoc™ MP Imaging System (BioRad, Hercules, CA, USA) to detect the chemiluminescence signals. Densities from each band were obtained with Image Lab™ Software (BioRad, Hercules, CA, USA) according to standard methods and divided by the corresponding total protein lane density. Results were expressed as fold change relative to the control group.
Statistical Analysis
Semen parameters and WB results were tested for normality using the Kolmogorov-Smirnov test. As data did not present a normal distribution, results were analyzed by a non-parametric Mann-Whitney test for independent samples, using the MedCalc Software (V. 17.8; MedCalc Software, Ostend, Belgium). All data are presented as mean ± SEM and differences with p < 0.05 were considered statistically significant. Funding: Financial support for this study was provided by the American Center for Reproductive Medicine, Cleveland Clinic, Ohio, USA. Tania R. Dias was supported by "Fundação para a Ciência e a Tecnologia" (FCT, SFRH/BD/109284/2015) and Fulbright Program (E0585639). | 7,519.4 | 2019-01-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
The Emerging Role of the Microbiota in Breast Cancer Progression
Emerging evidence suggests a profound association between the microbiota composition in the gastrointestinal tract and breast cancer progression. The gut microbiota plays a crucial role in modulating the immune response, releasing metabolites, and modulating estrogen levels, all of which have implications for breast cancer growth. However, recent research has unveiled a novel aspect of the relationship between the microbiota and breast cancer, focusing on microbes residing within the mammary tissue, which was once considered sterile. These localized microbial communities have been found to change in the presence of a tumor as compared to healthy mammary tissue, unraveling their potential contribution to tumor progression. Studies have identified specific bacterial species that are enriched within breast tumors and have highlighted the mechanisms by which even these microbes influence cancer progression through immune modulation, direct carcinogenic activity, and effects on cellular pathways involved in cell proliferation or apoptosis. This review aims to provide an overview of the current knowledge on the mechanisms of crosstalk between the gut/mammary microbiota and breast cancer. Understanding this intricate interplay holds promise for developing innovative therapeutic approaches.
Introduction
The human body, in particular the gastrointestinal tract, is populated by a large number of bacteria, viruses, fungi, and protozoa, constituting the so-called microbiota [1], also known as "forgotten organ" [2]. Over the past two decades, interactions between the gut microbiota and the host have been widely studied, highlighting its crucial role in a plethora of physiological and pathological processes. The most dominant phyla inhabiting the gut, about the 90% of the entire gut microbiota [3], are represented by Firmicutes and Bacteroidetes, but members of the Actinobacteria, Proteobacteria, Fusobacteria, and Verrucomicrobia phyla are also present [4]. These microorganisms establish a symbiotic relationship with the host and exert essential functions to preserve homeostasis. For instance, the microbiota is engaged in several metabolic pathways, such as the fermentation and absorption of undigested carbohydrates, and actively participates in energy harvesting, storage, and the activation and regulation of the immune system [5]. This delicate equilibrium can be subverted, and an imbalance between beneficial and potentially pathogenic bacteria has been observed in patients suffering from different pathologies. This condition, termed "dysbiosis", results in drastic changes in microbial composition and is considered the effect of the microbial barriers' disruption related to disease. However, recent studies have described how microbiota dysbiosis may represent the cause rather than the consequence of specific pathological conditions and/or events influencing the disease outcome [6]. For example, the gut microbiota has been demonstrated to be involved in promoting cancerogenesis, favoring tumor progression, and affecting the response to anticancer therapies, including immunotherapy [7].
The advent of high-throughput DNA sequencing technologies has made possible to characterize the entire human microbiome, that is, the collective genetic material of all the microorganisms living in our organism. The findings obtained using these novel techniques have allowed us to ascertain that, beyond the gut, other human body compartments, historically considered sterile, host indigenous bacterial communities [1]. Moreover, in a recent study conducted by Nejman et al., it was shown that tumors, including breast, lung, ovary, pancreas, melanoma, bone, and brain cancers, host their own microbiome, different from that present in the healthy counterpart [8]. It has been speculated that changes in bacterial abundance/composition in the tumor mass may represent an effect of the disease, related to the leakier vasculature in the tumor microenvironment that influences bacteria recruitment [9]. However, increasing evidence points to a "causal role" of tumor-associated bacteria in sustaining disease progression by shaping the phenotypes of cancer and immune cells and their interaction with the surrounding stroma. These data also raise important questions concerning the origin of these bacteria-whether they are tissue-resident or translocate from the gut or other sites in response to specific signals.
A specific microbiota is also associated with the mammary gland, once believed to be a microorganism-free environment [10]. The breast is mainly constituted by adipose tissue presenting an extensive vasculature and lymphatic drainage, and, for this reason, it represents a favorable environment for bacterial growth, particularly Proteobacteria and Firmicutes [11]. Culture experiments proved the existence of viable bacteria in the mammary tissue, revealing the colonization of Bacillus sp., Enterobacteriaceae sp., and Staphylococcus sp. [10]. Interestingly, a particularly rich and diverse microbiome has been identified in breast cancer [8]. Many studies have reported profound differences in the microbial composition of the mammary gland between tumoral and normal tissues and between benign and malignant tumors [8,[12][13][14], supporting the notion that changes in tissues' microbial communities may influence the progression of breast cancer [12].
Since both gut and local microbiota growing in the mammary gland have been postulated to influence breast cancer progression [14][15][16][17], in the present review, we aimed to give an overview of the state of the art regarding the intricate relationship between the gutand mammary-tumor-associated microbes with the host in the onset and progression of breast cancer.
Relationship between Breast Cancer and the Gut Microbiome
Breast cancer is the second leading cause of cancer-related deaths in women worldwide [18]. It is a heterogeneous malignancy, and distinct molecular subtypes have been characterized. For example, the Luminal A subgroup is characterized by estrogen receptor (ER) expression and activity and, due to its good response to endocrine therapy, has the best clinical prognosis. Luminal B cancers express lower levels of ER and have higher proliferation rates compared to the previous subtype. The human epidermal growth factor receptor 2 (HER2)\positive subgroup is ER-and progesterone receptor (PR)-negative and comprises about 15% of all invasive breast cancers. It is more aggressive than luminal-like tumors. Finally, triple-negative breast cancers (TNBC), the subtype with the worst survival and the most challenging to treat, do not express hormone receptors or HER2 [19]. Such a classification represents an extremely valuable tool for predicting the clinical outcome and guiding the selection of the more appropriate therapy. However, it is becoming clear that other clinical variables need to be taken into consideration, and among these, gut microbiota is an emerging factor. Breast cancer's occurrence and development has been demonstrated to be affected by the gut microbiota through different mechanisms, including the modulation of immune system activity, the alteration of estrogen levels, and the produc-
Gut Microbes-Immunity Crosstalk
The interaction between the commensal microbiota and the human immune system is in a dynamic balance [20]. Gut bacteria establish a complex and coordinated set of innate and adaptive immune responses to maintain tissues homeostasis. As a consequence, when the microbiota-host balance is disrupted and dysbiosis occurs, an increased production of inflammatory mediators, which are associated with cancer progression, is observed [21,22]. This effect has been experimentally demonstrated well in a mouse model of hormone receptor (HR)+ mammary cancer. Antibiotic-induced commensal dysbiosis resulted in a significant increase at the tumor site of myeloid cells highly expressing suppressive/inflammatory molecules, such as arginase-1 and IL-6 [23]. Among these inflammatory cells, M2-like macrophages, the most frequent immune subset in the breast tumor microenvironment and associated with reduced survival in HR+ breast cancer [24], were particularly found to infiltrate breast tumors and the normal-adjacent mammary gland during early and advanced stages of tumor progression. These effects were recapitulated by the fecal microbiota transplantation of dysbiotic cecal contents, demonstrating the direct impact of gut dysbiosis on mammary tumor growth [25,26].
Neutrophils have also been reported to be influenced by gut microbiota in the context of breast cancer [26]. In the C3-1-TAg mammary cancer mouse model, it has been observed that infection with Helicobacter hepaticus, a gut-resident bacterium, induced breast cancer progression associated with increased neutrophil recruitment and infiltration at the tumor site. Neutrophil depletion inhibited mammary tumor formation, resulting in the appearance of only a few pre-neoplastic and early neoplastic lesions in the breast tissue, as compared to multifocal advanced lesions in non-depleted mice [26].
Moreover, in a recent published study [27], it emerged that the specific gut bacteria is able to shape the immune response in a way that promotes or suppresses tumor development through the regulation of the stimulator of interferon gene (STING) agonists. In particular, the presence of cdAMP-producing Akkermansia muciniphila in the gut was observed to induce the IFN-I pathway upon STING activation. The IFN-I production led to the reprogramming of macrophages toward an anti-tumor phenotype and to the stimulation of the crosstalk between natural killer (NK) and dendritic cells (DC), further sustaining an anti-tumor immune response. Conversely, these events were halted in germ-free mice, in which it was possible to assist with monocytes' differentiation into pro-tumoral macrophages [27].
Gut Microbes-Immunity Crosstalk
The interaction between the commensal microbiota and the human immune system is in a dynamic balance [20]. Gut bacteria establish a complex and coordinated set of innate and adaptive immune responses to maintain tissues homeostasis. As a consequence, when the microbiota-host balance is disrupted and dysbiosis occurs, an increased production of inflammatory mediators, which are associated with cancer progression, is observed [21,22]. This effect has been experimentally demonstrated well in a mouse model of hormone receptor (HR)+ mammary cancer. Antibiotic-induced commensal dysbiosis resulted in a significant increase at the tumor site of myeloid cells highly expressing suppressive/inflammatory molecules, such as arginase-1 and IL-6 [23]. Among these inflammatory cells, M2-like macrophages, the most frequent immune subset in the breast tumor microenvironment and associated with reduced survival in HR+ breast cancer [24], were particularly found to infiltrate breast tumors and the normal-adjacent mammary gland during early and advanced stages of tumor progression. These effects were recapitulated by the fecal microbiota transplantation of dysbiotic cecal contents, demonstrating the direct impact of gut dysbiosis on mammary tumor growth [25,26].
Neutrophils have also been reported to be influenced by gut microbiota in the context of breast cancer [26]. In the C3-1-TAg mammary cancer mouse model, it has been observed that infection with Helicobacter hepaticus, a gut-resident bacterium, induced breast cancer progression associated with increased neutrophil recruitment and infiltration at the tumor site. Neutrophil depletion inhibited mammary tumor formation, resulting in the appearance of only a few pre-neoplastic and early neoplastic lesions in the breast tissue, as compared to multifocal advanced lesions in non-depleted mice [26].
Moreover, in a recent published study [27], it emerged that the specific gut bacteria is able to shape the immune response in a way that promotes or suppresses tumor development through the regulation of the stimulator of interferon gene (STING) agonists. In particular, the presence of cdAMP-producing Akkermansia muciniphila in the gut was observed to induce the IFN-I pathway upon STING activation. The IFN-I production led to the reprogramming of macrophages toward an anti-tumor phenotype and to the stimulation of the crosstalk between natural killer (NK) and dendritic cells (DC), further sustaining an anti-tumor immune response. Conversely, these events were halted in germfree mice, in which it was possible to assist with monocytes' differentiation into pro-tumoral macrophages [27].
The role of gut microbiota in breast cancer progression is also supported by clinical studies. It was found that low microbiome diversity was associated with the reduced survival of breast cancer patients. This condition was also accompanied by changes in immune cell compartments, consisting in a decreased level of lymphocytes and a parallel increased number of neutrophils [28]. Collectively, these findings clearly suggest that the gut microbiota may influence breast cancer progression and survival through the modulation of immune cells' activity.
Modulation of Estrogen Levels
Especially for HR+ breast cancer, the risk of breast cancer progression is highly associated with the level of circulating estrogens [29], whose metabolism takes place in the liver [30]. The role of estrogens as breast carcinogens has been proven by several epidemiological studies, and various mechanisms have been proposed to explain their pro-tumor effects. For example, upon binding to the specific nuclear receptor alpha (ER-α), estrogens are able to induce the enhanced production of growth factors that, in turn, boost the proliferation of breast cancer cells [31]. Moreover, estrogens have been reported to cause a genotoxic effect through a non-ER-α-dependent mechanism. Indeed, the catabolism of estrogens mediated by cytochrome P450 complexes generates reactive free radicals and intermediate metabolites that cause oxidative stress and genomic damage, resulting in increased mutation rates and a compromised DNA repair system [32,33].
C-18 steroid hormone estrogens exist in three biologically active forms, estradiol (E2, premenopausal), estrone (E1, postmenopausal), and estriol (E3, in pregnant women), which exert diverse biological effects. In the liver, the hydroxylation of parent estrogens E2 and E1 produces estrogen metabolites with varying hormone potency, bioavailability, and half-life. Estrogens and their metabolites are then conjugated through glucuronidation and sulfonation to allow biliary excretion into the gastrointestinal tract. A fraction of conjugated estrogens are deconjugated by the gut microbiota into free estrogens, which are then reabsorbed in the distal part of the intestine and, through the portal vein, distributed to other tissues, including the mammary glands [33]. Finally, they can circulate in the bloodstream as free molecules or bound to specific proteins. The link between estrogens and microbiota thus relies on the ability of intestinal bacteria to release free estrogens. In 2011, Plottel and Blaser widely discussed the "estrobolome", defined as the collection of enteric bacterial genes with the ability to metabolize estrogens [34]. Indeed, free estrogens are mainly derived from the deconjugation process occurring in the gut via bacterial β-glucuronidase, especially microbial communities belonging to the Clostridia and Ruminococcaceae families or the Escherichia/Shigella genus. These β-glucuronidase-producing bacteria were frequently found to be over-expressed in dysbiotic microbiota due, for instance, to diet, alcohol consumption, and the use of antibiotics [35,36]. The augmented abundance of such microorganisms results in an increased concentration of circulating free estrogens, eventually contributing to breast cancer progression [37,38]. Accordingly, a case-control study conducted on 2266 North American women affected by breast cancer and 7953 healthy controls showed that women with a clinical history of long-term antibiotic treatment were characterized by an elevated risk of developing breast cancer [39]. In addition, adiposity, a condition strictly related to high circulating estrogen levels, has been associated with an elevated breast cancer risk in postmenopausal women [40]. A meta-analysis of 50 prospective observational studies also confirmed a relationship between adult weight gain and the risk of breast cancer in women [41].
Role of Microbial Metabolites
Besides estrogens, other metabolic pathways link the gut microbiota to breast cancer progression. Through the metabolism and fiber fermentation of lipids or bile acids (BAs), bacteria produce an array of molecules that can directly or indirectly interfere with tumor cell proliferation [42,43].
Intestinal anaerobic bacteria, such as Clostridia spp., are one of the largest producers of lithocholic acid (LCA), a secondary bile acid found to decrease breast cancer cell proliferation through the activation of the G-protein-coupled bile acid receptor 1 (TGR5) [44,45].
Moreover, LCA is able to interfere with the mesenchymal-to-epithelial transition cell program, increase the tumor immune cell infiltration, and affect the tricarboxylic acid cycle (TCA) and the oxidative phosphorylation (OXPHOS) pathways [43]. Further in vitro studies demonstrated that LCA can decrease the expression of nuclear factor erythroid 2-related factor 2 (NRF2) and up-modulate Kelch-like ECH associating protein 1 (KEAP1), causing an imbalance between pro-and anti-oxidant enzymes, eventually impairing breast cancer cells proliferation [46]. Finally, LCA serum levels in breast cancer patients are associated with a high abundance of Clostridiales and Bacteroidales species. Early-stage breast cancer patients showed lower LCA levels and a concomitant reduced abundance of Clostridiales and Bacteroidales than healthy women [43].
Nisin, a gut bacteriocin produced by the Gram-positive L. lactis, has also been shown to have a highly cytotoxic effect on breast tumor cells by altering calcium ion influx across the cell membrane and promoting cell cycle arrest [47].
Other bacterial metabolites impacting breast cancer progression are short-chain fatty acids (SCFAs), such as butyrate, propionate, acetate, and lactate. SCFAs are the most common types of gut microbial metabolites, primarily produced by species colonizing the intestine, such as as Eubacterium rectale, Clostridium leptum, and Faecalibacterium prausitzii, as well as by the lactate-utilizing species Eubacterium hallii and Anaerostipes [47], through the fermentation of dietary fibers [48].
Evidence in breast cancer patients has shown that sodium butyrate has a promising anti-tumor activity on breast cancer cells alone or in combination with other anti-cancer agents [49][50][51][52][53], for example, the anti-HER2 antibody trastuzumab [54].
The diamine cadaverine, another bacterial metabolite derived from the decarboxylation of lysine and arginine, is known to inhibit breast cancer cells growth, migration and invasion, as well as suppressing epithelial-to-mesenchymal transition [45].
Overall, studies highlight how several bacterial metabolites can exert an anti-tumor effect against breast cancer cells. It should be considered that not all metabolites produced by the microbiota possess anti-cancer activity, but, instead, some of them are able to promote tumor growth, as shown in other cancer types [47,55]. However, to the best of our knowledge, no pro-tumorigenic bacterial metabolites have been identified in the context of breast cancer yet.
Breast Microbiome and Its Impact on Breast Cancer
Accumulating evidence indicates a consistent role of breast-tissue-resident bacteria in the onset and progression of breast cancer [14], but the origin of these bacteria remains unclear, and different hypotheses are debated. A study performed in canine breast tumors revealed the presence of bacteria belonging to the Bacteroides family in the tumor tissue, as well as in the mouth and gut [56], sustaining the notion of a possible bacterial translocation from the oral cavity to the intestine and, eventually, to the mammary tissue. However, the isolation from mammary tumors of bacteria typically inhabiting the skin, such as Staphylococcus epidermidis and Micrococcus luteus, shows a scenario in which these microbes may have reached the mammary gland through the nipples and then spread through the gland lobules and ducts [16].
Furthermore, live bacteria have been found within tumor cells and tumor-associated immune cells [8,15], suggesting that cancerous and host cells may be exploited as a shuttle to help microorganisms spread to the tumor or the adjacent normal mammary tissues [57]. Interestingly, Fu et al. also revealed that intracellular bacteria have the ability to induce the rearrangement of the breast cancer cell cytoskeleton, which confers tumor cells a higher resistance to fluid shear stress. This results in an increased survival rate during cancer cell transport through blood vessels and, consequently, an enhanced metastatic potential [15].
Several studies revealed modifications in the tumoral mammary gland microbial composition compared to the normal tissues and among tumors at different stages [8,12]. Urbaniak et al. reported that Enterobacteriaceae, Staphylococcus, and Bacillus were highly abundant in breast cancer patients compared to healthy individuals [14]. Xuan et al. [58] found the presence of Sphingomonas yanoikuyae in normal breast tissue and its dramatic reduction in the tumoral tissue, whereas the bacterium Methylobacterium radiotolerans was the most significantly enriched in the tumoral tissue. In an Asiatic cohort of breast cancer patients, Propionicimonas, Micrococcaceae, Caulobacteraceae, Rhodobacteraceae, Nocardioidaceae, and Methylobacteriaceae were enriched in tumors [59]. In the same study, a decrease was observed in the Bacteroidaceae family, and a parallel increase was observed in the genus Agrococcus as the malignancy developed. Moreover, cancer development also correlated with an augmented presence of Fusobacterium, Atopobium, Gluconacetobacter, Hydrogenophaga, and Lactobacillus genera [60].
In another study, Costantini et al. [61] reported that the most abundant genus found in the mammary tissue is represented by the bacterial genus Ralstonia, further increased in the breast tumoral tissue. Moreover, in the same study, the presence of Methylobacterium and Sphingomonas genera in the healthy mammary tissue was also observed, according to previous studies. Variations in the microbiota composition were also detected among the different breast cancer molecular subtypes. Banarjee et al. [62] firstly identified a unique microbial signature associated with triple-negative breast cancer. In a following work, the same authors defined four different microbial signatures associated with ER+, HER+, triple positive (ER+, PR+ and HER2+), and TNBC subtypes [13] ( Table 1). The idea that each breast cancer molecular subgroup is characterized by a peculiar pattern of bacteria is also strengthened by Smith et al., who described a specific abundance of Eucaryarchaeota, Cyanobacteria, and Firmicutes phyla in TNBC [63]. These data support the notion that mammary dysbiosis, either being the cause or the consequence of tumor implantation, does occur in breast cancer and that changes in the microbiome are plausibly associated with its progression and with the intrinsic property of the specific subtype.
Moreover, in a study of 668 breast tumor tissues present in The Cancer Genome Atlas (TCGA) data set, the microbiome profile was correlated with the expression of specific tumor genes [64]. Interestingly, the presence of some bacteria, such as Listeria fleischmannii, was strongly associated with genes involved in the epithelial-to-mesenchymal transition, while Haemophilus influenza was correlated with pathways related to tumor growth, cell cycle progression, E2F signaling, and mitotic spindle assembly.
Collectively, these findings reveal that a peculiar tumor-associated microbiota composition can be associated with some features intrinsic to tumors. However, this type of study is still at its infancy and requires further investigation. For example, it is still unclear whether a correlation between specific bacteria and mutations harbored by breast cancer cells exists. This topic is particularly interesting considering that a genotoxic activity of Escherichia coli, Staphilococcus, and Bacterioides fragilis, isolated from breast tumors, has been clearly described [14,65].
Mechanistic Role of Breast Microbiome in the Progression of Breast Cancer
As for the gut microbiota, recent studies have revealed various mechanisms through which local mammary-tumor-associated bacteria might play a role in breast cancer progression, including a direct carcinogenic activity, effects on cell growth or apoptosis, the modulation of the immune response, and the production of metabolites that, in many ways, can affect tumor biology ( Figure 2 and Table 2).
Figure 2.
Effect of the mammary microbiota in breast cancer interfering with cellular pathways (growth or apoptosis), inducing DNA damage, modulating the immune system, and releasing bacterial metabolites. Table 2. Bacterial species found to be associated with the microbiota of breast cancer tissue, in patients, and in murine models and their function in the progression of breast cancer.
Breast Cancer Tissue Bacteria Molecular Mechanism Reference
Human
Escherichia coli and Staphylococcus
Induction of DNA double-strand break and genomic instability in vitro [14] Human Clostridiales Inhibition of tumor growth by producing the metabolite trimethylamine N-oxide (TMAO) that activates CD8+ T cells-mediated antitumor-immunity [53] Human
Fusobacterium nucleatum
Breast tumor progression and metastases by fap-2 dependent binding of the bacterium to breast cancer tissue Gal-GalNac [68] Figure 2. Effect of the mammary microbiota in breast cancer interfering with cellular pathways (growth or apoptosis), inducing DNA damage, modulating the immune system, and releasing bacterial metabolites. Table 2. Bacterial species found to be associated with the microbiota of breast cancer tissue, in patients, and in murine models and their function in the progression of breast cancer.
Breast Cancer Tissue Bacteria Molecular Mechanism Reference
Human
Escherichia coli and Staphylococcus
Induction of DNA double-strand break and genomic instability in vitro [14] Human Clostridiales Inhibition of tumor growth by producing the metabolite trimethylamine N-oxide (TMAO) that activates CD8+ T cells-mediated antitumor-immunity [53] Human
Fusobacterium nucleatum
Breast tumor progression and metastases by fap-2 dependent binding of the bacterium to breast cancer tissue Gal-GalNac [68] Mice Staphylococcus, Lactobacillus and Streptococcus Breast tumor lung metastases by modulating the stress response and influencing cancer cell viability, altering the cell cytoskeleton [15] Mice
Staphylococcus epidermidis
Increased T regulatory cell infiltration in the tumor and complement pathway activation in vivo, and increased pro-tumoral M2 macrophages phenotype in vitro [16] Mice
Micrococcus luteus
Reduction of mammary tumor growth in vivo, and increased anti-tumoral M1 macrophage phenotype in vitro [16] Mice
Bacteroides fragilis
Breast tumor progression and metastasis through the secretion of the B. fragilis toxin (BFT) [65] 3.1.1. Carcinogenic Effect on the Host Genome Urbaniak et al. compared the normal and cancerous breast tissues of patients undergoing mastectomy for breast reduction in the absence of neoplastic disease or for surgical resection of the tumor. In cancerous tissue compared to normal tissue, a higher abundance of Enterobacteriaceae and Staphilococcus was found [14], and subsequent culture experiments allowed the isolation of Escherichia coli and Staphylococcus aureus, two species belonging to the aforementioned genera. These bacteria are reported to possess a direct carcinogenic activity mediated by the production of colibactin, a genotoxin able to induce double-stranded DNA breaks and genomic instability [69,70]. Accordingly, the authors observed that HeLa cells exposed to Escherichia coli had significantly higher levels of histone-2AX phosphorylation, a marker of DNA double strand-break. A similar effect was also induced by Staphylococcus [14].
Moreover, a toxin from Bacteroides fragilis, a gut-colonizing bacterium also found in the mammary gland, can induce epithelial hyperplasia to promote tumor growth and metastatization via the β-catenin-Notch1 axis [65].
Effect on Cell Growth/Apoptosis
One of the mechanisms regulating the crosstalk between microbes and the host is based on the expression of pattern-recognition receptors (PRRs), such as Toll-like receptors (TLRs), by different types of immune and non-immune cells. These receptors can sense microbial changes occurring in the tumor microenvironment and modulate the immune system activity and, in certain circumstances, tumor cell growth/proliferation [71]. For instance, it has been reported that Fusobacterium nucleatum, previously demonstrated to be associated with colorectal cancer (CRC) [72], is implicated in breast cancer growth [12,68] through the activation of TLR4/NF-kB pathway in cancer cells [73].
Moreover, it has been also demonstrated that this bacterium, through the Fap2 lectin protein, can bind Gal-GalNac, a sugar present in high levels, on breast tumor cells' surfaces, causing an acceleration of breast cancer growth and the development of metastases [70,72].
Effects on the Immunity
In a TNBC mouse model, we have recently observed the abundant presence of Staphylococcus epidermidis in the tumor niche [16]. In particular, Staphylococcus epidermidis was found to be responsible for an extremely inflamed tumor microenvironment, determined by its strong ability to induce pro-inflammatory cytokine secretion and complement activation, reported to sustain tumor growth [74]. The in vivo peritumoral transfer of this bacterium was also demonstrated to be associated with a significant increase in immunosuppressive T regulatory cells into the tumor nodules and, when co-cultured in vitro with bone-marrow-derived macrophages (BMDM), to promote a pro-tumor phenotype. Accordingly, antibiotic treatment, by the abundance of lowering Staphylococcus epidermidis, reduced tumor growth. The anti-tumor effect mediated by antibiotic treatment was accompanied by the appearance of Micrococcus luteus in the tumor mass. Unlike Staphylococcus epidermidis, Micrococcus luteus, when in vivo peritumorally transferred, exerted an anti-tumor activity by inducing an M1 macrophage phenotype and by reducing myeloid-derived suppressor cell (MDSC) infiltration [16]. These findings are in line with previously published data revealing the abundant presence of the Micrococcaceae family in healthy breast samples and of Staphylococcaceae in tumoral tissues [14].
Moreover, the bacterium Sphingomonas, detected in the healthy mammary gland, is able to induce the activation of invariant NKT (iNKT) cells [75], important mediators in cancer immunosurveillance and in the control of breast cancer metastases [76]. Accordingly, an increased level of Sphingomonas in healthy compared to tumoral mammary tissue has been observed to be associated with a higher expression of TLR2, -5, and -9 and of antimicrobial response effectors IL-12A, bactericidal/permeability-increasing protein (BPI), and myeloperoxidase (MPO), suggesting its possible protective role in cancer by sustaining immunosurveillance [58].
Microbial Metabolites Production
It is still unclear whether tumor-infiltrating bacteria can produce metabolites, as largely demonstrated for gut microbiota. However, based on data present in the literature, it is possible to obtain some insights. For instance, Bacillus cereus, capable of metabolizing progesterone into 5-alpha-pregnane-3,20-dione (5αP) [77], was found to be higher in breast cancer patients than in healthy ones [10,14]. Since 5αP is believed to promote tumor development by stimulating cell proliferation, it is plausible to speculate that at least part of this molecule may be of bacterial origin [78].
Moreover, in a recent study performed in a cohort of patients with TNBC, a high abundance of Clostridiales in tumoral tissue was associated with an activated immune microenvironment [79]. Specifically, the presence of these bacteria positively correlated with the production of the metabolite trimethylamine N-oxide (TMAO), a compound able to activate CD8+ T cells-mediated antitumor immunity and M1 macrophages, further supporting the idea of a metabolically active tissue-resident microbiota [76].
The Gut-Breast Microbiota Axis
The existence of axes between gut microbiota and different body areas, such as the liver, lung, and brain, has already been reported, but no definitive proof is available today on the crosstalk between the gut microbiota and the mammary glands. However, it was observed that treatment with orally administered probiotics is highly effective in the cure of mastitis and that probiotics become detectable in human milk [80], strongly suggesting that an interconnection between gut microbiota and the breast may exist.
Gut-resident bacteria may leave the intestine through breaches in the epithelium, which is frequent during dysbiosis, and translocate to the mammary gland via the blood or lymphatic systemic circulation. An alternative escape route is represented by intestinal dendritic cells, which are reported to uptake bacteria in the intestinal mucosa through their ability to open the tight junctions between epithelial cells [81]. Since dendritic cells are migratory cells, they can reach distant sites, such as the mammary tissue, through the vascular system. Furthermore, a third possible participant in the gut-microbiota-breast tissue dialog may be represented by bacteria metabolites. These bacterial products, produced in the intestine, may be absorbed by the intestinal mucosa and released into the bloodstream through which they can virtually reach all the body compartments, including mammary glands, exerting their biological functions in loco.
Although there are many insights regarding the possibility of a gut-breast axis, further investigations are still required not only to finally prove its existence but also to identify the players involved in their crosstalk.
Conclusions
A large symbiotic microbiota resides in the human intestine and exerts fundamental roles in health and disease. Since it is able to regulate host metabolism and shape the immune system, the gut microbiota has been revealed to affect breast cancer progression. More recently, bacteria have also been found to be a component of the breast mammary tissue, but their pathobiological role is poorly understood due to their low biomass. Many studies have clearly demonstrated that mammary tissue microbiota changes in the presence of a tumor, representing a scenario in which the tumor-associated microbiota actively participates in the constitution of the complex tumor microenvironment.
The present review summarizes what is known about the relationship between specific bacteria and breast cancer progression and, concomitantly, highlights what is still missing in the literature. Indeed, there are many open questions that represent weaknesses in this field: (i) Does a direct link really exist between the gut and the mammary microbiota composition? (ii) Are they seeded in the tumor microenvironment early on in tumorigenesis, or are they recruited as the tumor alters the microenvironment? (iii) Which bacteria can be defined as "good" or "bad"? (iv) What are the metabolic products or the structural molecules mechanistically involved in their effects on cancer cells? (v) Do different species share a common mechanism that is able to impact tumor cell biology and could represent potential therapeutic targets? A limitation of studies aiming to answer these questions is that the use of mouse models might not be exhaustive, as they do not allow one to consider many factors affecting the human microbiome, such as diet, host genetics, and age. Thus, only studies on human subjects may represent the future direction of microbiome research to deconvolute the complex microbiota-tumor crosstalk and to open new avenues to shape the cancer microenvironment toward a favorable context through the modulation of gut and/or local microbiota. Antibiotics, probiotics, prebiotics, and fecal microbiota transfer are strategies that are used to modulate the gut microbiome in the treatment of many infectious diseases and are currently investigated as potential anti-cancer therapeutic options.
Conflicts of Interest:
The authors declare no conflict of interest. | 7,357 | 2023-07-27T00:00:00.000 | [
"Biology"
] |
Hashtag Analysis of Indonesian COVID-19 Tweets Using Social Network Analysis
Social media has become more critical for people to communicate about the pandemic of COVID-19. In social media, hashtags are social annotations which often used to denote message content. It serves as an intuitive and flexible tool for making huge collections of posts searchable on Twitter. Through practices of hashtagging, user representations of a given post also become connected. This study aimed to analyze the hashtag of Indonesian COVID-19 Tweets using Social Network Analysis (SNA). We used SNA techniques to visualize network models and measure some centrality to find the most influential hashtag in the network. We collected and analyzed 500.000 public tweets from Twitter based on COVID-19 keywords. Based on the centrality measurement result, the hashtag #corona is a hashtag with the most connection with other hashtags. The hashtag #COVID19 is the hashtag that is most closely related to all other hashtags. The hashtag #corona is the hashtag that most acts as a bridge that can control the flow of information related to COVID-19. The hashtag #coronavirus is the most important of hashtags based on their link. Our study also found that the hashtag #covid19 and #wabah have a substantial relationship with religious-related hashtags based on network visualization. Keywords—COVID-19, Twitter, Social Network Analysis, SNA, Hashtag ◼ ISSN (print): 1978-1520, ISSN (online): 2460-7258 IJCCS Vol. 15, No. 3, July 2021 : 275 – 284 276
INTRODUCTION
Social media has been growing to become an essential part of people's lives in recent years. Several big social media platforms like Facebook, Instagram and Twitter have become more critical for people to communicate almost of everything with no exception to pandemic of COVID-19 related issues [1]. Those make how much information we receive through this platform influences how we perceive and deal with the current COVID-19 epidemic. Even before the outbreak, the public and scientists used social media as an essential information source. The efficient use of social media also affects the character and communication way of modern society [2]. One kind of social media that is commonly used today is Twitter [3]. Previous research has noted the potential for Twitter to provide real-time content analysis so that public health authorities can quickly respond to anxieties grown by the public [4].
Hashtags, which are social annotations used to denote message content, serve as an intuitive and flexible tool for making huge collections of posts searchable on Twitter. The inherent ability of hashtags to bypass the network structure's boundaries during the communication process gives the easy dissemination of information outside the user's network. Hashtags take a double role in the communication process, functioning both as metadata for archiving purposes and information retrieval on social media platforms [5], such as Twitter. Hashtags written with the symbol # are even used on other social media platforms to index keywords or topics [6], [7].
Several previous studies have carried out hashtag analysis on social media for various purposes. Xing et al., in 2016 [8], analyzed hashtags to find sub-shows on Twitter. They model the relationship between the hashtag and the tweet's topic and highlight the hashtag's role as a semantic representation of the appropriate tweet. In 2016 [9], Yılmaz and Hero studied to detect a multimodal event in Twitter hashtag networks. They introduced a new unsupervised event detection approach to Twitter.
Through practices of hashtagging, user representations of a given post also become connected [10]. Simultaneously, we can define the relationship between hashtags as an organization within a topic area, especially related to the COVID-19 outbreak. In this study, we aimed to analyze the hashtag of Indonesian COVID-19 Tweets using Social Network Analysis (SNA). We used SNA techniques to visualize network models and measure some centrality to find the most influential hashtag in the network.
Several previous studies have applied SNA for various purposes. Iswandhani and Muhajir, in 2017 [11], used SNA to identify the central popularity of tourist destinations in an Instagram account. In 2017 [12], Setatama and Tricahyono implemented Social Network Analysis to analyze the most influential actors in the spread of the country branding "Wonderful Indonesia." Tahalea and Azhari, in 2019 [13], used SNA to identify the central actor of crimes done by several people using five centrality measurements, such as degree, betweenness, closeness, and eigenvector centrality. This study showed 80.39% accuracy from 102 criminal cases gathered with at least three actors involved in each case.
In the other study, Hung et al., in 2020 [14], used SNA to determine the social network of dominant topics related to COVID-19 on Twitter. The study successfully identified five dominant issues related to COVID-19. They are social change, health care environment, business economy, emotional support, and psychological stress. In 2020 [15], Tahalea used SNA to analyze the relationships between heroes and find out the roles of heroes in DOTA2 online games. He used the measurement of degree centrality to see the popularity of a hero, betweenness centrality to see the role as a liaison, and closeness centrality to see a hero's closeness to other heroes. The social network analysis process in this study consists of three stages. They are data extraction and preprocessing, building a network model, and measuring centrality value.
Data Extraction and Preprocessing
This study implemented a web scraping technique to extract tweet data related to COVID-19 from Twitter web-based. Online data extraction refers to the routine extraction of data from a web data source that can evolve [16]. After we collected the tweet data, the next step is preprocessing. In this step, we pulled only the hashtag from every tweet data.
Network Model
The second stage of this experiment is builds a network model. In the network model hashtags are represented as nodes, and the connections between them are represented as edges. When a user publishes more than one hashtags in a single tweet, the connection between hashtag happens. For example, if user post a tweet contains hashtag #Covid19, #wabah, and #lockdown, then there will be a connection between #Covid10 and #wabah, #Covid19 and lockdown, and also #wabah and #lockdown. Figure 1 shows the network model representation of this case. Figure 1 The network model representation
Centrality Measure
This study used five centrality measurements, such as degree, betweenness, closeness, and eigenvector centrality, to analyze the Indonesian COVID-19 Tweets' hashtag. In some fields such as psychological networks [17], social networks [18], criminal networks [13], and even trust networks [19], these centrality measures can determine the network's central or highly influential nodes.
Degree Centrality
The degree of centrality is determined by the total quantity of direct connections to other nodes in a network graph [20]. The node degree centrality is a key parameter representing the community centrality in networks [21]. We used degree centrality to identify the most connected hashtags in a tweet that can be measured using equation (1), where hashtags represent as i and total nodes (hashtags) in the network represent as n.
Betweenness Centrality
Betweenness centrality is to measure the role of a node as a mediator in the network. We use this centrality to show how significant a hashtag is to act as a bridge in the network, which can be measured using equation (2). In this case, the hashtags represent as , the number of shortest paths from actor to actor shown as , , and the number of shortest paths from actor to actor through actor shown as , ( ).
Closeness Centrality
Closeness centrality measures one node to the other nodes' sum distances by measuring the average distance of a node from other nodes in the network [22]. We use this centrality to show the closeness of the connection between hashtags, which can be measured using equation (3). ( ) is the closeness centrality of node i, and is the shortest path from node i to node j.
Eigenvector Centrality
The eigenvector centrality measures the number of connections of a given node and its relevance in information movement [23]. We used eigenvector centrality to shows the importance of hashtags based on their link, which can be measured using equation (4). The hashtags represent as i, constant represents as λ, and ai,j is shown adjacency matrix of the network.
RESULTS AND DISCUSSION
In this section, we discussed the results of the calculations. For the first time, we did data profiling from the data extraction. After that, we found the most influential hashtags that play an essential role in disseminating COVID-19 information on Twitter using SNA.
Data Profiling
We have collected 500000 public tweets from Twitter based on COVID-19 keywords. We gathered from March to May 2020, i.e., three months. We selected those months since President Joko Widodo reported the first confirmed two cases of COVID-19 infection in March 2020, in Indonesia [24]. Then, the data collection was closed in May 2020, when this research analysis begins.
Centrality Measure
At this stage, we will discuss the centrality measure of Twitter data about COVID-19. Centrality measures that are calculated include degree centrality, closeness centrality, betweenness centrality, and eigenvector centrality. Degree centrality is a simple measure of centrality which calculates how many connections or neighbors a node has [25]. The hashtag regarding COVID-19, which has the most number of neighbors, can be seen in Table 1. Based on Table 1, it is known that people often use ten hashtags in uploading tweets on Twitter. These ten hashtags are important because they have the highest number of connections to other hashtags. The 10 hashtags include #corona, #Corona, #VirusCorona, # COVID19, #coronavirus, #dirumahaja, # covid19, #RememberingKhilafah, # Covid19, and #viruscorona. The use of lowercase and capital letters in a hashtag is a sensitive matter because the hashtag #corona with a lowercase and hashtag #Corona with capital is considered different.
The next discussion is about closeness centrality. Closeness centrality indicates how close a hashtag is to all other hashtags in the network. The closeness centrality data of hashtags can be seen in Table 2. The 10 hashtags that have the most closeness to all hashtags in the network are # COVID19, #dirumahaja, #Corona, #coronavirus, #corona, #VirusCorona, # Covid_19, # Covid19, #Indonesia , and #COVID. To get the maximum speed of information flow related to COVID-19 data, you can use these hashtags. Betweenness centrality is used to show how often other nodes pass a node to go to a particular node in the network. This value serves to determine the role of the hashtag as a bridge connecting interactions in the network. The hashtag data that has the highest betweenness centrality value can be seen in Table 3. The 10 hashtags are #corona, #VirusCorona, #dirumahaja, # COVID19, #Corona, #coronavirus, #Corona, # covid19, #dirumahaja, and # Covid19. These hashtags are located in communication channels and can control the flow of information related to COVID-19 data on Twitter. Eigenvector Centrality is used to measure a node's importance by considering the importance of its neighbors [26]. The hashtag data that has the highest eigenvector centrality value can be seen in Table 4. The 10 hashtags are #coronavirus, #corona, #Corona, # COVID19, #dirumahaja, #VirusCorona, # covid19, # Covid_19, #virus, and #COVID. In this section, the data visualization of the information dissemination network interaction on COVID-19 is carried out on the Twitter social networking site. The visualization of social network analysis can be seen in Figure 2. Visualization of the interaction network is made up of 12,906 nodes and 50,349 edges. From the visualization of the network of interactions, it is known that the network interaction patterns of information dissemination related to COVID-19 are strongly influenced by the hashtags #corona, #Corona, #VirusCorona, # COVID19, #coronavirus, and #dirumahaja. The central hashtag for the spread of COVID-19 information is the hashtag #corona. In our experiment we found some interesting fact that the the network interaction with the dissemination of information about COVID-19 on the social networking site Twitter has strong correlation with the hashtag form discussions related to religious issues. This can be seen in Figure 3, and in detail, can be seen in Figure 4. Based on the network visualization, it can be seen that the hashtag #covid19 and #wabah have a substantial relationship with religious hashtags. These hashtags include #islam, #muslim, #tauhid, #nikah, #kajianislam, #sunnah, # hijrah, #dakwah, and #ramadhan.
Figure 3 The network model representation
We reveal this network structure as represented in figure 4 by configuring the node size and network layout based on eigenvector centrality. Refering to the phylosophy of eigenvector calculation that the nodes which connect to popular nodes will becoming popular, we can assume that religious post on Twitter take the benefits of #covid19 hashtag popularity to increase the visibility of their posts. This is the common practice found in social media marketing.
As we knew, Social Network Analysis is usually used to determine the influence of actors in social media as in studies [27], [28] and most previous studies. According to this study result, we could know that Social Network Analysis also can be used to determine the influence of hashtags in social media. Moreover, some studies used Social Network Analysis based on a particular hashtag, but it was only used to define the topic as in studies [29], [30]. Eventually, the Social Network Analysis is used to determine the influence of actors. In this study, we used Social Network Analysis to determine the influence of hashtags on the other hashtags or the connection between hashtags as in the study [31]. However, the study was implemented on the Instagram platform, whereas this study was implemented on the Twitter platform. With the approach defined in this study, we can understand which hashtag has an essential role in disseminating information on Twitter. Therefore, we can control the stream of information dissemination, mainly if it negatively influences news. This paper's main goal was to analyze the hashtag of Indonesian COVID-19 Tweets using Social Network Analysis (SNA). We collected and analyzed 500.000 public tweets from Twitter based on COVID-19 keywords. We used SNA techniques to visualize network models and measure some centrality to find the most influential hashtag in the network. Based on the centrality measurement result, degree, closeness, betweenness, and eigenvector centrality, we got ten hashtags with the highest score. The hashtag #corona is a hashtag that has the most connection with other hashtags. The hashtag #COVID19 is the hashtag that is most closely related to all other hashtags. The hashtag #corona is the hashtag that most acts as a bridge that connects hashtag to the network. Therefore, the hashtag can control the flow of information related to COVID-19. The hashtag #coronavirus is the most important of hashtags based on their link. Visualization of the interaction network of the hashtags is made up of 12,906 nodes and 50,349 edges. Based on the network visualization, the hashtag #covid19 and #wabah have a substantial relationship with religious hashtags such as #islam, #dakwah, #ramadhan. Hence, there is an interesting topics which can be further explored on how the popular hashtag like #covid19 was being used to increase the popularity of another topic which actually unrelated. | 3,490.2 | 2021-07-31T00:00:00.000 | [
"Computer Science"
] |
IS1294 Reorganizes Plasmids in a Multidrug-Resistant Escherichia coli Strain
ABSTRACT The aims of this study were to elucidate the role of IS1294 in plasmid reorganization and to analyze biological characteristics of cointegrates derived from different daughter plasmids. The genetic profiles of plasmids in Escherichia coli strain C21 and its transconjugants were characterized by conjugation, S1 nuclease pulsed-field gel electrophoresis (S1-PFGE), Southern hybridization, whole-genome sequencing (WGS) analysis, and PCR. The traits of cointegrates were characterized by conjugation and stability assays. blaCTX-M-55-bearing IncI2 pC21-1 and nonresistant IncI1 pC21-3, as conjugative helper plasmids, were fused with nonconjugative rmtB-bearing IncN-X1 pC21-2, generating cointegrates pC21-F1 and pC21-F2. Similarly, pC21-1 and pC21-3 were fused with nonconjugative IncF33:A−:B− pHB37-2 from another E. coli strain to generate cointegrates pC21-F3 and pC21-F4 under experimental conditions. Four cointegrates were further conjugated into the E. coli strain J53 recipient at high conjugation frequencies, ranging from 2.8 × 10−3 to 3.2 × 10−2. The formation of pC21-F1 and pC21-F4 was the result of host- and IS1294-mediated reactions and occurred at high fusion frequencies of 9.9 × 10−4 and 2.1 × 10−4, respectively. Knockout of RecA resulted in a 100-fold decrease in the frequency of plasmid reorganization. The phenomenon of cointegrate pC21-F2 and its daughter plasmids coexisting in transconjugants was detected for the first time in plasmid stability experiments. IS26-orf-oqxAB was excised from cointegrate pC21-F2 through a circular intermediate at a very low frequency, which was experimentally observed. To the best of our knowledge, this is the first report of IS1294-mediated fusion between plasmids with different replicons. This study provides insight into the formation and evolution of cointegrate plasmids under different drug selection pressures, which can promote the dissemination of MDR plasmids. IMPORTANCE The increasing resistance to β-lactams and aminoglycoside antibiotics, mainly due to extended-spectrum β-lactamases (ESBLs) and 16S rRNA methylase genes, is becoming a serious problem in Gram-negative bacteria. Plasmids, as the vehicles for resistance gene capture and horizontal gene transfer, serve a key role in terms of antibiotic resistance emergence and transmission. IS26, present in many antibiotic-resistant plasmids from Gram-negative bacteria, plays a critical role in the spread, clustering, and reorganization of resistance determinant-encoding plasmids and in plasmid reorganization through replicative transposition mechanisms and homologous recombination. However, the role of IS1294, present in many MDR plasmids, in the formation of cointegrates remains unclear. Here, we investigated experimentally the intermolecular recombination of IS1294, which occurred with high frequencies and led to the formation of conjugative MDR cointegrates and facilitated the cotransfer of blaCTX-M-55 and rmtB, and we further uncovered the significance of IS1294 in the formation of cointegrates and the common features of IS1294-driven cointegration of plasmids.
the cotransfer of bla CTX-M-55 and rmtB, and we further uncovered the significance of IS1294 in the formation of cointegrates and the common features of IS1294-driven cointegration of plasmids.
KEYWORDS 16S rRNA methylase, cointegrate, IS1294, recombination, extendedspectrum b-lactamases, ESBLs T he emergence and dissemination of antibiotic resistance is a major clinical problem that poses a serious threat to public health (1). Antibiotic resistance genes are associated with mobile genetic elements like plasmids, transposons, and integrons (2). Among them, plasmids play a key role as vehicles for resistance gene capture and subsequent dissemination (3). Plasmid interaction is important for the maintenance and conjugal transfer of plasmids, particularly the mobilization of nonconjugative plasmids (4). The fusion of nonconjugative plasmids and conjugative helper plasmids is often related to different recombination events, namely, homologous recombination and replicative transposition, facilitating the dispersal of resistance genes and the evolution of multidrug resistance (MDR) plasmids and extending the resistance profiles of cointegrate plasmids, which has raised wide concerns (5)(6)(7)(8)(9)(10)(11).
Insertion sequences IS26 and IS1294 are present in many antibiotic-resistant isolates and play critical roles in the diversity of the variable region of F33:A2:B2 plasmids carrying bla CTX-M-55 or bla CTX-M-65 (12). Three well-characterized fusion plasmids mediated by IS26 have been reported in clinical strains, namely, pSL131_IncA/C_IncX3, pD72C, and pSE380T (5)(6)(7). IS1294, a member of the IS91 family, is an atypical insertion sequence that lacks terminal inverted repeats, does not generate target site duplication, and transposes using rolling-circle replication (13). The IS1294-mediated formation of cointegrate plasmids is rarely reported. In our previous study, the bla CTX-M-55and rmtB-bearing sequence type 156 (ST156) Escherichia coli strain C21 from a chicken in China was characterized, and the ISEcp1 element located upstream from bla CTX-M-55 was found to be disrupted by IS1294 (14). Here, two plasmids, used as conjugative helper plasmids, were fused with the nonconjugative rmtB-carrying plasmid in strain C21 at high fusion frequencies, generating two conjugative cointegrates that could be further transferred into recipient E. coli strain J53 at high conjugation frequencies.
Consequently, the role of IS1294 in the formation of cointegrate plasmids was experimentally verified.
Sequence analysis of plasmids in C21. The bla CTX-M-55 -positive pC21-1 harbored an IncI2 replicon and typical IncI2-associated genetic modules responsible for plasmid replication, transfer, maintenance, and stability functions. Sequence analysis revealed that pC21-1 shared high degrees of genetic identity (99 to 100% identity at 97 to 99% coverage) with several known bla CTX-M -bearing IncI2 plasmids, including pHNY2, pHN1122-1, pHNAH46-1, and pHNLDH19, in E. coli strains isolated from different sources (Fig. S1A), and the ISEcp1 located upstream from bla CTX-M-55 in pC21-1 differed from the IncI2 plasmids mentioned above by the insertion of an IS1294 (Fig. 1A).
The multireplicon IncN-X1 plasmid pC21-2, with repE and pir genes, which are responsible for the replication initiation of IncN and IncX1, harbored the resistance genes rmtB, oqxAB, bla TEM-1b , floR, tet(A), strAB, sul1, sul2, aac(3)-IId, and aph(39)-IIa and a class 1 integron cassette array, dfrA12-orfF-aadA2, as well as mobile elements, including one IS1294 and five intact IS26 copies with no direct repeats (DRs) ( Table 1 and Fig. 1A). The fusion of segments in pC21-2 containing replication regions from the conjugative IncX1 plasmid pOLA52 in a swine E. coli strain and the classical IncN plasmid R46 in Salmonella enterica serovar Typhimurium (15,16) might be mediated by IS26 through homologous recombination ( Fig. 1A and Fig. S1A). A BLASTN search revealed that pC21-2 exhibited high homology to the bla NDM-1 -positive IncN-X1 plasmid p1108-NDM, FIG 1 The proposed mechanism of plasmid fusion. (A) Linear sequence comparison of two fusion plasmids, pC21-F1 and pC21-F2, with daughter plasmids pC21-1, pC21-2, and pC21-3. Colored arrows represent open reading frames, with blue, cyan, red, yellow, maroon, and gray arrows representing replicon genes, transfer-associated genes, resistance genes, mobile elements, stability associated genes, and hypothetical proteins, respectively. The shaded areas indicate 100% identity. (B) The proposed model for the IS1294-mediated formation of fusion plasmids. Plasmid names are shown in red on a gray background. Arrowheads indicate orientation. The cointegrates were brought about by intermolecular homologous recombination. Cointegrates pC21-F1 and pC21-F2 could subsequently be resolved into two plasmids identical to the original donor plasmids except for the excision of IS26-orf-oqxAB. Yellow arrows represent IS elements, and gray arrows represent hypothetical proteins. with 99.9% identity at 81% coverage; however, the main multidrug resistance regions of pC21-2 were almost identical with those of IncI1/ST136 pEC008 (accession number KY748190) (Fig. S1B) (17). The pC21-2 plasmid, without a transfer region, was not selftransmissible, which was determined by conjugation assays showing that no transconjugant was obtained after numerous attempts using the transformant TC21-2 carrying pC21-2 as the donor. pC21-3, without an antimicrobial resistance gene, belonged to IncI1/ST134 except for one nucleotide substitution (G!A) in the conjugative transfer gene trbA. BLAST analysis showed that pC21-3 exhibited 98.2 and 98.7% identity at 93 and 95% coverage with two conjugative helper plasmids, the nonresistant pSa27-HP (accession number MH884654) and the CTX-M-130-producing pSa44-CRO (accession number MH430883), recovered from Salmonella strains (8,9). pC21-4, a phage-like IncY plasmid without any antimicrobial resistance gene, had a single pO111 plasmid replicon and exhibited high homology to p1108-IncY in E. coli (accession number MG825379), with 99% identity at 93% coverage.
Identification of fusion plasmids. In previous work, we showed that two important resistance determinants, bla CTX-M-55 and rmtB, were present in separate plasmids in strain C21 and could be cotransferred into the recipient strain (14). In this work, three representative transconjugants, TC21-1, TC21-F1, and TC21-F2, were screened successfully by conjugation experiments using different antibiotics (Table 1). S1 nuclease pulsed-field gel electrophoresis (S1-PFGE) and Southern blot hybridization confirmed that bla CTX-M-55 and rmtB were located on the ;60-kb pC21-1 plasmid and the ;60-kb pC21-2 plasmid, respectively, in the parental strain C21. However, bla CTX-M-55 coexisted with rmtB on a single ;120-kb plasmid, pC21-F1, in TC21-F1, and rmtB was located on a single ;140-kb plasmid, pC21-F2, in TC21-F2. The pC21-F1 and pC21-F2 plasmids were larger than any plasmid in the original strain C21 (Fig. S2A). In view of the plasmid sizes, we proposed that pC21-F1 might be the recombinant product of pC21-1 (63,878 bp) and pC21-2 (62,933 bp) and pC21-F2 might be the recombinant product of pC21-2 (62,933 bp) and pC21-3 (87,627 bp). To further probe the sources of fusion plasmids, the complete sequences of plasmids in the transconjugant strains were obtained by WGS, combining the Illumina short-read and PacBio long-read sequencing data.
Based on the sequence analysis detailed above and the observed structure, we proposed the model of cointegrate formation shown in Fig. 1B. In the model, the IS1294 element in non-self-transmissible pC21-2 (IncN-X1) attacked another IS1294 in the conjugative pC21-1 or pC21-3, resulting in the occurrence of cointegrates. Linearized pC21-1 or pC21-3 was incorporated into pC21-2, creating the cointegrates pC21-F1 and pC21-F2, and then, two same-orientation IS1294 elements surrounded the insertion fragment. The sequences spanning the cointegrate junctions were confirmed using primers P1-P2 and P3-P4 for pC21-F2 and P2-P5 and P3-P6 for pC21-F1 and sequences of PCR amplicons corresponding to the result of WGS ( Fig. 2A and B). A dynamic process occurred between cointegrates and daughter plasmids in transconjugants, which was identified by PCR and sequencing, and several amplicons of combinations of primers to detect the flanking sequence of IS1294 were obtained (Fig. 2B). IS1294 lacking terminal inverted repeats does not generate DRs of the target site and transposes by rolling-circle replication (13). Although DRs of IS1294 surrounding the insertion fragments were not detected in this study, IS1294-mediated intermolecular recombination was likely to be related to the formation of cointegrates.
The fusion frequency of cointegrate pC21-F1 from pC21-1 and pC21-2 was 9.9 Â 10 24 transconjugants per cefotaxime-resistant transconjugant (Table S3). The fusion frequency of pC21-F2 could not be determined because pC21-3 did not have an antibiotic resistance marker (Table 1). However, the number of transconjugants carrying pC21-F2 from the parental strain was significantly higher than that of transconjugants carrying pC21-F1 from the parental strain in conjugation assays. Based on these data, we speculated that the fusion frequency of pC21-F2 was higher than that of pC21-F1, which was further confirmed by a conjugation assay using E. coli C21 as the donor and E. coli C600 as the recipient. The results of the assay showed that 80 randomly selected transconjugants screened by rifampin and amikacin carried the rmtB gene but not bla CTX-M-55 . The fusion frequency of pC21-F4 (IncI2-F33:A2:B2) was 2.1 Â 10 24 transconjugants per cefotaxime-resistant transconjugant (Table S3). Comparative assays were performed in wild-type and recombination-deficient (DrecA) donor strains, with the results showing that host-and IS1294-mediated reactions were involved in the formation of cointegrate plasmids and that knockout of recA resulted in a 100-fold decrease (from 3.0 Â 10 24 to 4.8 Â 10 26 ) in the frequency of plasmid reorganization (Table S4).
Stability assays in vitro showed that ,10% losses of fusion plasmids pC21-F1 and pC21-F2 in transconjugants occurred from day 1 to day 15, which suggested that fusion plasmids were stable in E. coli for at least 15 days of passage in an antibiotic-free environment (Fig. S3). A total of 40 amikacin-and cefotaxime-susceptible colonies from TC21-F1 were detected among 1,800 colonies screened at 0, 3, 6, 9, 12, and 15 days (100 colonies screened at six time points in each of three independent experiments). S1-PFGE showed that randomly selected colonies with the resistant phenotype originating from TC21-F1 harbored a single fusion plasmid, pC21-F1 (data not shown), suggesting that the fusion plasmid pC21-F1 was not easily lost and cleaved. However, 123 amikacin-susceptible colonies from TC21-F2 were detected among 1,800 colonies screened, and 2 of 14 colonies carried the daughter plasmids at 12 and 15 days (Fig. S4). As shown in the electrophoretic bands of lane 1 presented in Fig. S4B, the fusion plasmid pC21-F2 and its daughter plasmids coexisted in transconjugant TC21-F2 at 12 days, suggesting that the cointegrate and daughter plasmids may be in a dynamic process.
DISCUSSION
In an exploration of the evolutionary process of F33:A2:B2 plasmids, Wang et al. found that several IS26 and IS1294 elements were interspersed in MDR regions of F33: A2:B2 plasmids carrying bla CTX-M-55 or bla CTX-M-65 , causing diversity in the variable regions of the plasmids (12). The IS26-mediated formation of fusion plasmids in transconjugants has been well described in the hybrid resistance plasmid pD72C and the virulence and resistance plasmid pSE380T (5,7). However, the IS26-mediated fusion plasmid pSL131_IncA/C_IncX3 was identified in the parental strain, and its daughter plasmid pSL131T_IncX3 carrying bla NDM-1 was detected in the corresponding transconjugant (6). The ISPa40-mediated fusion plasmid pSa44-CIP-CRO was also illustrated in the parental strain, and two corresponding transconjugants selected in eosin methylene blue agar supplemented with different agents harbored the fusion plasmid pSa44-CIP-CRO and its daughter plasmid pSa44-CRO (8). In the present study, three transconjugants were obtained from the parental strain C21 under selective pressure by different agents; one of them carried a daughter plasmid, and the other two carried different fusion plasmids mediated by IS1294. Two cointegrates, pC21-F1 and pC21-F2, were not observed in the parental strain by S1-PFGE and complete sequencing; however, a dynamic process occurred between cointegrate and daughter plasmids in the transconjugants. The different states in the cointegrate plasmids between the original strain and transconjugants may be due to the abundance of cointegrate plasmids. Although the cointegrate plasmids may be in low abundance in the original strain harboring daughter plasmids, they were in high abundance after antibiotic drug selection. Taken together, the findings indicated that the cointegrate plasmid was easily selected and disseminated under pressure by different agents. Furthermore, the cointegrate plasmids mediated by IS elements were ubiquitous, and the replicon typing of the daughter plasmids from fusion plasmids was diverse. Studies have demonstrated that the cointegrates were formed between two DNA molecules in a process mediated by IS26 through a replicative transposition mechanism (7,18,19). However, in the present study, IS1294-mediated intermolecular recombination was involved in the formation of cointegrates.
The differences in the abundance of fusion plasmids pC21-F1 and pC21-F2 between the parental strain and transconjugants were consistent with their conjugation frequencies. A 4 Â 10 5 -fold increase in the conjugation frequency of pC21-F1 from transconjugant TC21-F1 to recipient E. coli J53 was noted when compared with the conjugation frequency of pC21-F1 from the parental strain C21 to recipient E. coli C600 (from 7.1 Â 10 29 to 2.8 Â 10 23 ), and a 1.2 Â 10 5 -fold increase in the conjugation frequency for pC21-F2 was noted (from 2.6 Â 10 27 to 3.2 Â 10 22 ). Similar conjugation frequency results were obtained for cointegrate plasmids pC21-F3 and pC21-F4 (Table S2). These findings indicated that pC21-1 and pC21-3 may act as conjugative helper plasmids, providing nonconjugative plasmids pC21-2 (IncN-X1) and pHB37-2 (IncF33:A2:B2) with self-transmission capacity through the formation of cointegrates. In addition, their activity may lead to the rapid transmission of resistance genes in nonconjugative plasmids under selection by antibiotics, as well as promoting the evolution of MDR plasmids.
The average fusion frequencies were 9.9 Â 10 24 and 2.1 Â 10 24 , respectively, for cointegrates pC21-F1 and pC21-F4, which resulted from host-mediated homologous recombination and IS1294-mediated intermolecular reactions. Comparison analysis performed in the wild-type and recombination-deficient (DrecA) donor strains showed that knockout of recA resulted in a 100-fold decrease (from 3.0 Â 10 24 to 4.8 Â 10 26 ) in the fusion frequency of cointegrate pC21-F1, which suggested that IS1294-mediated reactions, with an average transposition frequency of 4.8 Â 10 26 for pC21-F1, and intrinsic homologous recombination played major roles in plasmid reorganization. The frequency of cointegrate formation mediated by IS26 between pRMH762 and the construct R388::IS26 was 1.8 Â 10 24 per R388::IS26 transconjugant in transposition experiments (18). The high fusion efficiency mediated by IS1294 or IS26 highlighted the important role of IS1294 and IS26 in the generation of cointegrate plasmids and the dissemination of resistance genes.
In summary, this study characterized the complete genetic features of four plasmids and elucidated the mechanism underlying the reorganization of fusion plasmids. To the best of our knowledge, this is the first description of the role of IS1294 in the formation of fusion plasmids derived from three plasmids in an original strain. This study provided insight into the formation and evolution of cointegrates under the selective pressure of one or more antimicrobials, which poses a serious threat to public health. Therefore, more prudent use of antimicrobial agents in clinical practice, particularly the use of antibiotic combinations, is important to avoid the occurrence, dissemination, and further evolution of MDR fusion plasmids.
MATERIALS AND METHODS
Bacterial strain. Multidrug-resistant ST156 E. coli strain C21, carrying two important resistance determinants, bla CTX-M-55 and rmtB, in separate plasmids was characterized from a chicken in China in September 2009 as described in our previous study (14).
Conjugation, transformation, S1-PFGE, and Southern hybridization. E. coli C21 as the donor and E. coli C600 (resistant to rifampin) as the recipient were used in conjugation experiments. Three representative transconjugants were screened on MacConkey agar supplemented with rifampin (450 mg/liter), cefotaxime (2 mg/liter), and/or amikacin (20 mg/liter). The conjugation frequencies were calculated as the number of transconjugants per donor. The plasmids in the donor strain C21 were transformed into E. coli DH5a by electroporation; the rmtB-bearing transformant TC21-2 was selected on LB agar supplemented with amikacin (20 mg/liter), and transformant TC21-3, harboring a single pC21-3 without any antibiotic resistance genes, was selected on antibiotic-free LB agar. Plasmid profiles in the donor strain, transconjugants, and transformants were subjected to S1-PFGE and Southern blot hybridization with bla CTX-M-55 , rmtB, and trbA for the IncI1 plasmid as probes.
WGS and bioinformatics analysis. To explore the genetic basis of plasmid size alteration in the donor and transconjugant strains, total genomic DNA was extracted from C21 and the plasmids in transconjugants TC21-F1 and TC21-F2 using the Omega bacterial DNA kit (Omega Bio-Tek, USA) and the Qiagen plasmid midi kit (Qiagen, Hilden, Germany) and subjected to whole-genome sequencing (WGS) using Illumina NovaSeq 6000 and the PacBio RSII single-molecule real-time (SMRT) platforms. The longread data were assembled de novo using the hierarchical genome assembly process (HGAP) with the SMRT Analysis version 2.3.0 software package for the PacBio RSII platform, in combination with complementary short reads (21). The plasmid sequences were initially annotated using the Subsystem Technology (RAST version 2.0) server (http://rast.nmpdr.org) and curated manually using the BLASTn and BLASTp algorithms (http://blast.ncbi.nlm.nih.gov/blast). The plasmid replicon genotype and resistance genes were identified by using the CGE server (https://cge.cbs.dtu.dk/services/). The comparative analysis and plasmid maps were generated using Easyfig and BRIG (22,23).
Identification of circular intermediates carrying oqxAB. Reverse PCR was performed to detect the potential circular form of the IS26-flanked transposon carrying oqxAB in the parental strain C21, transconjugants, and transformants. PCR with TaKaRa Taq DNA polymerase was carried out with an initial denaturation at 94°C for 5 min, followed by 30 cycles of amplification (denaturation at 94°C for 30 s, annealing at 57°C for 30 s, and extension at 72°C for 2 min) and a final extension at 72°C for 10 min. To further assess the excision of IS26-orf-oqxAB-IS26, a conjugation assay was performed under rifampin and amikacin selection, and oqxAB was identified in 80 randomly selected transconjugants by PCR using the oqxAB-F/R primers listed in Table S1.
Recombination and conjugation frequencies of fusion plasmids. To investigate the ability to form the fusion plasmid pC21-F1 from conjugative bla CTX-M-55 -positive pC21-1 (IncI2) and nonconjugative rmtB-positive pC21-2 (IncN-X1), recombination frequencies were identified by conjugation assay using strain C21 as the donor and E. coli C600 as the recipient. The recombination frequency was calculated as the number of transconjugants carrying fusion plasmid pC21-F1 per cefotaxime-resistant transconjugant. The recombination frequency for the fusion plasmid pC21-F2 from conjugative pC21-3 (IncI1) and pC21-2 could not be determined because of the lack of a selective marker for pC21-3.
To explore the role of IS1294 in plasmid reorganization, a comparative analysis between wild-type and recombination-deficient (DrecA) donor strains was performed. Both pC21-2 and pC21-1 were transformed into E. coli C600 and recombination-deficient (DrecA) E. coli C600, respectively, generating two corresponding transformants, and then the transformants as the donor and E. coli J53 (DrecA) as the recipient were used in conjugation experiments. Transconjugants carrying cointegrate pC21-F1 were selected on LB agar plates supplemented with cefotaxime (2 mg/liter) and amikacin (20 mg/liter). All the transformants and transconjugants were confirmed by the presence of bla CTX-M-55 , rmtB, and fusion points by PCR and Sanger sequencing. Recombination frequencies were calculated as the number of transconjugants carrying pC21-F1 per cefotaxime-resistant transconjugant.
To assess the self-transferability of the fusion plasmids pC21-F1 and pC21-F2 in transconjugants, conjugation assays were further performed using E. coli C600 transconjugants TC21-F1 and TC21-F2 as the donor and azide-resistant E. coli J53 as the recipient, and conjugation frequencies were calculated as the number of transconjugants per donor. All the transformants and transconjugants were verified by PCR and antimicrobial susceptibility testing. Plasmid profiles in the transconjugant and transformant strains were subjected to S1-PFGE and Southern blot hybridization, and the fusion points were detected by PCR and sequencing. The sequences and approximate positions of the primers are shown in Table S1 and Fig. 2A and C.
Plasmid stability. The stability of fusion plasmids pC21-F1 and pC21-F2 was assessed as described previously (24). In brief, transconjugants TC21-F1 and TC21-F2 were propagated by serial transfer for 15 days of passage. The culture broths were serially diluted in 0.85% saline and plated onto antibioticfree LB agar at 0, 3, 6, 9, 12, and 15 days. A total of 100 colonies were randomly chosen and plated onto LB agar supplemented with amikacin and cefotaxime for TC21-F1 and with amikacin for TC21-F2, and then PCR was performed to confirm the presence of bla CTX-M-55 and rmtB in TC21-F1 colonies and rmtB and IncI1 replicon types for TC21-F2 colonies. The numbers of colonies were calculated at six time points in each of three independent experiments. The plasmid profiles of 14 randomly selected colonies from TC21-F1 or TC21-F2 were further identified using S1-PFGE. In all instances, the patch counts were consistent with the colony counts.
SUPPLEMENTAL MATERIAL
Supplemental material is available online only. SUPPLEMENTAL FILE 1, PDF file, 0.7 MB.
ACKNOWLEDGMENTS
This work was supported by grants from the Foundation of Henan Educational Committee (grant number 21A230014) and the National Natural Science Foundation of China (grant number 31702295).
We declare no conflicts of interest. | 5,346.6 | 2021-10-06T00:00:00.000 | [
"Medicine",
"Biology"
] |
Space-point calibration of the ALICE TPC with track residuals
In the upcoming LHC Run 3, starting in 2021, the upgraded Time Projection Chamber (TPC) of the ALICE experiment will record minimum bias Pb-Pb collisions in a continuous readout mode at an interaction rate up to 50 kHz. This corresponds to typically 4-5 overlapping collisions during the electron drift time in the detector. Despite careful tuning of the new quadruple GEM-based readout chambers, which fulfill the design requirement of an ion backflow below 1%, these conditions will lead to space-charge distortions of several centimeters that fluctuate in time. They will be corrected via a calibration procedure that uses the information of the Inner Tracking System (ITS), which is located inside, and the Transition Radiation Detector (TRD) and Time-Of-Flight system (TOF), located around the TPC, respectively. By using such a procedure the intrinsic track resolution of the TPC of a few hundred micrometers can be restored. The required online tracking algorithm for the TRD, which is based on a Kalman filter, is presented. The procedure matches extrapolated ITS-TPC tracks to TRD space-points utilizing GPUs. Subsequently these global tracks are refitted neglecting the TPC information. The residuals of the TPC clusters to the interpolation of the refitted tracks are used to create a map of space-charge distortions. Regular updates of the map compensate for changes in the TPC conditions. The map is applied in the final reconstruction of the data. First performance results of the tracking algorithm will be shown.
Introduction
A Large Ion Collider Experiment (ALICE) is the dedicated heavy-ion experiment at the Large Hadron Collider (LHC) at CERN, designed to study the physics of strongly interacting matter at extreme energy densities and temperatures. The ALICE apparatus consists of a central barrel enclosed by a solenoidal magnet and a muon spectrometer in the forward region. During the second LHC data taking period between 2015 and 2018 (commonly denoted as Run 2) the main tracking detector inside the central barrel, the TPC, was affected by large local distortions induced by space charge [1]. While the space-charge distortions were small in the bulk of the detector and O(cm) only in localized regions they still needed to be corrected for in order to preserve the full detector performance.
In Run 3, ALICE will record Pb-Pb collisions in a continuous mode rather than a triggered, allowing for the readout of the full interaction rate up to 50 kHz, which is a factor six higher than the rate of Run 2. The MWPC-based readout chambers will be replaced by quadruple GEM-based chambers with an ion backflow below 1% [2]. The distortions are expected to reach 10-15 cm and be present in the whole TPC volume.
In addition to detector upgrades, an entirely new software framework (called O 2 ) is being developed, where the functionalities of the data acquisition, High-Level Trigger (HLT), and the offline systems of the previous data collection runs will be combined [3]. The spacecharge distortion correction, which will be introduced in Sec. 2, therefore needs to be ported to O 2 and new requirements need to be incorporated. An important change is the revised readout of the TRD [4] which necessitates a new tracking procedure that is described in Sec. 3.
Calibration procedure
The space-point calibration of the TPC utilizes the external detectors ITS on the inside and TRD and TOF on the outside. Since they are not affected by space-charge distortions, they can provide reference cluster positions for global tracks inside the TPC. The calibration procedure is illustrated in Fig. 1. It consists of the following steps: 1. Track seeding and following in the TPC is done with relaxed tolerances.
2. Tracks are first matched to the ITS on one side, subsequently to the TRD and TOF on the other, again with relaxed tolerances.
3. Two independent refits are performed based on information from ITS and from TRD and TOF, respectively. In this step the TPC cluster information is ignored, but their association to the ITS-TRD-TOF tracks is kept.
4. The residuals between the distorted TPC clusters and the weighted ITS-TRD-TOF refits are collected for sub-volumes of the TPC.
5. For each sub-volume a vector representing the distortion in x, y and z is calculated. 6. A carefully tuned kernel smoother is applied to obtain a stable map of distortions without neglecting real discontinuities. One distortion map requires at least 450k matched tracks. In Run 2 one map was created about every 40 minutes. With the increased interaction rate and continuous readout in Run 3 around 1 minute of data taking will suffice. The space-charge distortion maps correct the average TPC cluster positions. Fluctuations on a short time scale cannot be accounted for with the long integration time. They are covered by a second calibration procedure where the integrated charge on the readout pads will be sampled in intervals of about 1 ms to allow for additional maps with a higher time granularity. The present work focuses on the correction of the average distortions. The TPC seeding and track following required for step 1 is discussed in [5]. In the following the matching to the TRD which is needed for step 2 is described.
TRD tracking algorithm
The TRD cannot be operated in continuous mode in Run 3 but will continue to be operated in triggered mode [4]. In order to utilize the available bandwidth in an optimal way, TRD readout will be limited to tracklets calculated in the detector Front-End Electronics (FEE). In Runs 1 and 2 these tracklets were used to generate triggers on high transverse momentum processes [6]. The offline reconstruction was based on the charge cluster data which will not be available anymore in Run 3. Therefore, a new tracking algorithm based on tracklets is needed.
The TRD is segmented into 522 chambers. In azimuth the TRD follows the same segmentation as the TPC in 18 sectors. In the longitudinal direction 5 stacks are installed which each in turn consist of 6 layers in radial direction. A chamber consists of a radiator, followed by a 3 cm long drift region filled with Xe-CO 2 and, in the same gas volume, a MWPC with pad readout. The signal on the readout pads is sampled in time bins of 100 ns. In the ideal case a traversing charged particle generates one cluster per time bin in a chamber. The tracklets comprise results of a straight line fitỹ(x) = y + (x − x 0 ) · d y to these clusters, where y is the transverse offset with respect to the chamber center at the radius x 0 (close to the readout pad plane) and d y is the transverse deflection for a cluster over the full drift length. Additionally the longitudinal position z (along the beams) in form of the pad row on which the tracklet was reconstructed and the PID information based on the energy loss in separate time windows in the detector are contained in a tracklet.
ITS-TPC matched tracks are extrapolated to the first TRD layer. All tracklets inside a search road based on the covariance matrices of the tracks are considered for matching. They are sorted depending on their χ 2 with respect to the track. Only the offset of the straight line fit y and the longitudinal position are used for the χ 2 calculation, because the resolution of the tracklet inclination in azimuth (∝ d y ) is much worse compared to the precision for the inclination of the extrapolated tracks due to their short lever arm. The tracklet inclination is instead used as criterion for exclusion if it deviates by more than 4σ from the track inclination. Multiple hypotheses can be kept per layer, but only the best N candidates hypotheses are propagated as seeds to the next layer in order to limit combinatorics. The matching algorithm is depicted in Fig. 2.
The TRD tracklets do not contain information on the quality of the fit. The angular and position resolution have been parametrized in Fig. 3. A second order polynomial is fitted to the measured d y as a function of the inclination of the ITS-TPC track in the azimuthal plane sin ϕ Trk . The resolution σ d y is defined as the width of the measured d y with respect to the fit for given track inclination. It can be directly translated into an angle with the fixed length of the drift region and is below 1 • for tracks with ϕ Trk ≈ Ψ Lorentz . Since both resolutions depend on the inclination of the track associated to the tracklet, they have to be calculated on-the-fly during the matching procedure for the χ 2 calculation, possibly multiple times if more than one track points in the same direction.
Due to misalignment not all tracklets are necessarily reconstructed at the same radius. This is handled by first extrapolating the track to the average radius of a chamber taking into account inhomogenities of the magnetic field and energy loss, and subsequently performing fast linear extrapolations for χ 2 calculations. Before the tracks are updated with the best matching tracklets they are again propagated with maximal precision to the exact radius of the matched tracklet.
Additional complications arise from the fact that the TRD readout pads are tilted by ±2 • in order to improve the resolution in longitudinal direction. The errors in azimuth and longitude are therefore correlated and y has to be corrected for based on the z position of the extrapolated track.
Performance and benchmark results
Efficiency and purity distributions for the TRD tracking are shown in Fig. 4. Since a minimum of two tracklets are required for the TPC space-point calibration, the efficiency is defined as the fraction of tracks which are matched to at least 2 correct (based on their MC label) tracklets. Below p T ≈ 1.5 GeV/c the efficiency for the new TRD tracking algorithm deteriorates compared to the old tracking, which is based on cluster data. This is due to the fact that the tracklets in the FEE are allowed to span over a maximum of two pads, introducing a selection on the deflection inside the drift region. This in turn translates into a position dependent selection on the transverse momentum which is reflected as the observed smeared decrease of efficiency. Primary tracks with p T < 300 MeV/c do not reach the TRD and are neglected by the tracking algorithm. At high-p T the new tracking algorithm yields results compatible to the old offline tracking.
The purity is defined as the fraction of tracks with at least two attached tracklets which in addition do not have any fake tracklets attached. In pp collisions the purity for both the old and the new tracking algorithm is practically 1, while it decreases to about 0.9 for low momentum tracks in Pb-Pb collisions where the track density is much higher.
For the TPC space-point calibration a lower efficiency can be compensated by accumulating more statistics. The purity on the other hand is improved by neglecting central Pb-Pb events, where the track density is very high, when creating the distortion maps.
Since the TRD tracking will run synchronously to the data taking the processing time is very important. Parallelization can easily be achieved over the tracks and is implemented both on CPUs via OpenMP and on NVIDIA GPUs via CUDA. The processing time per event as a function of the number of tracks which reach the TRD is shown in Fig. 5. The most central events typically contain about 2000 ITS-TPC tracks which reach the TRD. This number is not yet large enough to profit strongly from GPU utilization. For Run 3 the data will not be processed on a per-event basis, but in time frames of 10-20 ms corresponding to about 500-1000 collisions. The number of tracks to be processed is therefore much higher and large speedup on GPUs compared to CPUs is expected as can be deduced from the extrapolations shown in Fig. 5.
Next steps
Currently the simulation chain of the TRD in the new O 2 framework for Run 3 is nearly complete. As soon as it is finished, the TRD tracking can be plugged into O 2 . Furthermore the tracking algorithm currently supports only a reconstruction on a per-event basis. Support for tracking in time frames still needs to be added. Note that this does not increase the complexity, as the TRD will continue to be operated in triggered mode and therefore the association of tracklets to a certain bunch crossing is a priori known. After matching to ITS, the TPC tracks also contain a time stamp which connects them to a bunch crossing. Hence, only the sizes of the input arrays for tracklets and tracks increase. And for the matching only those tracklets which belong to the same interaction as the respective ITS-TPC track will be considered.
Conclusions
The new matching between ITS-TPC tracks and TRD tracklets shows a comparable performance with the offline tracking of Run 1 and 2, which was based on the TRD raw data. At the same time it fulfills the computing speed requirements for Run 3. It was tested successfully both on CPUs and NVIDIA GPUs. The CPU version was activated in the HLT during data taking in 2018 both in pp and in Pb-Pb collisions. Built on the ALICE GPU framework [7] the algorithm is contained in a single source code file and supports wrappers to different APIs, i.e. there is no code duplication needed for the CPU and the GPU version. The remaining steps specified in Sec. 2, which are required to create the space-charge distortion maps for the TPC, have already been ported to the new O 2 framework. As soon as the TRD simulation chain is ready the full calibration can run in the Run 3 software framework. | 3,229.8 | 2020-03-06T00:00:00.000 | [
"Physics"
] |
The Role of Changes in Extracellular Matrix of Cartilage in the Presence of Inflammation on the Pathology of Osteoarthritis
Osteoarthritis (OA) is a degenerative disease that affects various tissues surrounding joints such as articular cartilage, subchondral bone, synovial membrane, and ligaments. No therapy is currently available to completely prevent the initiation or progression of the disease partly due to poor understanding of the mechanisms of the disease pathology. Cartilage is the main tissue afflicted by OA, and chondrocytes, the sole cellular component in the tissue, actively participate in the degeneration process. Multiple factors affect the development and progression of OA including inflammation that is sustained during the progression of the disease and alteration in biomechanical conditions due to wear and tear or trauma in cartilage. During the progression of OA, extracellular matrix (ECM) of cartilage is actively remodeled by chondrocytes under inflammatory conditions. This alteration of ECM, in turn, changes the biomechanical environment of chondrocytes, which further drives the progression of the disease in the presence of inflammation. The changes in ECM composition and structure also prevent participation of mesenchymal stem cells in the repair process by inhibiting their chondrogenic differentiation. This review focuses on how inflammation-induced ECM remodeling disturbs cellular activities to prevent self-regeneration of cartilage in the pathology of OA.
Introduction
Osteoarthritis (OA) is a debilitating disease, which primarily affects joints, especially load-bearing areas such as hips and knees. It is characterized by pain and degenerative changes in the tissues surrounding those areas. There are no current therapies which can completely prevent the progression of the disease. Some of the main factors that drive the progression of OA are chronic inflammation and gradual structural changes within the joint tissues [1]. Unlike the general concept of OA being a degenerative disease, the remodeling processes are highly active throughout each stage of the disease [2]. During the active remodeling, however, the quality of extracellular matrix (ECM) is compromised due to the quick turnover rate and atypical composition of the newly synthesized ECMs [3]. Among many factors, inflammatory cytokines and proteases are main contributors which mediate the changes in the quality of ECM [2]. As a consequence of the microenvironmental changes, the altered ECM synthesis in the presence of inflammation, in turn, further disturbs the functions of the cells. Therefore, there is a constant cycle of evolution between the cells and their newly synthesized ECM, forming a positive feedback loop, which drives the progression of OA. In this review, we will focus on the interplay between ECM and cellular functions under inflammation, and how these factors are responsible for the progression of OA. An understanding of the complexity of the interplay between the cells and their microenvironment may provide a sound basis for developing suitable therapies to treat osteoarthritis.
Changes in Extracellular Matrix Synthesis during Osteoarthritis
Progression of OA can be characterized by changes in ECM composition and structure. Natural, healthy cartilage matrix is mainly composed of collagen type II which provides tensile support for the tissue. Aggrecan, a negatively charged 2 BioMed Research International proteoglycan that attracts water molecules, provides the compressive resistant and shock absorbing capability of cartilage under loading [2]. It has been shown that during OA, there are sequential events that affect the integrity of homeostatic ECM; aggrecan content is decreased, while collagen content is increased [2,3,5]. This change in ECM composition predisposes the tissue for mechanical fault resulting in significantly altered mechanical environments of the cells within the cartilage matrix.
In the initial stages of OA, proliferative chondrocytes form clusters in order to adjust to the changing microenvironments [2]. This alteration of cellular configuration also changes the quantity and composition of the ECM secreted by the cells. It has been shown that there is a significant downregulation of aggrecan gene expression at the onset of OA in a rat model [1], and this finding agrees with markedly low proteoglycan synthesis, observed in human OA samples with normal appearance [6]. The changes of aggrecan, which exists in a nonaggregated form in OA, alter the permeability and thus mechanical compliance of the matrix [2,7]. The reduced proteoglycan content decreases compressive modulus of cartilage and consequently exposes the tissue to greater strains when exposed to mechanical stress.
Unlike the decreased production of proteoglycan, collagen synthesis rate increases in the early stages of OA and remains elevated [8]. In addition to the increased ratio of collagen/aggrecan synthesis, the composition of collagen type has been also shown to change from collagen type II to type I [9]. Healthy cartilage matrix mainly contains collagen type II, while collagen type I is mainly found in subchondral bone tissue [2,3,10]. The compositional change affects the mechanical stability of the ECM network [10]. Compared to collagen type I, type II chains contain a higher content of hydroxylysine as well as glucosyl and galactosyl residues which mediate the interaction with proteoglycans [11]. Therefore, the decreased collagen type II content during OA inevitably undermines the integrity of ECM networks formed by collagen and proteoglycan. Furthermore, Silver et al. showed that the elastic modulus, due to shortened collagen fibril lengths, decreases with an increased extent of OA [12]. As a result of these changes, the osteoarthritic cartilaginous tissues exhibit a reduced ability to store elastic energy, and this, in turn, leads to fibrillation and fissure formation [12]. Figure 1 shows the structural and compositional changes in cartilage in a monoiodoacetate-(MIA-) induced arthritis model in rats. Although the animal model induces significantly accelerated cartilage degeneration as compared to typical human osteoarthritis, it depicts similar structural and compositional changes in cartilage exhibited in the pathogenesis of OA [13]. On day 11 post-MIA injection, the overall cartilage damage was assessed at Grade 2-3 according to Osteoarthritis Research Society International's (OARSI's) histopathology grading system showing cartilage lesion formation, articular surface fissurization, subchondral bone advancement, and bone marrow edema/cyst [14]. An area exhibiting chondrocyte disorientation without vertical fissure development was chosen to observe changes in cartilage matrix. In this area, nonchondrocytic collagen type I is present in the cartilage matrix of the OA tissue, whereas it is negligible in the control (Figure 1(B)). These changes in the structure and composition of ECM progressively alter the biological and mechanical microenvironments that significantly modulate cellular activities as described later in this review.
Inflammation-Induced Extracellular Matrix Changes in Osteoarthritis
ECM changes in cartilage can be attributed to multiple factors during the progression of OA. Among them, inflammation plays an active role affecting both quantity and quality of ECMs. Mechanical damage and/or age-related wear/tear are thought to trigger systematic inflammatory responses in all tissues surrounding the joint including articular cartilage, synovial membrane, subchondral bone, and ligaments [2,15]. Chondrocytes, the only cell type residing in cartilage, respond to such inflammatory conditions and participate in the catabolic activities that ultimately lead to the degradation of cartilaginous ECM [16]. An animal model of MIA-induced arthritis showed that the sequential upregulation of inflammatory genes is associated with all levels of cartilage damage throughout the progression of OA [1]. These upregulated inflammatory genes form a positive feedback loop, mainly through the NF-B signaling pathway, as the severity of the cartilage damage progresses [17]. In fact, it was observed that chondrocytes in human arthritic cartilages also constitutively exhibit elevated activities of NF-B [18]. Factors that contribute to the catabolic processes in OA include interleukin 1 (IL-1 ), tissue necrosis factor-(TNF-), IL-12, IL-15, and various associated chemokines [19][20][21][22][23]. These inflammatory factors were shown to significantly increase the expression of matrix degrading proteins including matrix metalloproteinases (MMPs) (i.e., MMP-1 and MMP-13) and various types of a disintegrin and metalloproteinase with a thrombospondin type 1 motif (ADAMTS) (i.e., ADAMTS 1,4,5) in chondrocytes [1,[24][25][26][27][28][29][30]. For example, an increase in cell clustering, a typical morphological feature of chondrocytes in the early stage of OA, was observed with an increase in MMP-13 expression [31]. The receptor for advanced glycation end products (RAGE), which is increased in OA articular chondrocytes, was also shown to stimulate MAP kinase and NF-B activities that, in turn, increased the production of MMP-13 and propagated the catabolism of the cartilage matrix [32,33]. The degenerative activities of matrix degrading proteins are intensified by the elevated level of nitric oxide (NO), a molecule which is also upregulated by inflammatory proteins in chondrocytes. NO, upregulated by the transcriptional activity of NF-B, perpetuates the chronic inflammation that enhances matrix degradation and mediates apoptosis of chondrocytes by creating oxidative environments [34][35][36]. In a canine model of OA, the use of a NO inhibitor reduced the degenerative changes in cartilage, possibly demonstrating the critical role of NO in the progression of OA [34].
Concurrently with matrix degradation, the inflam- Figure 1: Changes in the extracellular matrix structure and composition of cartilage afflicted by osteoarthritis (OA). Experimental OA was induced by intra-articular injection of monoiodoacetate (MIA) similar to the previously described protocol using a rat model [4]. OA induced rats were sacrificed at day 11, and the medial condyles of the arthritic knees (A (c-d); B (e-h)) were histologically (H&E staining (A)) and immunohistologically (collagen type I (B (a) and (e)) and type II ( Figure B Consecutive sections of the healthy and OA cartilages were stained using monoclonal antibodies for collagen type I or type II. An increase in intensity for collagen type I is observed in the OA cartilage, while it is not present in the control cartilage. Collagen type II is readily observed for both the healthy and OA cartilage. (B) (b, d, f, and h) are phase-contrast images of (B) (a, c, e, and g), respectively, to reveal tissue morphologies. transcription factors that mediate chondrocytic ECM synthesis, such as transforming growth factor (TGF-), sex determining region Y-box 9 (SOX9), insulin-like growth factor (IGF), and connective tissue growth factor (CTGF), is also responsible for suppressing the anabolic activities of chondrocytes [1,37,38]. Taken together, these results demonstrate the significant influence of inflammatory mediators in the progression of OA by altering the homeostasis of cartilage ECM.
Another matrix component which is found in increased concentrations in synovial fluid during OA is Tenascin-C (TN-C), an ECM glycoprotein. Elevated levels of TN-C have been suggested to induce inflammatory mediators and promote ECM degradation in OA patients [39]. Although TN-C is highly expressed during embryogenesis, its presence is minimal in healthy adult tissues. Its expression during OA is, however, highly upregulated [40,41]. The elevated concentration of TN-C causes a significant effect in the catabolism of the cartilage, resulting in degradation of ECM [39,40]. Additionally, biglycan fragments in articular cartilage and meniscus and fibronectin fragments in hip and knee synovia have also been found in elevated levels as OA progresses [42][43][44]. Both fragmented biglycan and fibronectin exhibit proinflammatory effects through the activation of toll-like receptors [45,46]. Overall, the combination of inflammationinduced upregulation of matrix-degrading proteins, downregulation of chondrocytic ECM synthesis, and accelerated matrix degradation due to fragmented inflammatory ECMs, promotes the progression of disease.
Alteration in Biomechanical Environments during Osteoarthritis
The changes in altered ECM synthesis and elevated activities of matrix degrading proteins drastically change the mechanical properties of cartilage, which further intensifies the destructive processes associated with OA [47]. Initially, an increase in cartilage thickness is observed by hyperproliferative chondrocytes before noticeable surface fibrillation occurs [48]. The highly proliferating chondrocytes produce greater amount of aggrecan that leads to cartilage thickening in dimensions as well as softening of extracellular matrix [2]. At this stage, a lower shear modulus was observed in the cartilage from an OA model when compared to normal articular cartilage [49,50]. In a mouse model, a reduction in tensile stiffness in articular cartilage is also accompanied by the tentative cartilage thickening [51]. These biomechanical changes expose chondrocytes to an environment more susceptible to greater strains, as compared to physiological levels, thus altering their cellular functions. As the disease progresses, however, the tissue gradually loses aggrecan content, which has provided compliance of local mechanical environments due to its ability to interact with water molecules. In addition to aggrecan loss, it has been recently shown that collagen fibril stiffens in osteoarthritic cartilage [52]. Furthermore, another possible mechanism through which the mechanical microenvironment changes is the accumulation of advanced glycation end products (AGEs) which can crosslink to the collagen network [53]. In vitro, the increased AGE crosslinking to the collagen network was shown to increase the stiffness of human adult articular cartilage [53]. The combination of aggrecan loss and collagen network stiffening results in increased overall stiffness of the tissue. Consequently, as OA advances, the cartilage layer becomes thinner and stiffer transmitting greater load to the underlying subchondral bones. The change in mechanical conditions induces the advancement of subchondral bones towards the articular surface leading to the development of bone marrow edema/subchondral bone cysts and the propagation of periarticular osteophytes [2,54,55]. Recent studies suggest that these changes in subchondral bone structure may precede the articular cartilage thinning [56].
Nevertheless, due to changes in the mechanical properties of the cartilage via altered homeostasis of ECM, its residing cells, chondrocytes, are exposed to vastly different biomechanical microenvironments that further intensify the progression of OA by altering cellular behaviors. Ultimately, this leads to the formation of fibrocartilaginous tissues that exhibit more bone-like properties replacing the completely degenerated cartilage in addition to osteophyte formation at the periphery of the articular surface [2,54].
The Effects of Inflammation on Cartilage Extracellular Matrix Homeostasis by Articular Chondrocytes
Global inflammation in synovium during OA affects chondrocytes that are responsible for ECM turnover and thus cartilage homeostasis [57]. Inflammation which is persistent in OA has shown to directly induce the catabolic activities of chondrocytes. IL-1 , a highly upregulated cytokine during OA, has shown to induce upregulation of matrix degrading enzymes such as MMP-1, 3, and 13 in chondrocytes [58]. Dozin et al. also showed that when exposed to inflammatory cytokines, chondrocytes, regardless of patient age or OA status of human donors, enhance their production of proinflammatory cytokines such as IL-6 and IL-8 [59]. TNF-, another critical cytokine that is highly upregulated in OA, has been shown to induce MMP-13 expression, mediated by ERK, p38, JNK MAP kinases, and AP-1 and NF-B transcriptions factors [24,60,61]. At the same time, the presence of inflammatory cytokine IL-1 has been shown to play a role in suppressed ECM synthesis through downregulation of SOX9 [62]. This, in turn, decreases the expression of collagen type II and aggrecan in articular chondrocytes. The activation/suppression of such signaling cascades autoregulates chondrocytes to further upregulate the synthesis of matrix degrading enzymes and downregulate the production of chondrocytic ECMs [63]. Nitric oxide (NO) and cyclooxygenase-2 (COX-2), two components which have active roles in perpetuating inflammation, were also endogenously expressed at high levels in chondrocytes from OA tissues even when cultured in vitro in the absence of inflammatory cytokines [64,65]. These changes in metabolism may demonstrate a possible permanent phenotypical change in the OA chondrocytes.
In this regard, one notable alteration of chondrocytes in arthritic joints is their production of nonchondrocytic ECM. In addition to the increase in the production of collagen type I replacing type II as previously described, chondrocytes isolated from OA diseased tissues have shown to produce collagen type X, a marker for hypertrophic chondrocytes, as compared to undetectable expression of the protein in healthy cartilage [66]. Collagen type X is typically synthesized by hypertrophic chondrocytes that also produce collagen type I. The emergence of these nonarticular chondrocytic proteins may indicate the change of phenotype in chondrocytes as the disease progresses. The morphological change of chondrocytes with abnormal nonround morphology in arthritic cartilages could be related to a phenotypical change such as an increase in IL-1 production and a decrease in pericellular collagen type VI synthesis [67]. When the cells from arthritic knees are subject to a chondrogenic in vitro culture condition, they are not able to fully recover normal tissue phenotype as evident by low cellularity and decreased chondrocytic ECM production as compared to chondrocytes from healthy joints [66,67]. This demonstrates that damages in OA cartilage may not be able to be fully recovered by autologous chondrocytes.
One possible cause of the phenotype change of OA chondrocytes is inflammation as inflammatory synovial fluid has shown to activate chondrocytes and dramatically affect the normal processes of the cells. When healthy chondrocytes are subjected to inflammation, simulated by inflammatory cytokines such as IL-1 , TNF-, CXCL1, or 8, all of which are upregulated during OA, the cells exhibit hypertrophic differentiation [68]. This differentiation is shown to be mediated by RAGE signaling through the p38 MAPK pathway [69]. Interestingly, the activation of the p38 MAPK signaling pathway has also shown to promote the synthesis of MMP-13 possibly linking the change in phenotype to the facilitated rate of matrix turnover [32]. In addition to the synthesis of nonchondrocytic ECM and enhancement in matrix degradation, chronic inflammation also induces cell death. When healthy chondrocytes were subject to synovial fluids from osteoarthritic patients, the cells not only upregulated the expression of cytokines, such as IL-6, IL-8, monocyte chemotactic protein-1 (MCP-1), and vascular endothelial growth factor (VEGF), but also underwent apoptosis [16].
The Effects of Changes in Extracellular Matrix on Articular Chondrocytes
The altered microenvironments by ECM changes, in the presence of inflammation, further drive catabolic/nonreparative activities of chondrocytes, ultimately leading to cartilage destruction/achondrocytic ECM formation. As previously described, the mechanical properties of cartilage are dynamically altered during the progression of OA due to imbalanced matrix turnover (greater matrix degradation versus synthesis) and noncartilaginous ECM formation. The increase in local matrix stiffness due to changes in ECM appears to suppress chondrocytic activities of the cells. Recent studies show that chondrocytes sense the stiffness of the matrix and differentially respond to it by altering their phenotype, resulting in production of different types of ECM (i.e., ratio of collagen type II to type I) [70][71][72]. An optimal stiffness has been shown to promote greater SOX9, COL2A1, and aggrecan gene expression in chondrocytes and either above or below this stiffness induced dedifferentiation of the cells towards fibrochondrocytic phenotype [70]. This effect of matrix stiffness on modulating chondrogenic phenotype has been shown to occur through the regulation of the TGFsignaling pathway [70]. In addition, the mechanosensitive behavior of chondrocytes may explain the fact that typical in vitro 2D culture of chondrocytes on stiff tissue culture plastics results in the dedifferentiation of the cells [73][74][75].
The changes in matrix composition during OA not only affect the mechanical environments of chondrocytes but also alter interactions of matrix proteins with the cells. Matrilin-3 (MATN3) is a matrix protein that is highly upregulated during OA [76,77]. Although the protein is a part of healthy cartilage matrix, the soluble form of MATN3 is upregulated and released to synovial fluid in OA [78]. When human chondrocytes were cultured in the presence of soluble MATN3, there was a decrease in ECM anabolism and increased catabolism only at concentrations higher than those found in OA patients. On the other hand, when soluble MATN3 was immobilized, ECM synthesis and accumulation was enhanced [78]. These results show how MATN3, which is found in synovial fluid of OA patients, can change the behavior of chondrocytes, demonstrating the direct involvement of ECM in the progression of OA by interacting with the cells as well as indirectly by changing the cells' mechanical environments.
The presence of calcium crystals in cartilage has been shown to increase with severity of OA, and these changes have a strong correlation with hypertrophic chondrocyte differentiation [79]. Interestingly, bovine articular chondrocytes within cartilage explants, when exposed to basic calcium phosphate crystals, had significant increases in intracellular calcium content, which is correlated with cartilage matrix degradation [80]. Another ECM component that affects chondrocyte metabolism is fibronectin, which showed a significant positive correlation between chondrocyte apoptosis and fibronectin content [81]. Overall, these multifaceted effects by changes in ECM, including dysregulation of matrix synthesis (reduction in collagen type II and aggrecan, increase in collagen type I and X), upregulation of matrix degradation, and induction of cell apoptosis, promote the progression of OA by altering the cellular behaviors of chondrocytes.
The Effects of Inflammation on Chondrogenic Differentiation of Mesenchymal Stem Cells during Osteoarthritis
The mechanisms involving the initiation of OA are still elusive as some argue it is mechanical damage-induced and others inflammation-induced. Nevertheless, once the disease is initiated, the degeneration of cartilage matrix progresses due to the combination of chronic inflammation and altered mechanical loading as discussed earlier. A part of the progressive degenerative processes is due to the limited regenerative capability of chondrocytes. These cells are typically quiescent in healthy cartilage [2]. When they are exposed to proliferating conditions to repair the cartilage damage, they often dedifferentiate to a phenotype that produces nonchondrocytic ECM [2]. This atypical ECM synthesis further drives chondrocyte dedifferentiation and nonhomeostatic ECM synthesis by altered mechanical environments. In addition to chondrocytes, the repair of the damaged tissue is attempted by another cell type, mesenchymal stem cell (MSC), that can differentiate to all mesenchymal lineage cells including chondrocyte, osteoblast, and adipocyte [82]. MSCs often participate in the repair of bone damage since they constitute bone marrow. Due to its close proximity to the cartilage layer in the subchondral marrow and their ability to differentiate into chondrocytes, MSCs have been considered as a possible cell source involved in cartilage repair. For this reason, microfracture (or microperforation) surgery is often used to treat a localized cartilage lesion. Small fractures are created in the subchondral bone, and this causes new cartilage formation mainly due to the regenerative activities of MSCs from the bone marrow [83]. Although this technique has shown some benefits repairing damaged cartilage, the neotissue contains fibrocartilage that exhibits different mechanical properties, leading to question its longterm stability [84,85]. These studies may provide clues for why endogenous MSCs cannot fully rescue damaged cartilage during the progression of osteoarthritis, unlike the positive healing response after bone fractures. Typically, subchondral bone advances towards the cartilage surface as the articular surface degrades [86]. In this condition, MSCs are subjected to a milieu of inflammation, altered ECM composition, and vastly different mechanical loading profiles in the injured cartilage, all of which affect the differentiation of MSCs to chondrocytes.
As described earlier, the native cartilage is exposed to chronic inflammation conditions by increased levels of inflammatory mediators including IL-1 , TNF-, and prostaglandin E 2 (PGE 2 ) [87,88]. These inflammatory cytokines not only affect the homeostatic functions of residential chondrocytes but also impact the chondrogenic differentiation of MSCs [87,[89][90][91]. Treatment of IL-1 during chondrogenic differentiation of bone marrow-derived MSCs suppresses Sox9 expression, a critical transcription factor that controls chondrogenesis [90]. The suppression of Sox9 subsequently leads to a decrease in collagen type II and aggrecan expression. In addition, TNF-, in combination with IL-1 , has been shown to transform embryonic chondroprogenitor cells into fibroblast-like cells, further suggesting the inhibitory effects of inflammatory cytokines on chondrogenesis [87]. Similarly, when human MSCs are exposed to conditioned medium derived from osteoarthritic synovium, chondrogenesis is inhibited [92]. These antichondrogenic effects of inflammatory cytokines were shown to be caused by the activation of the NF-B signaling pathway [93]. Overall, inflammatory conditions present in OA cartilage prevent chondrocytic differentiation of MSCs, thus inhibiting regeneration of damaged cartilage with appropriate chondrocytic ECMs.
The Effects of Changes in Extracellular Matrix on Chondrocytic Differentiation of Mesenchymal Stem Cell
The changes in the composition of ECM also affect chondrogenic differentiation of MSCs. In a study by Bosnakovski et al., MSCs cultured in collagen type II hydrogels exhibited greater gene expression levels of chondrocytic markers as compared to those cultured on typical tissue culture plates [94]. As OA progresses, residential chondrocytes start to produce collagen type I instead of type II. This change can affect the subsequent chondrogenesis of MSCs as it has been shown that collagen type II favors chondrogenic induction by modulating cell shape, as compared to collagen type I [95]. It was demonstrated that collagen type II promotes a more rounded cell shape, similar to that of the native chondrocyte in healthy cartilage, through the 1 integrin-mediated Rho A/Rock signaling pathway. In addition to the compositional effect, mechanical changes of ECMs (become stiffer due to the loss of hydrating aggrecan in OA) affect chondrogenesis of MSCs by regulating cell morphology [96]. A softer mechanical environment enhances chondrogenesis of MSCs, evident by greater gene and protein expression of chondrogenic markers including SOX9, collagen type II, and aggrecan by inhibiting stress fiber formation, as compared to the stiffer environment. Similarly, using polyacrylamide hydrogels with varying stiffnesses, Xue et al. showed that human mesenchymal stem cells are differentiated towards a chondrocytic phenotype on softer gels, regardless of initial cell seeding density [97]. The study highlights the importance of cell-matrix interactions during chondrogenic differentiation of MSCs.
Along with the direct influence of local stiffness change on MSC differentiation, the altered mechanical profiles under loading also affect the differentiation process. Bone marrow derived MSCs seeded onto fibrin hydrogels developed a spread out morphology and differentiated towards a myogenic lineage [98]. In the presence of long-term, dynamic compression, myogenic differentiation was inhibited, while markers for chondrogenic phenotype were upregulated. However, the magnitude of loading is an important factor determining chondrocytic differentiation of MSCs and thus synthesis of proper ECMs. Under the same loading regimen, a stiffer ECM induces less strain on the cells. In this regard, Michalopoulos et al. have recently shown that physiological compressive loading (15% strain) on MSC-laden scaffolds induces greater chondrogenesis as compared to a smaller strain of 10% that led to greater osteogenesis [99]. Similarly, stiffer agarose gels inhibited cartilage matrix production and gene expression of MSCs under hydrostatic pressure as compared to those in softer microenvironments [100]. These studies demonstrate that changes in the mechanical properties of cartilage during OA may favor the differentiation of MSCs towards nonchondrocytic lineages further intensifying the degeneration of cartilage. Overall, altered environments in ECM composition and mechanical properties during the progression of OA significantly limit the chondrogenesis of MSCs inhibiting the regeneration process of cartilage damage.
Summary
Both inflammatory factors and compositional/structural changes of ECM drive the progression of OA by affecting residential articular chondrocytes as well as MSCs that migrate from bone marrow in the underlying subchondral bone to repair the cartilage defect ( Figure 2). Due to chronic inflammation and altered microenvironments, chondrocytes change their phenotype towards more hypertrophic cells resulting in achondrocytic ECM synthesis. These changes in ECM, in combination with cartilage matrix degradation under inflammation, further fuel the degeneration process resulting in the alteration of biomechanical conditions, which disturb the surrounding tissues in the joint. The ECM changes in the presence of inflammation also negatively affect chondrogenic differentiation of MSCs, limiting self-regeneration of cartilage. Overall, the interplay between changes in ECM and changes in cellular function under inflammation forms a positive feedback loop that drives the pathology of OA. | 6,196.4 | 2013-08-28T00:00:00.000 | [
"Biology",
"Medicine"
] |
In Situ Synthesis of AZO-Np in Guar Gum/PVOH Composite Fiber Mats for Potential Bactericidal Release
Since the number of antibiotic-resistant bacterial infections is growing and cases are getting worse every year, the search for new alternative bactericidal wound dressing treatments is becoming crucial. Within this context, the use of polysaccharides from plants and seeds in innovative biopolymer technologies is of key importance. In this work, bio-nano-composite guar gum/polyvinyl alcohol (PVOH) membranes loaded with aluminum-doped zinc oxide nanoparticles were produced via electrospinning. Citric acid was added to the mixture to increase spinnability. However, depending on the pH, zinc oxide nanoparticles are partially dissociated, decreasing their bactericidal efficiency. Thus, a second successful alkaline thermo-chemical regrowth step was added to the process to treat the obtained fibers. This alkaline thermo-chemical treatment reconstituted both the nanoparticles and their bactericidal properties. The Staphylococcus aureus antibacterial assay results show that the membranes obtained after the alkaline thermo-chemical treatment presented a 57% increase in growth inhibition.
Introduction
Recent studies show that the number of bacterial infections increases every year, and these numbers also reflect the increase in multi-drug-resistant bacterial infection deaths [1]. One of the reasons for such rising numbers is the inappropriate and persistent use of antibiotics, which leads to antibiotic resistance. It is known that bacteria can rapidly develop resistance mechanisms, such as antibiotic target site alteration, antibiotic inactivation, and metabolic changes to minimize drug entry. Therefore, the surviving bacteria adapt to thrive into the novel environment. For this reason, the search for novel antibiotics and new antibacterial materials is crucial.
Within this scenario, there is a growing interest in developing novel nano-biocomposite materials with bactericidal properties. These systems can be exploited for wound dressing and tissue engineering, [2,3] unlike the standard antibiotics used to avoid bacterial formation. Furthermore, the use of low-cost biodegradable patches with a fast bacterial product release would allow the attenuation of incipient bacterial colony formation. Moreover, it is worth considering that their action would be superficial and would naturally dissolve on the skin with time, which provides the additional advantage of not damaging deeper skin layers. Among these novel materials, composite membranes of polysaccharides blended with synthetic polymers, such as polyvinyl alcohol (PVOH), and loaded with bactericidal nanoparticles are adequate candidates to suit this emerging area. Such materials are versatile due to their low cost, biocompatibility, and biodegradability [4][5][6]. Furthermore, their since it hinders the necessary peroxide formation mechanism. One way to overcome this problem is to use a low-temperature wet chemical path to reconstitute AZO nanoparticles by oxidizing zinc cations in situ with a hot alkaline aqueous solution [10,24].
In the present investigation, we aimed to successfully produce bactericidal PVOH/GG/ AZO electrospun composite membranes for fast bactericidal release patches using citric acid as a plasticizer. To overcome the partial dissolution of AZO into hydroxylated Zn cations, we performed a post-chemo-thermal treatment to restore ZnO nanoparticles in the membrane surface layers. In addition to its low cost, the advantage of this procedure is that the use of zinc oxide precursors is not necessary, which would need high temperature processing to obtain the AZO nanoparticles and therefore would degrade the PVOH/GG membranes [42,43]. We also performed bactericidal assays to check the membranes' efficiency.
Materials
Commercial (GG) was purchased from Biotec (batch No 21038, Rio de Janeiro, Brazil) with average molar mass (Mw: 519,000 g/mol). It was purified and characterized as in Lubambo et al. [44].
Commercial aluminum-doped zinc oxide nanoparticle was purchased from Sigma Aldrich with 6% Al as dopant; nanoparticles were smaller than 50 nm and had surface area greater than 10.8 m 2 /g according to the technical note from the manufacturer.
All other chemicals were P.A. grade and were used as purchased. NaOH and citric acid was from Vetec. Ethanol was from Biotec.
Solution Preparation Procedures
-Solubilization of PVOH and GG at pH 7: A total of 1.05 g of PVOH was stirred with distilled water at 60 • C for 30 min in a 10 mL beaker. Afterwards, the hot plate was turned off and the solution continued to be stirred for 24 h.
In another 10 mL beaker, 0.42 g of GG was stirred in distilled water for 24 h at room temperature. After 24 h of GG stirring, aluminum-doped zinc oxide nanoparticles were added to the solution according to Table S1 (Supplementary Materials) and stirred for another 24 h.
After these solubilization procedures, both beaker contents were mixed in a 1:1 ratio and stirred for 30 min. The final mixture had a pH 7 determined by using a pH indicator paper from Merk.
-Solubilization of PVOH and GG at pH 7: The solubilization of PVOH and GG followed the same protocol as for pH 7 except that 0.5 mg of sodium borohydride was added previously to the GG/AZO-Np mixture before the addition of PVOH solution to protect the glycosidic terminals against alkaline degradation (β-elimination).
When GG/AZO-Np and the borohydride were solubilized, 1 M NaOH drops (2.5 µL) were added while stirring until the mixture reached pH 10. Finally, both GG/AZO-Np/borohydride and PVOH solutions were mixed in a 1:1 ratio and stirred for 30 min, the pH was corrected until it reached 8 with drops of 0.5 M NaOH, and the whole solution was sonicated using a Branson ultrasonics™ sonifier™ SFX250 with a 1/2" diameter tapped bio horn and 1/2" extension. It was sonicated with amplitude of 30%, (10 s on/10 s off) cycles for 30 min, before electrospinning. Solubilization of PVOH and GG at acidic pH: The solubilization of PVOH and GG followed the same protocol as for pH 7 except that citric acid was added to the PVOH beaker while stirring before mixing with GG solution according to Table S1 (Supplementary Materials). Finally, both beakers were mixed in a 1:1 ratio and stirred for 30 min before use.
Electrospinning
The positive terminal of a high-voltage power supply (0-40 kV) was attached to a rectified (blunt tip) 22-gauge standard syringe needle, which worked as a metallic capillary. The mixture solution was loaded into a 2.5 mL gastight Hamilton syringe and pumped with a homemade syringe pump at (31 ± 6) µL/min flow rate. The applied voltage was (16.0 ± 0.5) kV. The tip-to-collector distance was (20.0 ± 0.3) cm. These parameters (voltage, flow rate, and tip-collector distance) were adjusted to obtain a stable Taylor cone during the electrospinning process. We used a grounded polished aluminum collector plate. The electrospinning deposition around 45% room relative humidity was finished when the substrate collector was covered with a self-sustained membrane. The obtained membranes were manually peeled from the metallic collector.
Chemical and Thermal Treatments of Membranes
After deposition, all membranes were stabilized by thermal treatment [46]. They were all heated at 150 • C in a vacuum chamber for 1 h at 0.05 MPa.
The membranes originated from the acidic mixture and thermally treated were also hydrothermally cross-linked in alkaline medium in a 0.1 M NaOH/ethanol solution for 30 min at 70 • C to regrowth ZnO-type nanocrystals [47]. The temperature varied by ± 1 • C during treatment. Then, they were dried at room temperature for 24 h over a Teflon substrate.
Bactericidal Assay Membranes Protocol
Two acidic stock solution obtained membranes were entirely cut into 6 mm diameter disks forming two groups. Each group was equally divided and stacked up into four samples, which were then pelletized with a pressure of 8 tons for 3 min. All four samples on average weighed 0.07 g. The first group of four samples was used as obtained. The disks obtained in the second group were thermo-chemically treated to regrowth AZO nanocrystals before they were stacked up to a weight of 0.07 g and pelletized.
Scanning Electron Microscopy (SEM) and Chemical Analysis
Images were obtained in a JEOL 6360-LV and a Tescan VEGA3 LMU, both using 15 kV as working tension. Before the analyses in SEM the samples were covered with a thin layer of carbon to enhance the images. Cathodoluminescence (CL) images and emission spectra and EDS spectra were acquired with 15 kV working tension.
Transmission Electron Microscopy (TEM)
TEM images were obtained using a JEOL JEM 1200 EX-II transmission electron microscope operating at 100 kV. The images were recorded using an Orius SC1000 B Gatan CCD camera. The samples were deposited directly on 200 mesh copper grids to observe in TEM.
Fourier Transform Infrared (FTIR-ATR)
Spectra were obtained using an ALPHA-P Bruker spectrometer with an ATR (attenuated total reflection) platinum-diamond crystal analyzer with a resolution of 4 cm −1 . The signal was obtained from an average of three series of 24 scans from 400 to 4000 cm −1 and subtracted from the background. The background was obtained from one series of 24 scans. Apodization was performed using the 3rd term of a Black-Harris function.
Rheology
Experiments were performed using a Haake RS 1 rheometer with a cone-and-plate geometry (60 mm diameter, 2 º cones). Temperature was controlled (25 • C) by a circulating water bath Haake (DC30). Viscosity and viscoelastic property parameters were analyzed with Haake rheometer software.
Antibacterial Assay
The inhibition of Staphylococcus aureus growth was evaluated by determining the bacterial colony-forming unit (CFU) number of a suspension put in contact with the test films in contrast to a control suspension without contact with films. The bacterial suspension was prepared according to the 0.5 McFarland scale and diluted to present a final concentration of 6 × 10 4 UFC/mL in Mueller-Hinton Broth (MHB). Commercially available bacterial cellulose membrane (BCM), (PVOH/GG/CA) (5/2/0.7) (w/w) %, 0% AZO-Np membrane with citric acid, (PVOH/GG/CA) (5/2/0.7) (w/w) % without AZO-Np membrane with alkaline treatment, or bacterial suspension were only used as a control of growth. Then, the test films were incubated with (PVOH/GG/CA/AZO-Np) (5/2/0.7/2) (w/w) %, 2% AZO-Np membrane with citric acid, (PVOH/GG/CA/AZO-Np) (5/2/0.7/2) (w/w) %, 2% AZO-Np membrane with alkaline treatment, and the controls with the bacterial suspension at 37 • C for 2 h. Bacterial growth was quantified by plating different suspension dilutions before and after 2 h of contact with films in plate count agar (PCA). The CFU number was determined after incubation at 37 • C for 24 h [48]. This experiment was run in quadruplicate and the percentage of inhibition of the test films was defined in relation to the control without film. The results were analyzed by Student's t-test with p < 0.01. Variables exceeding the upper quantification limit were considered statistically significant.
SEM and TEM Analysis
PVOH/GG/AZO-Np membranes were produced using three different pHs, as can be seen in Figure 1. When the solution was neutral (pH 7), as shown in Figure 1A,B, it produced bead aggregates, with some of them partially filled with AZO-Np, as seen in Figure 1B. It is known that ZnO is poorly soluble in water at room temperature; it becomes more soluble if the pH is changed [39]. When the pH increased, it improved fiber homogeneity compared to the neutral condition, as seen in Figure 1C.
However, aggregates were still present over the fibers. When citric acid was added to the mixture, reducing the pH as shown in Figure 1D, a great improvement in fiber homogeneity as well as in its production rate was observed. Moreover, this result shows that citric acid helped to improve fiber homogeneity when used together with guar gum [49].
When AZO-Np weight concentration increases from 0.25 to 0.5 (w/w) % at constant citric acid concentration (acidic pH) and after the sonication protocol, the fiber morphology presents smoothness and homogeneity, as can be observed in Figure 2A. When the AZO-Np concentration was increased to AZO-Np 1 (w/w) %, we noticed the appearance of aggregates, as shown in Figure 2B. A further increase in citric acid concentration would restore fiber homogeneity. However, since citric acid also has bactericidal properties, [50] we decided to keep its concentration under the bactericidal threshold level so as to only evaluate the effect of AZO-NPs. It is worth pointing out that when the mixture with AZO-Np 1 (w/w) % was not sonicated, dark ellipsoidal areas on the surface of the fiber could be observed, as shown in Figure 2C. The diffraction image of the spots shown in Figure 2D shows a large halo which corresponds to semi-crystalline materials, such as PVOH and GG. Figure 2F shows a diffraction pattern that corresponds to the AZO-Np cluster, Table S2, in the center of Figure 2E. This result leads to the conclusion that the darks areas cannot be associated with the AZO-Nps, since the Nps have quite a different diffraction pattern. The dark spots are probably associated with the product of semicrystalline ZnO lixiviation by citric acid. The darkening of the spot is related to atomic However, aggregates were still present over the fibers. When citric acid was added to the mixture, reducing the pH as shown in Figure 1D, a great improvement in fiber homogeneity as well as in its production rate was observed. Moreover, this result shows that citric acid helped to improve fiber homogeneity when used together with guar gum [49].
When AZO-Np weight concentration increases from 0.25 to 0.5 (w/w) % at constant citric acid concentration (acidic pH) and after the sonication protocol, the fiber morphology presents smoothness and homogeneity, as can be observed in Figure 2A. When the AZO-Np concentration was increased to AZO-Np 1 (w/w) %, we noticed the appearance of aggregates, as shown in Figure 2B. A further increase in citric acid concentration would restore fiber homogeneity. However, since citric acid also has bactericidal properties, [50] we decided to keep its concentration under the bactericidal threshold level so as to only evaluate the effect of AZO-NPs. It is worth pointing out that when the mixture with AZO-Np 1 (w/w) % was not sonicated, dark ellipsoidal areas on the surface of the fiber could be observed, as shown in Figure 2C. The diffraction image of the spots shown in Figure 2D shows a large halo which corresponds to semi-crystalline materials, such as PVOH and GG. Figure 2F shows a diffraction pattern that corresponds to the AZO-Np cluster, Table S2, in the center of Figure 2E. This result leads to the conclusion that the darks areas cannot be associated with the AZO-Nps, since the Nps have quite a different diffraction pattern. The dark spots are probably associated with the product of semi-crystalline ZnO lixiviation by citric acid. The darkening of the spot is related to atomic numbers greater than PVOH/GG that constitute the fiber. This leads to higher electron absorption and a consequent darkening of the image. Figure 2E shows the changes to dark spot geometry after sonication; according to the previous described protocol, they become thinner, straight, and elongated, meaning that sonication leads to a better fiber homogeneity. These features also indicate that the AZO-Np powder is crystalline. Figure 2E shows the changes to dark spot geometry after sonication; according to the previous described protocol, they become thinner, straight, and elongated, meaning that sonication leads to a better fiber homogeneity. These features also indicate that the AZO-Np powder is crystalline. Figure 3 depicts from left to right SEM images, Cathodoluminescence (CL) images, and CL spectra obtained from a selected region of different samples.
Cathodoluminescence
The AZO-Np powder measurement has the SEM image presented in Figure 3(2a). Its corresponding CL image shown in Figure 3(2b) presents an almost homogeneous brightness with high intensity spots distributed all over the image. Its spectrum shown in Figure 3(2c) presents two well-defined peaks, being the most intense at 339.1 nm, which corresponds to 3.66 eV band gap energy. The band gap for pure ZnO is around 3.25 to 3.28 eV. The blue shift of the nanoparticles is a clear signature of Al doping [51]. The second peak centered at 610.9 nm corresponds to defect bands, an excess of oxygen, and OH groups [52].
The control PVOH/GG/CA membrane SEM image is shown in Figure 3(1a). Its CL image is very dim as can be observed in Figure 3(1b), and its CL spectrum, shown in the graph of Figure 3(1c), presents one broad peak, which is believed to be mostly from PVOH luminescence. In fact, PVOH is the major membrane constituent and shows a visible (400-500 nm) emission due to electronic transitions in the -OH groups [53].
The untreated PVOH/GG/CA/AZO-Np membrane SEM image is shown in Figure 3(3a). Its CL image is mostly dark with large scattered bright spots, shown in Figure 3(3b). Its CL spectrum, shown in the graph of Figure 3(3c), is quite similar to the powder spectrum since the membrane has AZO-Np embedded. However, the reduced bright spot is an expected result because the AZO-Np partially dissociates in acidic environments depending on the ionic strength.
The thermo-chemically treated sample SEM image is shown in Figure 3(4a). Its CL image is shown in Figure 3(4b) and similarly to the untreated one is mostly dark with less bright spots and with more intense emission as can be observed in the CL spectrum of Figure 3(4c). Furthermore, its spectrum shows a more pronounced contribution from the defect band, around 600 nm (orange), [52] when compared with the untreated membrane spectrum, as shown in Table S2. This might be evidence that the thermochemical process reconstitutes new AZO-Nps with a higher defect concentration, as will be shown ahead with further evidence. All CL spectra were obtained with the bean focused on a single nanoparticle or with the microscope characteristic spot size. The normalized spectra are shown in Figure S1 (Supplementary Materials).
Increasing the citric acid content from 0.7 (w/w) % to 2 (w/w) % in the mixture, that is, nearly a 3-fold increase, dissolves the AZO-Nps almost completely. The result of this dissolution is shown in Figure 4A,B for the acidic sample membrane (PVOH/GG/CA/AZO-Np) (4.8/1.9/2.0/0.5) (w/w) % where the maximum peak intensity is around 100 cps for a 200× magnification. The same sample after thermochemical treatment in a 0.1 M NaOH ethanolic solution for 1 h at 70 • C presents a completely different spectrum obtained with the same magnification, as shown in Figure 4C
EDS Spectra
The EDS spectra from the AZO-Np powder (data not shown) indicate that the original nanoparticle powder was made of 50.35 at% O, 16.53 at% Na, 2.38 at% Al, 0.15 at% Si, and 30.59 at% Zn. The presence of important concentration of sodium probably is due to chemical processes of synthesis which are not mentioned by the manufacturer. The presence of silicon at this concentration can be considered as an impurity. Figure 5 presents the SEM image of a selected region Figure 5(1a), its corresponding CL image Figure 5(2a), and the superimposition of both Figure 5(3a), as well as the CL spectrum of one of the bright spots Figure 5(1b) and the EDS profile of two selected points in samples A Figure 5(2b) and B Figure 5(3b). The sample with a three-fold increase in the citric acid content sample (PVOH/GG/CA/AZO-Np) (4.8/1.9/2.0/0.5) (w/w) % after thermo-chemical treatment was analyzed by EDS at the two selected points A and B. The EDS line profile analysis results corresponding to nanoparticles located at points A and B show that the nanoparticles had different elemental content from the original AZO-Np. The EDS content result for the nanoparticle at point A, Figure 5(2b), was 70 at% C, 19 at% O, 4 at% Na, 4
EDS Spectra
The EDS spectra from the AZO-Np powder (data not shown) indicate that the original nanoparticle powder was made of 50.35 at% O, 16.53 at% Na, 2.38 at% Al, 0.15 at% Si, and 30.59 at% Zn. The presence of important concentration of sodium probably is due to chemical processes of synthesis which are not mentioned by the manufacturer. The presence of silicon at this concentration can be considered as an impurity. Figure 5 presents the SEM image of a selected region Figure 5(1a), its corresponding CL image Figure 5(2a), and the superimposition of both Figure 5(3a), as well as the CL spectrum of one of the bright spots Figure 5(1b) and the EDS profile of two selected points in samples A Figure 5(2b) and B Figure 5(3b). The sample with a three-fold increase in the citric acid content sample (PVOH/GG/CA/AZO-Np) (4.8/1.9/2.0/0.5) (w/w) % after thermo-chemical treatment was analyzed by EDS at the two selected points A and B. The EDS line profile analysis results corresponding to nanoparticles located at points A and B show that the nanoparticles had different elemental content from the original AZO-Np. The EDS content result for the nanoparticle at point A, Figure 5(2b), was 70 at% C, 19 at% O, 4 at% Na, 4 at% Al, and 4 at% Zn. The nanoparticle at point B, Figure 5(3b), had similar content when the EDS line profile was observed: 72 at% C, 18 at% O, 2.88 at% Na, 3.3 at% Al, 2.84 at% Zn, and 0.2 at% Si. The presence of silicon was probably due to impurities resulting from the thermo-chemical process. The fluorescent emission peaks also had an energy shift at point A, Figure 5(1b), thus indicating that the thermo-chemical treatment generated new AZO-type nanoparticles.
Zn, and 0.2 at% Si. The presence of silicon was probably due to impurities resulting from the thermo-chemical process. The fluorescent emission peaks also had an energy shift at point A, Figure 5(1b), thus indicating that the thermo-chemical treatment generated new AZO-type nanoparticles.
The corresponding EDS elemental map area with the two analyzed points, Figure S2 (Supplementary Materials) shows that zinc, oxygen, sodium hydroxide, and carbon are found spread in the analysis area, whereas aluminum atoms are specifically located in the nanoparticles. At first glance, zinc and oxygen are not homogenously spread on the analyzed area, and we see oxygen islands surrounded by zinc deposits. However, the detailed EDS analysis at Figure 5(2b,3b) shows the increase in oxygen where aluminum and zinc are present in the nanoparticle, indicating that is a kind of AZO-type nanoparticle. The corresponding EDS elemental map area with the two analyzed points, Figure S2 (Supplementary Materials) shows that zinc, oxygen, sodium hydroxide, and carbon are found spread in the analysis area, whereas aluminum atoms are specifically located in the nanoparticles. At first glance, zinc and oxygen are not homogenously spread on the analyzed area, and we see oxygen islands surrounded by zinc deposits. However, the detailed EDS analysis at Figure 5(2b,3b) shows the increase in oxygen where aluminum and zinc are present in the nanoparticle, indicating that is a kind of AZO-type nanoparticle.
Moreover, depending on the parameter process (time, temperature, sodium hydroxide concentration), nanoparticles with different constitutions are produced in the same process. Among them, it was possible to observe nanoparticles created without aluminum and nanoparticles without zinc. It is worth noting that the reconstituted nanoparticles are sphere-like, whereas the original zinc oxide nanoparticle was not (data not shown).
The EDS elemental map from the same sample before the thermo-chemical treatment is shown in Figure S3 (Supplementary Materials). It is possible to observe that the elemental map display is different to the one after the treatment, Figure S2 (Supplementary Materials). Elemental islands of oxygen or zinc presented in the sample after treatment are not observed. The major elements are homogeneously spread over the analysis area. Figure 6A shows the spectrum of AZO-Np nanoparticles used in these experiments. Moreover, depending on the parameter process (time, temperature, sodium hydroxide concentration), nanoparticles with different constitutions are produced in the same process. Among them, it was possible to observe nanoparticles created without aluminum and nanoparticles without zinc. It is worth noting that the reconstituted nanoparticles are sphere-like, whereas the original zinc oxide nanoparticle was not (data not shown).
FTIR Spectra
The EDS elemental map from the same sample before the thermo-chemical treatment is shown in Figure S3 (Supplementary Materials). It is possible to observe that the elemental map display is different to the one after the treatment, Figure S2 (Supplementary Materials). Elemental islands of oxygen or zinc presented in the sample after treatment are not observed. The major elements are homogeneously spread over the analysis area. Figure 6A shows the spectrum of AZO-Np nanoparticles used in these experiments. As can be observed, these nanoparticles have absorption bands at 430 cm −1 and 500 cm −1 , large bands between 526-586 cm −1 and between 669.7-829.8 cm −1 , 1373.6 cm −1 , and 1560 cm −1 , and a broad band at 3483 cm −1 . The spinel structure has stretching bands in the 500-900 cm −1 range corresponding to vibrations of metal-oxygen, aluminum-oxygen, and metal-oxygen-aluminum [54]. Peaks in the 1300-1600 cm −1 range are attributed to chemical impurities that come from synthesis [55], and the peaks in the 3400-3700 cm −1 range are attributed to chemically bonded hydroxyl vibration modes [55,56]. Figure 6B also shows the FTIR spectra of sample PVOH/GG/CA/AZO-Np(4.9/1.9/2.0/0.5) (w/w) % corresponding to the thermo-chemically treated and untreated samples and the corresponding alkaline control. The absorption peaks from the matrix PVOH/GG are described in Lubambo et al. [33]. Moreover, it is possible to observe absorption peaks for the composite fibers at 850, 1142, and 1420 cm −1 corresponding to As can be observed, these nanoparticles have absorption bands at 430 cm −1 and 500 cm −1 , large bands between 526-586 cm −1 and between 669.7-829.8 cm −1 , 1373.6 cm −1 , and 1560 cm −1 , and a broad band at 3483 cm −1 . The spinel structure has stretching bands in the 500-900 cm −1 range corresponding to vibrations of metal-oxygen, aluminum-oxygen, and metal-oxygen-aluminum [54]. Peaks in the 1300-1600 cm −1 range are attributed to chemical impurities that come from synthesis [55], and the peaks in the 3400-3700 cm −1 range are attributed to chemically bonded hydroxyl vibration modes [55,56]. Figure 6B also shows the FTIR spectra of sample PVOH/GG/CA/AZO-Np(4.9/1.9/ 2.0/0.5) (w/w) % corresponding to the thermo-chemically treated and untreated samples and the corresponding alkaline control. The absorption peaks from the matrix PVOH/GG are described in Lubambo et al. [33]. Moreover, it is possible to observe absorption peaks for the composite fibers at 850, 1142, and 1420 cm −1 corresponding to functional groups C-O, C-C, and CH2 common to PVOH/GG/CA and 1611/1714 cm −1 C=O common to GG/CA.
FTIR Spectra
As mentioned before, this sample had its citric acid content increased to dissolve the AZO-Np almost completely as shown in Figure 4A. Observing the corresponding sample spectrum, the absorption peak related to Zn-O stretching is not present in this untreated sample.
However, the corresponding FTIR spectra of the treated sample compared to the untreated sample show the appearance of one absorption peak at 540 cm −1 related to the new AZO-Np nanoparticles reconstituted by the thermo-chemical treatment. Figure 7 below shows the flow curve (viscosity (η) x shear rate ( . γ)) for the samples with AZO-Np in different concentrations and the controls PVOH/GG and PVOH/GG/CA. All samples have a pseudo-plastic behavior. It is possible to observe an increase in viscosity when the polyelectrolyte is added to the mixture PVOH/GG with or without CA, except when the AZO-Np concentration in the mixture is 5 (w/w) %. The mixture with 1 and 2 (w/w) % AZO-Np does not produce significant differences. However, increasing AZO-Np concentration to 3 (w/w) % produces a significant rise in viscosity, possibly because in this situation the interaction polymer chain water is preferential compared to polymer-AZO-Np.
Rheology
Polymers 2022, 14, x FOR PEER REVIEW 13 of 17 functional groups C-O, C-C, and CH2 common to PVOH/GG/CA and 1611/1714 cm −1 C=O common to GG/CA. As mentioned before, this sample had its citric acid content increased to dissolve the AZO-Np almost completely as shown in Figure 4A. Observing the corresponding sample spectrum, the absorption peak related to Zn-O stretching is not present in this untreated sample.
However, the corresponding FTIR spectra of the treated sample compared to the untreated sample show the appearance of one absorption peak at 540 cm −1 related to the new AZO-Np nanoparticles reconstituted by the thermo-chemical treatment. Figure 7 below shows the flow curve (viscosity (η) x shear rate ( )) for the samples with AZO-Np in different concentrations and the controls PVOH/GG and PVOH/GG/CA. All samples have a pseudo-plastic behavior. It is possible to observe an increase in viscosity when the polyelectrolyte is added to the mixture PVOH/GG with or without CA, except when the AZO-Np concentration in the mixture is 5 (w/w) %. The mixture with 1 and 2 (w/w) % AZO-Np does not produce significant differences. However, increasing AZO-Np concentration to 3 (w/w) % produces a significant rise in viscosity, possibly because in this situation the interaction polymer chain water is preferential compared to polymer-AZO-Np. The flow curves were modeled according to an Ostwald-de Waele mathematical model, which was applied to non-Newtonian fluids under shear rate. When the exponent n from the model is situated between 0 < n <1 range, they are pseudo-plastic. When n = 1, it is Newtonian [57]. Our results varied from 0.8 to 0.93 according to Table S3 (Supplementary Materials), which shows that the mixtures had a behavior very similar to a Newtonian fluid.
Rheology
Since the K values as shown in Table S3 (Supplementary Materials) are proportional to the viscosity, it becomes clear that the most viscous is the sample with 3 (w/w) % AZO-Np, which is in accordance with the results in Figure 7.
The scan results of the oscillatory mode analysis as presented in Figure S4 (Supplementary Materials) display that for all the frequency rates scanned, the elastic modulus (G') was always smaller than the viscous (G'') one. This result is characteristic of viscoelastic dispersions with liquid behavior and is present in all the samples.
Antibacterial Assay
The proper growth of Staphylococcus aureus occurred in the experiment controls: commercially available bacterial cellulose membrane (BCM); (PVOH/GG/CA) (5/2/0.7) (w/w) %, 0% AZO-Np membrane with citric acid; and (PVOH/GG/CA) (5/2/0.7) (w/w) %, The flow curves were modeled according to an Ostwald-de Waele mathematical model, which was applied to non-Newtonian fluids under shear rate. When the exponent n from the model is situated between 0 < n <1 range, they are pseudo-plastic. When n = 1, it is Newtonian [57]. Our results varied from 0.8 to 0.93 according to Table S3 (Supplementary Materials), which shows that the mixtures had a behavior very similar to a Newtonian fluid.
Since the K values as shown in Table S3 (Supplementary Materials) are proportional to the viscosity, it becomes clear that the most viscous is the sample with 3 (w/w) % AZO-Np, which is in accordance with the results in Figure 7.
The scan results of the oscillatory mode analysis as presented in Figure S4 (Supplementary Materials) display that for all the frequency rates scanned, the elastic modulus (G') was always smaller than the viscous (G") one. This result is characteristic of viscoelastic dispersions with liquid behavior and is present in all the samples.
Antibacterial Assay
The proper growth of Staphylococcus aureus occurred in the experiment controls: commercially available bacterial cellulose membrane (BCM); (PVOH/GG/CA) (5/2/0.7) (w/w) %, 0% AZO-Np membrane with citric acid; and (PVOH/GG/CA) (5/2/0.7) (w/w) %, 0% AZO-Np membrane with a thermo-chemical alkaline treatment. Therefore, no control film inhibited bacterial growth, presenting no statistically significant difference in relation to the control inoculated with bacteria without any membrane as shown in Figure 8A-D. film inhibited bacterial growth, presenting no statistically significant difference in relation to the control inoculated with bacteria without any membrane as shown in Figure 8A-D.
Conclusions
The SEM and TEM images of the membranes with AZO-NP nanoparticles produced in neutral and alkaline pH presented bead aggregates. By adding citric acid to the mixture, the obtained fibers became more homogenous, which was probably due to the partial AZO-Np dissolution confirmed by the Zn-O stretching (540 cm −1 ) absorption band absence in the FTIR-ATR spectrum. A thermo-chemical treatment was used to reconstitute the AZO-Np nanoparticles into the electrospun fibers, and this was confirmed by the Zn-O stretching absorption band reappearance. Regarding their rheological behavior, all the mixtures were pseudo-plastic. In general, their viscosity increased with the polyelectrolyte increase into the mixture except at AZO-Np 5 (w/w) %, which is probably because there is a preferential polymer-polyelectrolyte interaction at this polyelectrolyte concentration.
The Cl spectra of the thermo-chemically treated and untreated membranes showed that they had similar spectra when compared to the AZO-Np powder. However, the treated membrane showed in its spectrum a more pronounced contribution from the defect band of around 600 nm (orange) when compared to the untreated membrane. This is a consequence of the thermochemical process reconstituting new AZO-Np with a higher number of defects. The EDS results from the reconstituted nanoparticles showed that they had different atomic percentages when compared to the original nanoparticles. It is also possible to achieve different nanoparticle content produced with the same process; the content depends on (time, temperature, sodium hydroxide content). Concerning their morphology, the reconstituted nanoparticles were sphere-like, whereas the original ones were not.
Conclusions
The SEM and TEM images of the membranes with AZO-NP nanoparticles produced in neutral and alkaline pH presented bead aggregates. By adding citric acid to the mixture, the obtained fibers became more homogenous, which was probably due to the partial AZO-Np dissolution confirmed by the Zn-O stretching (540 cm −1 ) absorption band absence in the FTIR-ATR spectrum. A thermo-chemical treatment was used to reconstitute the AZO-Np nanoparticles into the electrospun fibers, and this was confirmed by the Zn-O stretching absorption band reappearance. Regarding their rheological behavior, all the mixtures were pseudo-plastic. In general, their viscosity increased with the polyelectrolyte increase into the mixture except at AZO-Np 5 (w/w) %, which is probably because there is a preferential polymer-polyelectrolyte interaction at this polyelectrolyte concentration.
The Cl spectra of the thermo-chemically treated and untreated membranes showed that they had similar spectra when compared to the AZO-Np powder. However, the treated membrane showed in its spectrum a more pronounced contribution from the defect band of around 600 nm (orange) when compared to the untreated membrane. This is a consequence of the thermochemical process reconstituting new AZO-Np with a higher number of defects. The EDS results from the reconstituted nanoparticles showed that they had different atomic percentages when compared to the original nanoparticles. It is also possible to achieve different nanoparticle content produced with the same process; the content depends on (time, temperature, sodium hydroxide content). Concerning their morphology, the reconstituted nanoparticles were sphere-like, whereas the original ones were not.
The antibacterial assays showed that Staphylococcus Aureus growth inhibition after 2 h at 37 • C was 30% on the membrane with citric acid when compared to the control membrane with no statistically different significance in relation to the control. However, growth inhibition increased to 57% for thermo-chemically treated membranes, indicating that the reconstituted nanoparticles increased antibacterial efficiency and therefore confirming that our membranes have a potential application as fast-release antibacterial wound dressing patches. | 8,080.8 | 2022-11-01T00:00:00.000 | [
"Materials Science"
] |
Soft Sensing of Silicon Content via Bagging Local Semi-Supervised Models
The silicon content in industrial blast furnaces is difficult to measure directly online. Traditional soft sensors do not efficiently utilize useful information hidden in process variables. In this work, bagging local semi-supervised models (BLSM) for online silicon content prediction are proposed. They integrate the bagging strategy, the just-in-time-learning manner, and the semi-supervised extreme learning machine into a unified soft sensing framework. With the online semi-supervised learning method, the valuable information hidden in unlabeled data can be explored and absorbed into the prediction model. The application results to an industrial blast furnace show that BLSM has better prediction performance compared with other supervised soft sensors.
In industrial processes, a large amount of sensor variables that are available can be used as input to the soft sensor model. The quality-relevant variable to be predicted using a soft sensor can be regarded as "labeled" data. However, the amount of quality-relevant variable ("labeled" data) is often limited mainly because it is difficult to measure online. Till now, most soft sensors in industrial ironmaking processes act in a supervised manner. That is, for construction of a soft sensor, both of inputs (sensor variables) and outputs (quality-relevant variables) are required for the task of supervised modeling. The labeled dataset contains both input and output data, while the unlabeled one consists of only input data (i.e., large amount of sensor variables). Actually, the labeled data are much fewer Sensors 2019, 19, 3814 2 of 11 than the unlabeled data mainly because the assaying process of silicon contents is infrequent and time-consuming. In contrast, the process input variables are measured frequently. Using a limited set of labeled data, the soft sensors are often inaccurate. To enhance the prediction performance, with large amounts of unlabeled data available, some semi-supervised soft sensors have been applied to chemical processes [33][34][35]. Therefore, the information hidden in unlabeled data is explored to develop a semi-supervised soft sensor for the silicon content prediction.
Most soft sensors have their fixed prediction domains. The predictive accuracy of soft sensors gradually decreases due to changes in the state of chemical plants [36]. Consequently, flexible models with adaptive structure, e.g., just-in-time-learning (JITL) soft sensors [23,24,37] are more attractive than only using a fixed one in practice use. Unfortunately, most conventional JITL-based soft sensors were constructed only with the labeled data. Only the labeled data are considered in the process of selection and modeling of similar samples. Consequently, without integration of useful information in the unlabeled data, the prediction performance of JITL-based models may still not be sufficient for some applications.
In this work, bagging local semi-supervised models (BLSM) for online silicon content prediction are proposed. It integrates the bagging strategy, the JITL modeling manner [37] and semi-supervised extreme learning machine (SELM) [34,38,39] into a unified soft sensing framework. For online prediction of a test sample, the useful information in both of similar labeled and unlabeled samples is taken into its special JITL model. Additionally, a simple bagging strategy is adopted to online construct the model. Compared with conventional JITL models only with the labeled data, the prediction performance of BLSM is improved by utilizing the useful information in unlabeled data.
This work is organized in the following way. The extreme learning machine (ELM) and SELM soft sensors are described in Section 2. Additionally, the BLSM online modeling method and its detailed implementation are proposed in this section. In Section 3, BLSM is applied to online silicon content prediction and compared with other approaches. Finally, a conclusion is given in Section 4.
Soft Sensor Modeling Methods
In this section, three soft sensing methods for the silicon content prediction are presented. First, the ELM-based supervised regression algorithm is briefly described. Second, the SELM-based semi-supervised regression algorithm is presented. Finally, the BLSM online local modeling method is proposed.
Extreme Learning Machine (ELM) Regression Method
The labeled dataset is denoted as are L input and output data, respectively. ELM works for generalized single-hidden layer feedforward networks (SLFNs) [38]. The ELM model has an input layer, a single-hidden layer, and an output layer. With N hidden nodes, ELM approximates the training data, i.e., L i=1 y l i −ŷ l i = 0, where y l i andŷ l i denote the actual output and predicted one, respectively. Compactly, the ELM-based regression formulation [38] is described as: where the output matrix of hidden-layer P = [p 1 , p 2 , . . . , , i = 1, . . . , N; v a i , x l j + b i is the activation function output of the ith hidden node related to the jth input x l j . For the ith hidden node, a i and b i are its input weight and bias, respectively; and a i , x l j is the Different from the gradient-descent based training algorithms (e.g. backpropagation method) for many NNs and the optimization method based for support vector machines, the essence of ELM is that the hidden layer of SLFNs need not be tuned. Without resorting to some complex training algorithms, the weights of the hidden neurons in ELM can be efficiently computed [38]. For many regression cases, the number of hidden nodes is much less than the number of training samples, i.e., N << L. In such a situation, the output weights α [38] are determined as: Using the Moore-Penrose generalized inverse of matrix P to solve α in ELM is feasible, i.e., α = P + Y l [38]. Additionally, to avoid the problem of P T P being noninvertible, a regularized ELM (RELM) model was formulated [34] where γ > 0 is the ridge parameter for the unit matrix I.
Finally, for a test sample x t = [x t1 , x t2 , · · · , x tn ] T ∈ R n , its predictionŷ t is obtained below: where p t is the output vector of the hidden-layer associated with x t .
Semi-supervised Extreme Learning Machine (SELM) Regression Method
For the semi-supervised learning methods, the input and output samples are represented as , respectively. Additionally, the hidden layer output matrix P can be defined as P = [p 1 , p 2 , · · · , p N ] (L+U)×N as aforementioned. The manifold regularization framework is utilized to learn the matrix W of an SELM model [39].
where JPW − Y 2 is the approximation errors of labeled training data (i.e., for the empirical risk) while λ(PW) T LPW is the penalty term utilizing the graph Laplacian L with a parameter λ ≥ 0 (i.e., for the complexity of learnt function). All the unlabeled data are integrated into the matrix P. The graph Laplacian L can be designed using a basic identity in the spectral graph theory [39]. Additionally, for the convenience of calculation, J = I L 0 0 0 (L+U)×(L+U) is defined [39].
By solving Equation (5), the coefficient matrix W [39] is obtained as: Generally, for semi-supervised learning methods, there is an assumption that the input patterns from both labeled and unlabeled data are from the same distribution. In such a situation, the data samples in the local region should have similar labels [33,34,39]. Useful information hidden in the unlabeled data can be explored from the above modeling framework. The graph Laplacian L of SELM contains the information in both of labeled and unlabeled data. Once the unlabeled data are ignored (i.e., λ = 0), W is the same as α in Equation (3). The prediction performance improvement can be obtained by suitably choosing λ as the penalty of model complexity. Finally, for a query sample where p t is the output vector of the hidden-layer associated with x t .
Bagging Local Semi-supervised Models (BLSM) Online Modeling Method
In industrial processes, JITL-based local soft sensors are more flexible than only using a fixed one for the relatively long-term utilization [23,24]. Nevertheless, most conventional JITL approaches only use limited labeled data, regardless of the useful information in lots of unlabeled data samples. As can be expected, using the unlabeled data, the prediction accuracy of JITL models can be improved.
Online inquiry of x t contains three main steps. First, select a similar set {S t } = S l t ∪ S u t , including both of L t labeled data and U t unlabeled data (i.e., S l t = X l t , Y l t and S u t = X u t ), from the historical database {S} via some defined similarity criteria [37]. The common Euclidean distance-based similarity is adopted here. Other similarity criteria available [23,24,37] can also be combined with local SELM models. Second, construct a local SELM model f (x t ) using the selected similar dataset {S t }. Third, online predict and then repeat the same procedure for another query sample.
For a selected {S t }, two parameters, i.e., the number of hidden nodes N and the balance parameter λ ≥ 0, are necessary to train a local SELM model. To avoid the overfitting problem, a simple bagging strategy is adopted to generate multiple local candidate models with diversities and then aggregate them as a new predictor. With the bootstrapping re-sampled strategy, several candidate regression models are ensembling to achieve an improved prediction [40].
For the similar labeled dataset S l t = X l t , Y l t , L t pairs of samples are randomly selected to replace S l t where the probability of each pair being chosen is 1 L t [40]. These L t pairs of data are a re-sampled training set S l t . Sequentially, the procedure is repeated for K times and to obtain K re-sampled datasets, i.e., S l t1 , · · · , S l tK . Similarly, the bagging strategy is applied to the unlabeled dataset S u t = X u t to get K re-sampled datasets S u t1 , . . . , S u tK . For the kth dataset {S tk } = S l tk ∪ S u tk , W k of the kth local SELM model is obtained (similar with Equations (5) and (6)). Consequently, for a test input x t = [x t1 , x t2 , . . . , x tn ] T ∈ R n , the prediction value of the kth local SELM model, i.e.,ŷ k,t , is formulated: where p t is the output matrix of the hidden-layer associated with x t . Finally, using a simple ensemble strategy, K candidate SELM models are equally weighted to generate the final prediction.ŷ The main modeling flowchart of BLSM is given in Figure 1. In summary, BLSM has two main characteristics. First, the useful information hidden in unlabeled data is explored and absorbed. Second, The main modeling flowchart of BLSM is given in Figure 1. In summary, BLSM has two main characteristics. First, the useful information hidden in unlabeled data is explored and absorbed. Second, using the bagging strategy [40], the BLSM model can be aggregated using multiple local candidates with diversities. Figure 1. Bagging local semi-supervised models (BLSM)-based online soft sensing flowchart for the silicon content prediction.
Data Sets and Pretreatment
The BLSM method is applied to the silicon content prediction in an industrial blast furnace in China. For construction of soft sensors, the related input variables include the blast volume, the blast temperature, the top pressure, the gas permeability, the top temperature, the ore/coke ratio, and the pulverized coal injection rate [22][23][24]. After preprocessing the data set with 3-sigma criterion, most of obvious outliers were removed out. A set of about 260 labeled samples was investigated. Half of labeled samples are considered as the historical samples. The remaining part is used for testing the models. Additionally, 500 unlabeled data were obtained as historical samples in the same furnace. The labeled and unlabeled data are from the same industrial blast furnace, indicating that they share with similar characteristics in a production process. Consequently, the semi-supervised learning methods can be applied.
Data Sets and Pretreatment
The BLSM method is applied to the silicon content prediction in an industrial blast furnace in China. For construction of soft sensors, the related input variables include the blast volume, the blast temperature, the top pressure, the gas permeability, the top temperature, the ore/coke ratio, and the pulverized coal injection rate [22][23][24]. After preprocessing the data set with 3-sigma criterion, most of obvious outliers were removed out. A set of about 260 labeled samples was investigated. Half of labeled samples are considered as the historical samples. The remaining part is used for testing the models. Additionally, 500 unlabeled data were obtained as historical samples in the same furnace. The labeled and unlabeled data are from the same industrial blast furnace, indicating that they share with similar characteristics in a production process. Consequently, the semi-supervised learning methods can be applied.
As a recent supervised method with good nonlinear regression performance, the just-in-time least squares SVR (JLSSVR) soft sensor [23] is adopted for comparison. Additionally, as a semi-supervised model, the SELM model [39] is also combined with JITL to construct a local SELM soft sensor here.
Two common performance indices, including the root-mean-square error (RMSE), the relative RMSE (simply denoted as RE), and the hit rate (HR), are adopted and defined, respectively.
where N tst is the number of test samples. H t is defined as:
Results and Discussion
First, with different sizes of unlabeled data, the comparison results of three performance indices between two semi-supervised models, i.e., BLSM and local SELM, are shown in Figures 2-4, respectively. For both BLSM and local SELM models, the prediction performance is enhanced gradually with the increase in the size of the unlabeled data. Due to the ensemble local modeling ability, BLSM exhibits superior prediction performance to a single local SELM one. In this case, the prediction performance is not further enhanced when the number of unlabeled samples is more than about 400. This is mainly because most of useful information in unlabeled dataset is absorbed from the first 400 data.
As a recent supervised method with good nonlinear regression performance, the just-in-time least squares SVR (JLSSVR) soft sensor [23] is adopted for comparison. Additionally, as a semi-supervised model, the SELM model [39] is also combined with JITL to construct a local SELM soft sensor here. Two common performance indices, including the root-mean-square error (RMSE), the relative RMSE (simply denoted as RE), and the hit rate (HR), are adopted and defined, respectively.
where tst N is the number of test samples. t H is defined as:
Results and Discussion
First, with different sizes of unlabeled data, the comparison results of three performance indices between two semi-supervised models, i.e., BLSM and local SELM, are shown in Figures 2-4, respectively. For both BLSM and local SELM models, the prediction performance is enhanced gradually with the increase in the size of the unlabeled data. Due to the ensemble local modeling ability, BLSM exhibits superior prediction performance to a single local SELM one. In this case, the prediction performance is not further enhanced when the number of unlabeled samples is more than about 400. This is mainly because most of useful information in unlabeled dataset is absorbed from the first 400 data. With 400 unlabeled data, taking the HR index as an example, different numbers (i.e., K) of candidate local SELM models for construction of a BLSM one is shown in Figure 5. With the ensemble learning strategy, the efforts on parameter selection of BLSM can be reduced. The HR index indicates that the ensemble learning can enhance the prediction performance to some extent With 400 unlabeled data, taking the HR index as an example, different numbers (i.e., K) of candidate local SELM models for construction of a BLSM one is shown in Figure 5. With the ensemble learning strategy, the efforts on parameter selection of BLSM can be reduced. The HR index indicates that the ensemble learning can enhance the prediction performance to some extent With 400 unlabeled data, taking the HR index as an example, different numbers (i.e., K) of candidate local SELM models for construction of a BLSM one is shown in Figure 5. With the ensemble learning strategy, the efforts on parameter selection of BLSM can be reduced. The HR index indicates that the ensemble learning can enhance the prediction performance to some extent (the HR value increases from 77.2% to 80.3%). And BLSM achieves the best prediction performance when K = 15 for this application. of lab assay, the computational load is accepted. With more historical data (especially unlabeled data), the computational load of online modeling becomes larger. To alleviate this problem, it is suggested that the online and offline models are integrated using the Bayesian analysis [37]. Alternatively, development of the recursive version of BLSM may be a choice. In summary, all the obtained results show that BLSM is a promising prediction method of the silicon content in hot metal produced in blast furnaces. For the three soft sensors (i.e., BLSM, local SELM, and JLSSVR [23]), the silicon content prediction results are shown in Figure 6. This parity plot shows that BLSM is better than local SELM and JLSSVR methods. The prediction performance comparison of three modeling methods is listed in Table 1. Their main characteristics are also described briefly. Generally, BLSM is a local semi-supervised learning model and therefore it can better capture nonlinear characteristics in local regions, especially with the help of unlabeled data. For JLSSVR [23] only with a few labeled data, the prediction domain may be limited. Different from JLSSVR [23], BLSM explores and utilizes the hidden information in lots of unlabeled data to improve the local modeling ability. Moreover, using the simple bagging ensemble strategy, the prediction performance of a semi-supervised local model (e.g., a local SELM) can be enhanced. The computational complexity of BLSM is about K times of a local SELM model. Based on the experiences, K is often much less than 100. The online prediction time of BLSM for a test sample is about 1 s (with CPU main frequency 2.3 GHz and 4 GB memory). Compared with the interval time of lab assay, the computational load is accepted. With more historical data (especially unlabeled data), the computational load of online modeling becomes larger. To alleviate this problem, it is suggested that the online and offline models are integrated using the Bayesian analysis [37]. Alternatively, development of the recursive version of BLSM may be a choice. In summary, all the obtained results show that BLSM is a promising prediction method of the silicon content in hot metal produced in blast furnaces.
Conclusions
This work has presented an online semi-supervised soft sensor model, i.e., BLSM, for blast furnace hot metal silicon content prediction. Two main advantages distinguish BLSM from most current hot metal silicon prediction soft sensors. First, the useful information in unlabeled data is absorbed into the online modeling and prediction framework efficiently. Second, a bagging-based ensemble strategy is integrated into the online semi-supervised model to improve its prediction reliability. The application results show that BLSM has better prediction performance than traditional soft sensors. This is the first application of semi-supervised learning methods to industrial blast furnaces. How to efficiently select the more informative unlabeled data in an error-in-variables environment for construction of a more robust semi-supervised model will be tackled in our future work.
Conflicts of Interest:
The authors declare no conflict of interest.
BLSM
bagging local semi-supervised model ELM extreme learning machine JITL just-in-time-learning JLSSVR just-in-time least squares support vector regression NNs neural networks RE relative root-mean-square error RMSE root-mean-square error RELM regularized extreme learning machine SELM semi-supervised extreme learning machine SLFNs single-hidden layer feedforward networks SVR support vector regression | 4,620 | 2019-09-01T00:00:00.000 | [
"Engineering",
"Materials Science",
"Computer Science"
] |
The origin of the lattice thermal conductivity enhancement at the ferroelectric phase transition in GeTe
The proximity to structural phase transitions in IV-VI thermoelectric materials is one of the main reasons for their large phonon anharmonicity and intrinsically low lattice thermal conductivity $\kappa$. However, the $\kappa$ of GeTe increases at the ferroelectric phase transition near $700$ K. Using first-principles calculations with the temperature dependent effective potential method, we show that this rise in $\kappa$ is the consequence of negative thermal expansion in the rhombohedral phase and increase in the phonon lifetimes in the high-symmetry phase. Negative thermal expansion increases phonon group velocities, which counteracts enhanced anharmonicity of phonon modes and boosts $\kappa$ close to the phase transition in the rhombohedral phase. A drastic decrease in the anharmonic force constants in the cubic phase increases the phonon lifetimes and $\kappa$. Strong anharmonicity near the phase transition induces non-Lorentzian shapes of the phonon power spectra. To account for these effects, we implement a novel method of calculating $\kappa$ based on the Green-Kubo approach and find that the Boltzmann transport equation underestimates $\kappa$ near the phase transition. Our findings elucidate the influence of structural phase transitions on $\kappa$ and provide guidance for design of better thermoelectric materials.
The proximity to structural phase transitions in IV-VI thermoelectric materials is one of the main reasons for their large phonon anharmonicity and intrinsically low lattice thermal conductivity κ. However, the κ of GeTe increases at the ferroelectric phase transition near 700 K. Using firstprinciples calculations with the temperature dependent effective potential method, we show that this rise in κ is the consequence of negative thermal expansion in the rhombohedral phase and increase in the phonon lifetimes in the high-symmetry phase. Negative thermal expansion increases phonon group velocities, which counteracts enhanced anharmonicity of phonon modes and boosts κ close to the phase transition in the rhombohedral phase. A drastic decrease in the anharmonic force constants in the cubic phase increases the phonon lifetimes and κ. Strong anharmonicity near the phase transition induces non-Lorentzian shapes of the phonon power spectra. To account for these effects, we implement a novel method of calculating κ based on the Green-Kubo approach and find that the Boltzmann transport equation underestimates κ near the phase transition. Our findings elucidate the influence of structural phase transitions on κ and provide guidance for design of better thermoelectric materials.
Recent computational work has predicted that driving IV-VI materials closer to the ferroelectric phase transition via strain or alloying can lead to a drastically lower lattice thermal conductivity [22,23]. Under the assumption of the displacive phase transition, it was found that coupling between soft transverse optical (TO) modes and the heat carrying acoustic modes is the main reason for the κ reduction. At the displacive transition, the frequency of the soft TO mode collapses, becoming effectively zero. Since scattering rates are inversely proportional to phonon frequencies, the lifetimes of the acoustic modes that couple to soft TO modes decrease dramatically, leading to a considerable κ reduction [22,23]. Surprisingly, experimental studies have shown that the lattice thermal conductivity increases at the ferroelectric phase transition in GeTe [25][26][27][28][29][30][31][32]. This is at odds with measurements in some other materials going through ferroelectric phase transitions, where a significant decrease in κ is observed [33]. The reason for the anomalous behaviour of κ at the phase transition in GeTe remains unknown. Understanding the microscopic origin of the κ increase at the ferroelectric phase transition in GeTe may lead to design of improved thermoelectric materials.
Here we study how driving GeTe near the ferroelectric phase transition via temperature affects its lattice thermal conductivity, making no assumptions about the nature of the phase transition. Unlike the previous work [22,23], we calculate interatomic force constants at different temperatures using the state-of-the art, temperature dependent effective potentials (TDEP) method [34][35][36]. We find that the increase of the lattice thermal conductivity of GeTe at the phase transition in the rhombohedral phase comes from negative thermal expansion that enhances the phonon group velocities. In the cubic phase the phonon lifetimes increase, leading to even more substantial increase of κ. Large anharmonicity of phonon modes minimizes the phonon lifetimes at the phase transition in the rhombohedral phase and leads to non-Lorentzian power spectra of phonon modes. We implement a new method of calculating κ that includes these non-Lorentzian lineshapes of the phonon power spectra near the phase transition. This approach further increases κ at the phase transition, which can be attributed to further softening of the phonon frequencies due to phonon-phonon interaction.
Results and discussion.
TO mode softening at the phase transition. GeTe is a ferroelectric material, exhibiting a spontaneous polarization below 600 − 700 K [37][38][39][40] (the critical temperature strongly depends on the free charge carrier concentration). This occurs due to a slight offset of the Te sublattice along one of the body diagonals of the rocksalt structure. At temperatures higher than 600 − 700 K, GeTe transforms to the rocksalt structure, losing its ferroelectric nature [41][42][43][44]. Below 600−700 K, the GeTe structure can be described by the following set of lattice vectors: where a is the lattice constant, b = 2(1 − cos θ)/3, c = (1 + 2 cos θ)/3, and θ is the angle between the primitive lattice vectors. The atomic positions in this structure are taken to be: Ge (0.0,0.0,0.0) and Te (0.5 + µ, 0.5 + µ, 0.5 + µ) in reduced coordinates. If the phase transition is displacive, as assumed in Refs. [22,23], the angle θ becomes 60 • at the phase transition, while the interatomic displacement parameter µ becomes zero [41,42]. In this type of the phase transition, the harmonic frequency of the soft mode also collapses to zero [45]. Here we define the harmonic frequency of the phonon mode as the square root of the eigenvalue of the dynamical matrix for that phonon mode at a certain temperature. In the order-disorder phase transition, both the harmonic frequency of the soft mode and the local interatomic displacement are non-zero [43,44,46]. It is still under debate which type of the phase transition occurs in GeTe [41][42][43][44][45].
We have calculated the phonon dispersion of GeTe at different temperatures using the TDEP method. This method allows us to calculate harmonic frequencies of phonon modes at different temperatures. Fig. 1 shows the harmonic frequencies of the two TO modes at the Brillouin zone centre in GeTe. The phase transition temperature obtained in our calculations is T C ≈ 634 K (see Methods). The harmonic frequencies of both phonon modes are strongly renormalized by anharmonic interaction and lattice thermal expansion. The harmonic frequencies of these phonon modes soften drastically at the phase transition, but they do not become zero, indicating that the phase transition in GeTe might not be of the displacive type.
The observed phonon frequency in experiments is not the harmonic frequency, but what we will call in the rest of the paper the anharmonic frequency i.e. the peak of the phonon mode power spectrum. Blue squares in Fig. 1 represent the computed anharmonic frequencies of both a The lower transverse optical (TO) mode (TO1) and b the higher TO mode (TO2) at the zone centre. Red circles and blue squares represent the calculated harmonic and anharmonic phonon frequencies, while other symbols represent the experimental results from Refs. [43,45,47]. The harmonic frequency is the square root of the eigenvalue of the dynamical matrix for q = 0 at a particular temperature, while the anharmonic frequency is the peak of the phonon mode power spectrum. The black vertical line represents the phase boundary between the rhombohedral and rocksalt structures in our calculations (≈ 634 K).
TO modes at the zone centre, which do fall to zero at the phase transition, in contrast to the harmonic frequencies.
Our results thus suggest that the observation of phonon mode softening is not a conclusive proof of the displacive type of the phase transition, as previously argued in the case of GeTe [45].
Our computed anharmonic TO frequencies at the zone centre agree fairly well with those measured in experiments, see Fig. 1. This agreement highlights the accuracy of the TDEP method even for the challenging cases of materials undergoing structural phase transitions. We note that the critical temperature in Ref. [45] is around 600 K, in Ref. [43] approximately 700 K and in Ref. [47] 650 K. This difference in the calculated and measured critical temperature is expected since the critical tem-perature strongly depends on the number of free charge carriers [48,49].
Phonon spectral function. Harmonic frequencies are a valid description of lattice dynamics only in the absence of phonon-phonon interaction. In the inelastic neutron scattering experiments that measure phonon spectral functions, harmonic phonons would produce zero linewidth signals, revealing infinitely long lived quasiparticles. However, in real materials phonons interact with each other, thus broadening phonon spectral functions, with linewidths inversely proportional to phonon lifetimes. This effect can be described using the concept of phonon self-energy that quantifies the strength of phonon-phonon interaction [50][51][52].
The probability of an incoming neutron to interact with a phonon system acquiring/losing energy Ω and momentum q is proportional to the spectral function [50][51][52]: where u q,s is the phonon displacement operator, ω q,s is the harmonic frequency of the phonon mode with wave vector q and phonon branch s, β is 1/k B T with k B being the Boltzmann constant and T the temperature, and is the reduced Planck constant. ∆ q,s and Γ q,s are the real and imaginary part of the phonon-self energy, respectively. In a weakly anharmonic material, this expression can be reduced to a Lorentzian with a halfwidth of Γ q,s and the position of the peak at ω q,s + ∆ q,s . In this case the phonon lifetime can be calculated as τ q,s = 1 2Γ q,s . However, in materials with strong anharmonicity, phonon spectral functions can exhibit exotic behaviour with satellite peaks, shoulders etc. [18,20,[53][54][55][56].
First we show the phonon spectral function of GeTe near the phase transition (631 K) along a high symmetry path, see Fig. 2. The black lines represent the harmonic frequencies ω q,s calculated at 631 K and the cyan dashed lines are the harmonic frequencies at 300 K. The harmonic frequencies of optical modes soften from 300 to 631 K. The acoustic mode frequencies soften as well in the F -Γ and the in plane (Γ -X) directions. This is the consequence of the overall softening of the second order force constants with temperature. On the other hand, the acoustic modes along the Γ -Z direction (the direction along the trigonal axis) stiffen because of the negative thermal expansion, as we will discuss in the next section. The spectral function shows further softening of the phonon frequencies due to anharmonic phononphonon interaction. As expected, the TO phonon at Γ can not be clearly resolved in the graph of the phonon spectral function. Figure 3 shows the unusual features of the spectral function for the second transverse optical mode (TO2) of GeTe at the zone centre for several different temperatures. A non-Lorentzian behavior of the phonon spectral function is evident even at 300 K, very far from the phase transition. The broadening of the power spectrum is large, revealing the short lifetime of this phonon mode. The distortion of the power spectrum is stronger at 625 K, with a very large shift of the peak of the spectral function compared to the harmonic frequency. For temperatures near the phase transition, the power spectrum peaks around 0 THz as expected at the phase transition (see Fig. 1). At temperatures higher than the phase transition temperature, the peak of the power spectrum is at non-zero frequencies, but is still strongly renormalized compared to the harmonic frequency.
Non-Lorentzian shapes of the phonon power spectra of the TO modes in GeTe at the phase transition are the consequence of the coupling of these phonon modes to the entire phonon bath, rather than coupling to specific phonons. We test this by calculating the power spectrum of the TO2 phonon mode disregarding the coupling to specific phonon branches. We find that the change in the power spectrum does not substantially vary depending which phonon branch we disregard. A similar behaviour can be seen in the spectral function of the TO1 mode (see Supplementary Note 1).
Lattice thermal conductivity in the Boltzmann transport approach. Next we calculate the lattice thermal conductivity of GeTe for a range of temperatures including both rhombohedral and rocksalt phases (see Fig. 4), combining the TDEP method with the Boltzmann transport approach. Overall, the lattice thermal conductivity is inversely proportional to temperature, as a result of the linear dependence of phonon populations with temperature above the Debye temperature ( 200 K for GeTe). The calculated κ deviates from the 1/T law near the phase transition, where there is a large κ increase at the phase transition and in the cubic phase. At high temperatures, the κ in the cubic phase regains the 1/T dependence.
There is an anisotropy in the lattice thermal conductivity in the rhombohedral phase ( Fig. 4), as a consequence of the van der Waals gaps formed due to the Te sublattice offset. The direction perpendicular to the van der Waals gaps (i.e. parallel to the trigonal [111] axis) has weaker bonding, leading to the lower phonon group velocities and κ in that direction. Anisotropy of the lattice thermal conductivity disappears close to the phase transition and is not present in the rocksalt phase.
The agreement between the computed and experimental κ values is very good in the whole temperature range of the rhombohedral phase, especially if we consider the average κ (dashed line in Fig. 4). Almost all experiments show an increase in the lattice thermal conductivity in the vicinity of the phase transition [25][26][27][28][29][30][31][32], similarly to our results. The discrepancy between our results and experiments increases as we get closer to the phase transition and particularly in the cubic phase. Our results consistently overestimate κ compared to experiments. In the rhombohedral phase we would expect scattering from lattice imperfections, such as ferroelectric domain walls [57][58][59][60][61], to further reduce the computed lattice thermal conductivity, but this is not the case in the cubic phase. The discrepancies could also stem from ommiting the higher order terms in the Taylor expansion of the interatomic forces (we include the second and third order terms only). However, to the lowest approximation, the fourth order anharmonic terms would only affect the real part of the self-energy [51] and not imaginary part, which would mean that the phonon lifetimes should remain unchanged. Additionally, since the force constants in TDEP are obtained through a fitting procedure, the fourth order force constants should be smaller than the third order ones. We checked the influence of the fourth order anharmonicity on the real part of the self-energy and found that it is small even for highly anharmonic soft modes at the phase transition. Therefore, higher order anharmonicity is not the reason for the differences in the calculated and experimental κ values.
Experimental values of lattice thermal conductivity are usually extracted from the total thermal conductivity measurements using the Wiedemann-Franz law to eliminate the electronic contribution to the thermal conductivity, whose validity at structural phase transitions is not well understood. Additionally, most references use the single parabolic band Kane model to extract the Lorenz factor from measurements of the Seebeck coefficient, which is not appropriate in GeTe due to the intrinsically complicated Fermi surface [31]. Such lattice thermal conductivity values can differ widely near structural phase transitions, and sometimes an increase in the total thermal conductivity is assigned to the electronic contribution. Here we show that the increase in the thermal conductivity of GeTe at the phase transition, at least partially, comes from the lattice thermal conductivity. The difference between our theoretical and experimental results in the cubic phase might be due to an inaccurate estimation of the electronic contribution to the total thermal conductivity in experiments. Measuring the thermal conductivity of GeTe in an applied magnetic field (to exclude the electronic thermal conductivity) would test our predictions of the increased lattice thermal conductivity at the phase transition.
To understand the anomalous behaviour of κ near the phase transition, we calculate the average phonon lifetimes and group velocities at different temperatures, Fig. 3. For example, the average values of the phonon lifetimes in the vicinity of the phonon frequency ω 0 are given as:τ where the sum goes over all phonon modes λ, and σ is the smearing parameter taken to be σ where ω D is the Debye frequency and N is the number of ω 0 frequencies.
The phonon group velocities of GeTe are mostly independent of temperature, except very close to the phase transition, see Fig. 5 a. In this temperature region (600-− 675 K), there is an increase in the phonon group velocities across most of the frequency range and most noticeably for phonons between 1 and 3 THz. This is the frequency region that contributes most to the thermal conductivity (see Supplementary Note 2). We thus conclude that the anomalous increase of the thermal conductivity at the phase transition is partially due to this rise in the phonon group velocities. We find that the increase in group velocities originates from the lattice contraction near the critical temperature [41,42,62]. This can be understood from the observation that the increase of phonon group velocities happens most prominently along out of the plane direction, where negative thermal expansion occurs [62].
Phonon lifetimes in weakly anharmonic materials usually follow the 1/T law, similar to thermal conductivity. This is the case in our calculations in the rhombohedral phase far from the phase transition. At the phase transition, however, the phonon lifetimes of acoustic modes decrease more than expected from the 1/T scaling. This is a signature of stronger anharmonicity of acoustic phonon modes closer to the phase transition in the rhombohedral phase. Optical modes have more complicated behaviour. While soft transverse optical modes near the zone centre have much lower lifetimes at the phase transition, this is not the case for transverse optical modes in the rest of the Brillouin zone. Even more intriguing is the behaviour of longitudinal optical modes. At low temperatures, they usually have frequency independent lifetimes, but at the phase transition they decrease exponentially with frequency. In the cubic phase there is a substantial increase in the phonon lifetimes with respect to the rhombohedral phase(see Fig. 5 b). The phonon lifetimes at 637 K are larger in most of the frequency range compared to the temperatures closest to the phase transition in the rhombohedral phase (625 K and 631 K). Interestingly, the phonon lifetimes at 675 K are larger than the phonon lifetimes at 300 K rescaled by temperature (i.e. by 675 K/300 K), revealing stronger intrinsic anharmonicity of the rhombohedral phase. Third order force constants are much stronger in the rhombohedral phase, even for very similar temperatures and structures (631 K vs 637 K), see Supplementary Note 3.
In conclusion, the increase of the lattice thermal conductivity near the phase transition can be attributed to two phenomena, depending on the structure of GeTe. In the rhombohedral phase, the increase in the thermal conductivity is due to negative thermal expansion that causes phonon group velocities to increase, increasing phonon mean free paths. On the other hand, in the cubic phase, phonon lifetimes increase dramatically due to lower intrinsic anharmonicity of this phase.
Lattice thermal conductivity using the Green-Kubo method. The non-Lorentzian behaviour of the phonon spectral function raises the question whether the Boltzmann transport equation employed in the calculation of the κ values in Fig. 4 is valid close to the phase transition. It is assumed in the derivation of the Boltzmann equation that phonons are well defined quasiparticles with unique frequencies and lifetimes. This implies that their spectral weights are the Lorentzian functions centred at the harmonic frequencies and the widths equal to the phonon lifetimes. However, evidently this does not hold close to the phase transition, see Fig. 3.
We include the effect of non-Lorentzian lineshapes on lattice thermal conductivity following the Green-Kubo approach derived in Refs. [63,64]. In this approach, the heat current in the Cartesian direction j is defined as [63,65,66]: where v j q,s is the group velocity of the phonon mode with wave vector q and branch s, V is the volume of the unit cell, N is the number of q points, and p q,s (t) and u q,s (t) are the Fourier transforms of the phonon momentum and position operators. The lattice thermal conductivity tensor is obtained by employing the Green-Kubo relation [67]: The spectral theorem is then used to relate the heat current autocorrelation function in the equation above to the one particle retarded Green function [50,64,[68][69][70]. The final expression for the thermal conductivity in this approach is [63,64]: where q,s is: v q,s,s is the generalized phonon group velocity [65]: where X q,s is the eigenvector of the phonon with wave vector q and branch s, Φ a,b ( R) is the force constant between atoms with masses m a and m b and the vector distance R.
We can separate Eq. 6 into two parts. The first part is diagonal (s = s ). In the limit of small anharmonicity (∆ q,s = 0 and Γ q,s ω q,s ), this part reduces to the standard solution of the Boltzmann equation in the single relaxation time approximation. The second, non-diagonal part (s = s ), can be reduced in the limit of small anharmonicity to the expressions similar to the ones given in Refs. [71,72]. The non-diagonal contribution to the lattice thermal conductivity will become prominent only if there is a substantial overlap in the spectral functions of two phonon modes with the same wave vector. This is only true in the case of strong anharmonicity or when spectral functions broaden due to disorder. We have implemented the expression given by Eq. 6 with the TDEP method. Figure 6 a shows the difference between our results obtained using Eq. 6 and BTE (Fig. 4). We can see that the BTE underestimates the thermal conductivity in the whole temperature range. Additionally, we can see that the underestimation is not large, around 10% even at high temperatures. Overall, the difference scales linearly with temperature. We can also see that the difference is largest at the phase transition, which is expected considering large deviations from the Lorentzian shape of the phonon spectral functions in this region (see Fig. 2). The contribution of the non-diagonal part of the lattice thermal conductivity is comparable to the overall enhancement of the diagonal part due to non-Lorentzian shapes of the phonon spectral functions.
To understand the reason for the increased difference in κ at the phase transition obtained by the standard BTE method and using the Green-Kubo relation (Eq. 6), we show the phonon lifetimes of GeTe calculated using the two methods in Fig. 6 b. We define the phonon lifetimes in the Green-Kubo method as: where c q,s is the harmonic heat capacity of the phonon mode ( q, s). The increase of the phonon lifetimes in the Green-Kubo method is visible in the whole Brillouin zone. It is, however, most prominent in the region of soft phonon modes, where the phonon lifetimes increase by a factor of 100. The increase in phonon lifetimes mostly comes from the shifts of the peaks of the spectral functions, which increases the heat capacity of the phonon modes compared to the harmonic heat capacity.
Discussion. Here we highlight the advantages of the Green-Kubo method we implemented here over the standard approaches for computing κ in strongly anharmonic materials. Unlike the Boltzmann transport equation, the Green-Kubo method accounts for non-Lorentzian shapes of phonon spectral functions. It is possible to include these effects also by running long molecular dynamics (MD) simulations. Compared to MD, the Green-Kubo method presented here is faster and easier to converge, which is particularly important if these methods are combined with first principles calculations. The results obtained using this approach are easier to interpret and analyse compared to traditional MD approaches. Unlike MD simulations, the Green-Kubo method uses the Bose-Einstein statistics for phonons. Additionally, this method gives the non-diagonal part of lattice thermal conductivity, accounting explicitly for the whole phonon power spectra. The method of Refs. [71,72] includes only the values of the phonon self-energy at the harmonic frequency in the evaluation of the non-diagonal contribution to κ and is not applicable to strongly anharmonic materials.
In conclusion, we have performed a detailed first principles study of the lattice thermal conductivity κ of GeTe close to the ferroelectric phase transition. The harmonic frequencies of the soft modes, although dramatically softened, do not become zero at the phase transition. On the other hand, strong anharmonicity causes the spectral functions of the soft modes to collapse and effectively peak at zero frequency. Strong anharmonicity minimizes the acoustic phonon modes lifetimes at the phase transition. However, we calculate an increase in the lattice thermal conductivity at the phase transition, in agreement with experiments. In the rhombohedral phase, this effect is due to negative thermal expansion that increases phonon group velocities. In the cubic phase, the increase in κ is primarily driven by increased phonon lifetimes due to smaller anharmonicity of phonon modes compared to the rhombohedral phase. We implement a novel approach to compute lattice thermal conductivity that includes the observed non-Lorentzian power spectra of phonon modes. Using the new approach, we find that the calculated κ increases even further at the phase transition, which is the consequence of larger phonon populations due to softening of the phonon modes caused by phonon-phonon interaction.
Computational methods.
We calculate the lattice thermal conductivity of GeTe from first principles using density functional theory (DFT) and the Boltzmann transport equation (BTE). In this approach, the lattice thermal conductivity tensor is given as [63]: where i and j are the Cartesian directions, N is the number of q points, V is the unit cell volume, c q,s is the phonon mode heat capacity and v i q,s is the group velocity of the phonon mode ( q, s) in the direction i. The relaxation time of the same mode is τ q,s = 1/2Γ q,s , where the imaginary part of the phonon self-energy due to threephonon scattering is given as [50]: Here λ is a short hand notation for ( q, s) and phonon momentum is conserved in the three-phonon processes above. The three phonon matrix element Φ λλ λ is calculated using: where m i is the mass of atom i, iα λ is the component α of the eigenvector for mode λ and atom i, and r i is the vector associated with atom i. ω λ is the harmonic frequency of the phonon mode λ, and Φ αβγ ijk is the third order interatomic force constant.
To get temperature dependent interatomic force constants, we use the temperature dependent effective potential (TDEP) method [34][35][36]. This approach employs a fitting procedure of the second and third order force constants to DFT forces on forces sampled along a molecular dynamics (MD) trajectory. To perform MD simulations, we developed a very accurate interatomic potential based on the Gaussian Approximation Potentials scheme [73,74] (see Supplementary Note 4). We calculated the structural parameters of GeTe at different temperatures using MD. We ran NVT ensemble on these structures to obtain atomic configurations. After 50 ps of equilibration we sampled 24 configurations on the 300 ps long trajectory. We used a 512 atom supercell to converge phonon properties with respect to the second and third order force constants cutoff (12 and 8 Å, respectively). We extracted selected configurations and carried out density functional theory calculations on them to obtain forces.
DFT calculations were performed using the ABINIT software package [75,76]. We use a generalized gradient approximation with the Perdew-Burke-Ernzerhof parametrization (GGA-PBE) [77] for the exchangecorrelation functional and the Hartwigsen-Goedecker-Hutter (HGH) pseudopotentials [78]. Wave functions are represented in a plane wave basis set with the cutoff of 16 Ha, and the Γ point is used for sampling of electronic states. Sampling of phonon states is carried out using a 30 × 30 × 30 q-point grid.
Code availability.
The code that implements the temperature effective potential (TDEP) method is available from Olle Hellman upon reasonable request. The additional data processing scripts and codes are available from the corresponding authors. GAP software is available for non-commercial use from www.libatoms.org.
Data availability.
The authors declare that the data supporting the present work is available from the corresponding authors upon reasonable request. figure 1 shows the spectral function of the zone centre second transverse optical (TO2) phonon mode at 631 K, computed including and excluding transverse acoustic-TO2 mode coupling. When we neglect the coupling of the TO2 mode with transverse acoustic (TA) modes in our calculation, the peak of the spectral function shifts closer to the harmonic frequency. A similar behaviour is observed if we disregard the coupling of the TO2 zone centre mode with TA1 and TO2 phonon branches. This illustrates that although coupling to TA modes is strong, it is not the sole source of the exotic behaviour of the soft TO2 phonon power spectrum. In Supplementary figure 1 we show the types of coupling that bring the phonon power spectrum closest to the Lorentzian shape.
Acknowledgements.
The spectral function of the zone centre first transverse optical mode (TO1) in GeTe is illustrated in Supplementary figure 2 a for several temperatures. We can see a secondary peak in the TO1 spectral function at 625 K, indicating that anharmonicity of this mode is even larger than that of the TO2 mode. This is somewhat to be expected since the harmonic frequency of the TO1 mode is lower than that of the TO2 mode, leading to larger coupling to the rest of the modes. At 631 K the spectral function of this mode has a peak at almost 0 THz. In the rocksalt phase, the TO1 and TO2 modes are degenerate and have the same lineshapes. In Supplementary figure 2 b we show the spectral function of the zone centre TO1 mode at 631 K with full anharmonicity, excluding the coupling of TO1 mode with transverse acoustic modes and excluding the coupling to the TA2 mode and itself. We can see that the Lorentzian shape of the spectral function is not regained after excluding different types of interaction and thus we conclude that the non-Lorentzian shape of the TO1 mode power spectrum and the softening of the TO1 frequency is due to coupling to the entire phonon bath, rather than a particular phonon branch. To analyze the contribution of specific phonon modes to the total lattice thermal conductivity, we calculated the spectral thermal conductivity at different temperatures by convolving the total lattice thermal conductivity with a Gaussian of an appropriate width (see the main part for more details). We have found that acoustic modes give the dominant contribution to the lattice thermal conductivity, as one would expect (see Supplementary Fig. 3). At temperatures near the phase transition, the overall contribution of transverse modes (acoustic and optical) to the lattice thermal conductivity of GeTe diminishes. At the phase transition, the largest contribution to κ comes from the phonon modes in the frequency range between 1 and 3 THz. This is the frequency region that shows the most prominent enhancement of phonon group velocities due to negative thermal expansion in the rhombohedral phase. figure 4 shows the average phonon lifetimes of GeTe at different temperatures scaled by T /300 K, where T is the temperature. This enables us to see whether the decrease in the phonon lifetimes near the phase transition is due to phonon populations or increased anharmonicity of the material. Phonon lifetimes scale inversely with temperature for the temperatures far from the phase transition temperature (T C ≈ 634 K), which indicates that anharmonicity is not increased for those temperatures. However, close to the phase transition (631 K) in the rhombohedral phase, we can see a dip in the phonon lifetimes for most of the frequency range, revealing increased anharmonicity near the ferroelectric phase transition in the rhombohedral phase.
In the cubic phase, however, we see an increase in the scaled phonon lifetimes compared to the rhombohedral phase. Further from the phase transition (675 K), the scaled lifetimes are larger than the scaled phonon lifetimes at 300 K, which we attribute to lower intrinsic anharmonicity of the cubic phase.
To investigate in detail the reason for this behaviour of phonon lifetimes, we checked the change in anharmonic force constants with temperature (see Supplementary Fig. 5). To compare anharmonic force constants, we define a norm of the anharmonic force constant matrix as: N AF C (i, j, k) = α,β,γ |Φ αβγ ijk |, where i, j, k denote atoms in triplet and α, β, γ are the Cartesian directions. Surprisingly, most of the anharmonic force constants decrease with temperature, in both rhombohedral and cubic phases. There is a sudden drop in the norm of the anharmonic force constants at the phase transition. The anharmonic force constants are drastically smaller in the cubic phase, explaining higher scaled phonon lifetimes in this phase. However, the temperature dependence of anharmonic force constants does not explain the increased anharmonicity of the transverse acoustic modes at the phase transition in the rhombohedral phase (see Supplementary Figure 4). It is the scattering phase space (SPS) that increases at the phase transition appreciably, which leads to higher anharmonicity in the rhombohedral phase. We show this in Supplementary Figure 6 by calculating the scattering phase space for phonons at different temperatures. We do this by setting the matrix element Φ λλ λ in Eq. (11) of the main part to 1 and calculating the imaginary part of self-energy at the harmonic frequency for each phonon mode. The results are then scaled by temperature to minimize the effect of phonon population on the calculated value of the scattering phase space. We can notice that for the phonons in the frequency re- gion around 1 THz the SPS increases dramatically near the phase transition, which explains the lower contribution of transverse optical modes to total κ at the phase transition compared to 300 K (see Supplementary Figure 3) and prominent dip in the scaled phonon lifetimes for this frequency region (see Supplementary Figure 4). Phonons in the frequency region around 2 THz have a smaller SPS, which again is consistent with the results presented in Supplementary Figure 4 (the scaled phonon lifetimes at the phase transition and 300 K are comparable in this frequency region). However, this can not be the reason for the increased lattice thermal conductivity at the phase transition, because this effect is noticeable only due to the temperature scaling of phonon lifetimes and SPS and does not exist if one takes phonon populations into account (see Fig. 5 b of the main part). Finally, the available scattering phase space of longitudinal optical (the highest frequency) phonons increases dramatically with temperature, which explains their unusual frequency dependence.
In the cubic phase the SPS of the majority of phonons (except LO phonons) increases linearly with temperature, which balances out the decrease in the anharmonic force constants and leads to almost constant phonon lifetimes with temperature. The decrease in κ in the cubic phase between 700 K and 800 K comes primarily from a decrease in phonon group velocities.
Supplementary note 4: Gaussian Approximation Potential
To accurately sample atomic displacements at a certain temperature for the temperature dependent effective potential (TDEP) method, one must perform molecular dynamics (MD) simulations. Most commonly, one would run ab initio MD to achieve this. However, ab initio MD is extremely computationally expensive and the accuracy of such calculations is typically reduced in order to speed up the computation of forces and energies. Another important drawback of ab initio MD is that it does not scale linearly with the number of atoms, so carrying out these calculations on large supercells needed to reach convergence in GeTe would be very expensive.
To overcome these difficulties, we developed an accurate interatomic potential based on the Gaussian Approximation Potential (GAP) framework using ab initio calculated forces for different atomic configurations in different GeTe supercells. The details of the fitting of this interatomic potential will be the focus of the future publication. To check the validity of this interatomic potential for sampling atomic configurations through MD simulations, we compared the phonon band structures calculated using forces from DFT and GAP (see Supplementary Fig. 6 a). The comparison between these two methods is very good.
To further check the appropriateness of GAP for MD simulations, we compared the errors from GAP and DFT. We calculate forces on a set of structures and atomic configurations using DFT with fully converged parameters. Then we calculate the same forces using GAP and find the differences with respect to DFT. We then bin the errors and present the results in Fig. 6 b. We highlight that the structures taken in this study are not in the training set used to obtain the GAP potential which makes the low value of errors in forces even more remarkable.
As we previously said, to actually run ab initio MD simulations, one would have to reduce the accuracy of DFT calculations. To mimic this, we calculated forces on 216 atom supercells of GeTe using a 2x2x2 k-point grid and the energy cutoff of 16 Ha for plane waves. We consider this calculation as high accuracy. To perform low accuracy calculations, we chose an 1x1x1 k-point grid and the energy cutoff of 10 Ha for the plane wave expansion of electronic wave functions. We then take the differences in the forces in these two calculations and bin them in a similar manner to the previous case with the GAP potential (see Supplementary Fig. 7 b). We can see that the errors from the GAP potential are comparable to the errors obtained from the reduced accuracy first principles calculations, which gives us confidence in the sampling method we used. b The force error probability distribution function for GAP and DFT with reduced accuracy (see text for more details). | 8,909.6 | 2020-12-15T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Generalized unitarity method for unstable particles
In theories with unstable particles, unitarity is satisfied by the inclusion of only stable states in unitarity sums. Hence unitarity cuts are not to be taken through unstable particles. This raises a challenge to the generalized unitarity method, whose aim is to reconstruct amplitudes by analyzing sets of unitarity cuts. Nevertheless, under some general physical conditions, and perhaps some methodological modifications, we prove that the method is still reliable for one-loop amplitudes containing resonances. We discuss some simple examples which illustrate these features.
INTRODUCTION
Measurements of parameters of the Standard Model to very high precision is a landmark for the physics program of the Large Hadron Collider [1]. In recent years these have triggered an ongoing stream of research activities dedicated to assess precise predictions within perturbative quantum field theories. The immediate consequence is the development of modern tools to address the calculation of loop integrals. In this regard there is a pressing need to obtain a better knowledge of the analytic structure of such integrals to uncover more streamlined methods for their computation. This implies taking into account the investigation of the so-called cuts of internal propagators associated with intermediate particles in a given scattering amplitude. The major importance of cuts in this context is that it allows one to efficiently probe the analytic structure of loop integrals [2,3].
From unitarity constraints, we know that Feynman integrals should be multi-valued functions, whose discontinuities are precisely described by cuts. Actually, scattering amplitudes can be structured in terms of their singularities, so in principle the investigation of branch cuts and other singularities enables one to calculate loop amplitudes. Fourdimensional amplitudes that are uniquely specified by the nature of their branch cuts are said to be cut-constructible. Modern unitarity methods build on Landau conditions [4] in order to use cuts to set up projectors onto a basis of master integrals [5][6][7][8][9][10][11][12]. For recent applications, see Refs. [13][14][15][16][17][18][19][20][21]. A precise definition of cuts is available for given classes of cuts. Among these, we can quote the so-called unitarity cuts which focus on a particular external channel [22][23][24][25]. Historically, the unitarity method [26][27][28][29][30] was established as a systematic framework for one-loop evaluations, and is applicable to both supersymmetric and non-supersymmetric theories.
The standard practice of the unitarity method requires the replacement of two internal propagators by Dirac delta functions which projects the loop momenta they carry onto their on-shell values. On the other hand, in generalized unitarity [5,6,[31][32][33][34][35][36][37], to be briefly reviewed below, one considers additional cut conditions to constrain other momenta to their associated on-shell values. As a consequence, if the momenta carried by more than two massless propagators take their on-shell values, the solutions to the cut conditions are now complex, which implies that the associated delta functions must give zero [6,8]. This observation has led to the concept that cuts should be computed via contour integration so that the associated contours should be suitably deformed in such a way as to encircle the poles of the cut propagators [8,38,39].
The obvious requirement here is that unitarity must be satisfied to all orders in perturbation theory. However, we know that many of our theories possess unstable particles which do not appear as asymptotic states. Should such unstable particles be incorporated nevertheless in unitarity relations? This issue was addressed by Veltman [40][41][42][43][44].
The conclusion is that one should not take cuts through unstable propagators 1 , hence unstable particles are not enclosed by unitarity sums. This can be generalized to encompass also unstable ghostlike resonances emerging in higher-derivative theories such as quadratic gravity and Lee-Wick theories [45]. As the astute reader might have noticed, this might potentially put some obstructions to unitarity methods. Notwithstanding the foregoing remark, in this paper we will discuss how such methods are still solid for the calculation of loop amplitudes that comprise resonances of any type. Here we will use units such that = c = 1. We take the Minkowski metric as η µν = diag(1, −1, −1, −1).
The knowledge of tree amplitudes can be used to seek information about loop integrands. The action of taking loop propagators on-shell is known as a unitarity cut. It comes from the unitarity constraint of the S-matrix, which is a statement on the generalized optical theorem. That is, for an arbitrary process a → b one has that where dΠ f is the Lorentz-invariant phase space measure [51] and the sum runs over all possible sets f of intermediate states and there is an overall delta function associated with energy-momentum conservation. In the above expression, the A's are (invariant) scattering matrix elements. In a perturbative quantum field theory, when an expansion in powers of a small coupling constant exists, this constraint instructs us that the imaginary part of scattering amplitudes at a given order is obtained from the product of lower-order amplitudes. For instance, in the case of one-loop processes, one finds a product of two tree amplitudes on the right-hand side of Eq. (1). Usually this product presupposes the sum over all possible on-shell states that can cross the cut. One fundamental requirement is that only states from the physical spectrum of the theory are allowed to be included in this sum [40,45]. In a unitarity cut, we restrict the loop-momenta to be on-shell and only physical modes are enclosed in the two on-shell amplitudes on the right-hand side of Eq. (1). The cutting rules also consider integrals of any remaining freedom in the loop momentum after prescribing the so-called cut constraints (and, of course, momentum conservation). Unitarity cuts are efficient tools that enable one to relate the pole structure of the integrand with the branch-cut structure of the associated loop integral. Unitarity cuts can also involve more than two cut lines, which implies that several internal lines are taken on-shell. Here we say that we are able to reconstruct amplitudes from sets of generalized unitarity cuts. It turns out that such a set is overcomplete, which means that we can recourse to different strategies for extracting the relevant information. For example, one interesting approach is to use the method of maximal cuts [50,52]. In this case we consider the maximum possible number of cut lines so that each cut furnishes a small piece of information. That is, we begin with generalized cuts possessing the maximum number of cut propagators. We use the information from such cuts to lay out an initial ansatz for the amplitude. Further cuts with reduced number of cut propagators are then considered and their information is systematically gathered in order to improve such an ansatz. The aim is to find an integrand that reproduces all the unitarity cuts. In principle this helps along the construction of the amplitude [50,[53][54][55].
Here we will study generalized unitarity cuts on the level of the integrand, which can be written as a product of on-shell (tree-level or lower-loop) amplitudes. In particular, we are interested in considering maximal cuts consisting of only three-point tree amplitudes, namely: The information from unitarity cuts can be used most efficiently if a complete basis of integrals is known. Indeed, all one-loop amplitudes in D dimensions can be written as a sum of one-loop scalar integrals I m , m = 1, 2, 3, . . . , D [52]: where R denotes rational terms (contributions that do not have branch cuts), C (i) D are coefficients associated with tree-level amplitudes and I (i) m are m-gon scalar integrals. In D = 4, one-loop integrals reduce to a combination of box, triangle, bubble and tadpole scalar integrals [49,[56][57][58][59][60][61][62]. The latter are related to the coefficients C (l) 1 ; such integrals vanish in dimensional regularization when only massless particles circulate in the loop. In D = 4 power counting demonstrates that the scalar box and triangle integrals do not display UV divergences, but IR divergences due to possible massless corners. The bubble integrals have UV divergences but no IR divergences as both corners are massive. Integrated results of such integrals can be found in several places in the literature, see for instance Refs. [52,58,59,61,[63][64][65][66].
Therefore in four dimensions the following expansion is generically valid to any one-loop amplitude: where K i are sums of external momenta and I n are the associated scalar integrals. The coefficients c n are calculated using generalized cuts. For instance, consider a generic one-loop point amplitude written in the basis above. In this section we are working with only stable particles circulating in the loop. If we cut four propagators then the four dimensional integral becomes trivial: where A tree j are tree-level amplitudes and, using a spectral representation [45], are the cut propagators (or positive-frequency Wightman functions) associated with stable particles. For brevity we have absorbed 2π factors into the definition of the loop integral in Eq. (5). Since in the case of stable particles the spectral function σ(s) has a pole at one-particle states, we can also write that (assuming we are not above a given multi-particle threshold) In other words, the "cut of a propagator" means removing its principal part while preserving the delta function imposing the on-shell condition. For simplicity we have taken all internal propagators to have the same mass m, which can be zero. On the other hand, when applied to the master integrals, the quadruple cut selects the contribution from the box integral with momenta K 1 , K 2 , K 3 , K 4 at the corners. Therefore where I 4 (K 1 , K 2 , K 3 , K 4 ) is the associated 4-point box scalar integral: In particular, the quadruple cut of the scalar box integral is a Jacobian factor. This factor appears on both sides of the equation. Hence, comparing both expressions for ∆ 4 A 1−loop , we see that the coefficient c 4 can be expressed as a product of tree-level amplitudes [49]: where the factor of 1/2 emerges as there are exactly two solutions for the set S of cut conditions determined by the four delta functions of the cut propagators. Hence in principle the quadruple cut of the scalar box integral would suffice to calculate the box coefficient. Furthermore, this implies that the maximal cut in this case reads which is a direct proof of the one-loop form of Eq. (2) for stable particles.
In conclusion, equipped with the integral reduction (4) valid to all one-loop amplitudes, and benefiting from the factorization property of the amplitude, by using unitarity methods one is in a position to reconstruct one-loop amplitudes from tree-level information without the often burdensome Feynman diagram expansion. Moreover, we see that the application of the generalized unitarity method requires exploring further discontinuities which implies that a different number of propagators ought to be put on-shell in comparison with textbook unitarity cuts. And this can only be achieved if there is a contribution of an isolated simple pole at p 2 = m 2 (or p 2 = 0 for massless particles) coming from one-particle states -in other words, if the cut propagators have the expected cut structure, as given by Eq. (7).
Actually, one must be more careful when resorting to generalized unitarity, since the solutions to the cut conditions are generally complex, leading to delta functions that trivially yield zero. The solution is to use contour integration. That is, instead of replacing the propagators by delta functions, one must replace the original contour of integration [8]. In summary, the idea is that, as the support of the delta functions is outside the physical region, the integration procedure is implemented in terms of contour integrals in C 4 , the loop momentum being regarded as a complex vector. Such contours are such that their product encircles the poles in the four-dimensional components of the loop momentum. By performing the four-dimensional loop-momentum integral over each contour, the residue at the corresponding encircled pole is attained. In fact, one defines the product of delta functions to generate exactly this contour integral [8].
Of course, the aforementioned operation does not leave expression (4) intact, as there are terms that integrate to zero in the original contour which no longer necessarily vanish if we integrate over general contours in the complex plane. In order to do away with such spurious terms, one evaluates the integral over a suitable linear combination of new contours in such a way that such additional contributions are always projected out. This produces the coefficients of the box integrals as given in Eq (10). For a careful survey of all subtleties associated with this discussion, see Ref. [8]. See also Ref. [12].
Possible issues with unstable particles
In order to understand what could be the general issues involving unstable particles, let us imagine a given reaction consisting of the scattering of a particles a 1 + a 2 + · · · + a n → A + c 1 + c 2 + · · · + c n (12) producing final products given by a collection of c particles and a particle labeled A. If all final products are described by stable particles, then in principle there is no concern in evaluating an on-shell amplitude such as A(a → A + c) in any given order in perturbation theory. However, if A is heavy enough, its coupling to lighter states in the theory, say labeled by b, makes it decay: and now the scattering process we have to consider is a → b + c with the associated on-shell amplitude A(a → b + c).
In this case A enters the calculation as a virtual particle, not as an external state, which implies, by the Feynman rules, the presence of its propagator 1/(p 2 − m 2 A ) in internal lines of given Feynman diagrams. The point is, if the phase space contains the resonance region ( p b ) 2 = m 2 A , then the results calculated from perturbation theory cannot be trusted close to this region.
In other words, basically a diagram with a single internal A propagator must met one of the following conditions [42]: (i)m A < m b , which always happen if A is stable or if A is an unstable particle with threshold E < m b such that it cannot decay into b. In this case perturbation theory can still be trustworthy 3 ; (i)m A > m b , which implies that A is unstable and can decay to b-particles. In this situation, the phase space contains the resonance, and perturbation theory can no longer be generically trusted.
The most direct approach to solve this problem is to consider a resummed form for the propagator of the unstable particle. However, for gauge theories, the resummation procedure must be done carefully, otherwise one might expect to be confronted with issues associated with gauge invariance and gauge-fixing parameter dependence [42][43][44][67][68][69][70][71]. Another possible approach is provided by the so-called complex-mass scheme (CMS) [72][73][74]. In few words, it corresponds to a suitable generalization of the on-shell renormalization scheme. In the latter, the renormalized mass m is specified by demanding p 2 = m 2 to be the pole position associated with the resummed propagator. This is fine for stable particles -for unstable particles, the self-energy acquires an imaginary part, and as a consequence the renormalized mass does not correspond to the pole position. The modification proposed by the complex-mass scheme is the following: Define a complex renormalized massm by requiring that p 2 =m 2 matches the pole position of the resummed propagator for unstable particles. The fact thatm is complex, and therefore cannot be associated with a physical entity, should be of no concern as renormalized parameters in the Lagrangian do not carry any physical meaning [42]. For a recent discussion on the definition of the mass and width of a normal unstable particle, see Ref. [75].
One can show that this modification put forward by the complex-mass scheme avoids the aforementioned issues appearing in gauge theories as it renders unnecessary the resummation of internal propagators [42][43][44]. Indeed, the bare propagator of the unstable particle A (or its scalar part) within this method acquires the form (in momentum space) By where m A andΓ are real, one can prove that the above propagator can be envisaged as the resummed form of the following propagator in a scheme in which the renormalized mass is given by m A [42]: The self-energy obeys Σ(m 2 A ) = im AΓ . That is, the bare propagator within the complex-mass scheme is intrinsically resummed. In addition, notice that, as m AΓ is evaluated throughout the renormalization procedure, we must envisage this quantity as a function of the coupling constant λ describing the interaction between the unstable particle and the lighter states, m AΓ ∼ O(λ). For more technical details concerning the complex-mass scheme, we refer the reader Refs. [42][43][44].
Given the result (14), one can be tempted to think that the complex-mass scheme allows for an adequate spectral representation for the propagator of the unstable particle such that an unambiguous one-particle state contribution can be identified. That this is not straightforward can be seen as follows. Within the complex-mass scheme, the positive-frequency Wightman function associated with the unstable particle in momentum space reads [42][43][44] Observe that the CMS cut propagator does not quite have the correct cut structure as given by Eq. (7) and, as a result, in principle we cannot connect D +,CMS with physical particles carrying positive energy forward in time. However, at leading orderΓ/m A ∼ λ this is possible; when taking the limitΓ → 0, D +,CMS (p 2 ) turns into a nascent delta function, thereby recovering the standard cut structure for the cut propagator, which allows us to associate with the propagation of positive-energy physical particles. In general, we can write [42][43][44] where the first equality is valid to leading order, and So we see that F (p 2 ,Γ/m A ) corresponds to higher-order contributions. For the phase space as a whole, the function D +,CMS (p 2 ) is suppressed as the imaginary part ofp 0 is small. However, in the region of resonance the small imaginary part yields a non-negligible contribution given by the nascent delta function above. This means that, outside the resonance region, where the CMS cut propagator does not have the correct cut structure (i.e., far from the poles of D +,CMS ), the cut of the CMS propagator of the unstable particle will produce a contribution of higher order in perturbation theory, which can thus be neglected. Only when one is close to the resonance region -which can take place depending on the external momentum configuration of a diagram -that the CMS cut propagator is nonnegligible. These features persist when including corrections to the leading-order result [42][43][44]. In any case, to leading order the cut of the unstable particle propagator is simply the cut through its one-loop correction; in other words, through stable particle propagators. This is a consequence of the fact that at one-loop order the widths in the Complex-Mass Scheme and the traditional on-shell scheme coincide.
When the unitarity method works for unstable particles
As shown in Refs. [40,45], in a theory with unstable particles (of any kind), unitarity is satisfied by the sole inclusion of asymptotically stable states. This means that cuts should not be taken through the unstable particles. Unitaritybased methods represent a kind of generalization of the optical theorem in that they investigate discontinuities of an amplitude in several kinematical channels in order to fully reconstruct loop amplitudes. But if discontinuities of a given loop amplitude are given by the cutting rules, how can one make sense out of the method when one is to cut a propagator associated with an unstable particle?
Let us suppose that all internal propagators of a given one-loop amplitude describe unstable particles. Naively one would conclude that one cannot apply directly unitarity methods to unstable particles. Fortunately, this is not the end of the story. That the method can still be applied to these cases can be observed by recalling the aforementioned discussion of unstable particles within the complex-mass scheme. Indeed, as we have mentioned, at leading order the cut propagator reproduces the nascent delta function typical of stable particles when one is close to the resonance region. Therefore in the complex-mass scheme we write (21) so that, at leading order and close to the resonance region, we find that This implies that the coefficient c 4 of the box integral is still given by Eq. (10) at leading order. In particular, the maximal cut of the one-loop amplitude is also given by the one-loop form of Eq. (2), which represents a proof of this result at leading order to the case of unstable particles running in the loop. Moreover, it is also clear when this procedure cannot be trusted -this is when is off resonance, so that we are not able to put the internal momenta on-shell, see Eq. (19). So when cutting an internal line corresponding to an unstable particle off resonance, the result we obtain is not a contribution to the imaginary part of the scattering amplitude, and, as a consequence, not a valid contribution to the coefficients of the scalar integrals in the above expansion given by Eq. (4). So it is not at all clear whether generalized cuts of propagators associated with unstable particles produce sensible results in this case. We will get back to this off-resonance topic shortly. At higher orders, as the cut of a propagator associated with unstable particles must correspond to the cut through a loop of stable particles when one is close to the resonance region, again we find the correct cut structure. As a result, we believe that Eq. (2) must still be valid to all orders in perturbation theory, but a general proof of this result is beyond the scope of the present work. This is an interesting exploration, and we hope to return to this calculation in the near future.
This discussion shows us that the unitarity method still makes sense in the case of unstable particles running in the loops; in order to implement the technique in a straightforward way, one must ensure that external momentum configurations of an amplitude allows the unstable particle propagator to become resonant. In this case the unstable particle cut propagator will have the correct cut structure to guarantee that unitarity is satisfied. In turn, from previous discussions, we know that the correct strategy for the cut of several propagators is to interpret the corresponding loop integral as a contour integral in C 4 . Moreover, the determination of the contours is such that it must encircle one-particle poles so that we can define the result of integrating over the product of delta functions as given by this contour integral. This is a necessary requirement on the grounds of unitarity -cuts are applied only on the stable particles of the theory so that the sum in Eq. (10) is guaranteed to be over only asymptotically stable states. It is only in this case that one can assert that the result of the integration over the contour |p 2 −m 2 A | = ε that encircle p 2 −m 2 A = 0 will represent an on-shell particle carrying positive energy forward in time. However, it is also clear that when one is off resonance, then one is also away from the pole of the propagator; the associated contour integration must have a vanishing residue in this case. To get a finite result, one must be close to the resonance region; in this case, the operation described in the previous section will yield a well-defined residue.
On the other hand, there is also other situation that the method can be applied without further issues -this is the so-called narrow-width approximation (NWA). In this situation, the coupling to the decay products is taken to be sufficiently small so that only resonance production is significant. In this limit, we can take Here γ = Γm, where Γ is the width of the resonance. In the narrow-width approximation, Γ m and hence that is, near the resonance, we can treat the resonant particle as being on-shell. This means that in this limit the cut taken through the unstable particle with Γ → 0 recovers the result from the cut through the decay products 4 . So effectively the NWA allows us to regard a long-lived resonance as being approximately a stable particle. Moreover, for gauge theories, the NWA does not suffer either from the gauge invariance problem alluded to above [42]. Therefore in this situation the usual reasoning that lies behind the generalized unitarity method can be fully applied.
In other words, for unstable particles the present practice of the unitarity method is valid if the assumption of a resonant unstable propagator is warrant. This can happen depending on external momentum configurations (and this can be proved at least in the Complex-Mass Scheme, as discussed above) or else one should verify whether the narrow-width approximation holds in the particular case under studied. In the complex-mass scheme, for a resonant unstable propagator, one can show that the cut of this propagator follows through the cut of only stable particles, preserving unitarity in Veltman's sense (i.e., by using the Largest Time Equation and employing suitably defined cut propagators). At higher-orders life will not be so simple, but in any case one can still prove that unitarity is satisfied.
Lee-Wick theories
Now let us discuss Lee-Wick-type theories [76][77][78][79][80]. As well known, these class of theories have a ghost mode which can be directly seen from the propagator, which has the form 1 The overall negative sign in the second term signals that this pole is ghost-like. However, the coupling to light particles of the theory makes the heavy ghost state unstable. As discussed above, this implies that generically the associated propagator must be resummed to ensure the validity of perturbation theory. A spectral representation of the corresponding cut propagator can be written as [45] Following the same reasoning employed for normal unstable particles, we find that the cut of internal Lee-Wick propagators of a given one-loop amplitude cannot produce in general a contribution to its discontinuity. However, recall that the structure of a normal resonance propagator is given by with Im[Σ(q)] > 0. Now if the Lagrangian gets modified with a 2 term, the propagator accordingly is changed to Setting Λ → ∞, we obtain the normal resonance. However for large finite Λ we find a heavy-mass resonance, when q 2 ∼ Λ 2 . Near this resonance, the propagator behaves as The residue at this pole is always negative. Furthermore, the sign of the width is always opposite from normal. That is, for both finite m and Λ, we verify the appearance of resonances of both types in the same propagator. In both cases, the imaginary part of the self-energy arising from the coupling to stable states is the same; notwithstanding it manifests itself in distinct ways near the resonances. This means that ghost resonances also obey a unitarity relation as a consequence of the fact that normal resonances satisfy this constraint [45]. The above discussion shows us how to implement unitarity-based methods to one-amplitudes involving unstable ghost modes. In the complex-mass scheme we write the Lee-Wick propagator as On the other hand, as the aforementioned discussion indicates, normal resonances and ghost-like resonances have a similar structure [45] iD(q) ∼ Zi with Z = +1 for a normal resonance and Z = −1 for the ghost resonance. The imaginary part is Z-independent, This implies that the CMS Lee-Wick cut propagator will have the same structure as the propagator associated with a normal unstable particle within the complex-mass scheme. In particular, close to the resonance region, at leading order it will have the form given by Eq. (18), producing the correct cut structure. Hence Eq. (2) is also valid for Lee-Wick theories when one is close to the resonance region. We finally remark that again we can also resort to the NWA in order to apply unitarity techniques to one-loop amplitudes with unstable ghost modes running in the loop. Nevertheless, we emphasize that one must be very careful when dealing with ghost modes in the NWA; in order to reproduce correctly the cuts one must resort to a modification of the contour in performing the loop momentum integration, as originally discussed by Lee and Wick [45].
When the unitarity method does not seem to work for unstable particles
Suppose we wish to study a particular process a + b → c + d which takes place exclusively through loops of unstable particles (of any type) and let us assume that we are off resonance. As asserted above, when one is off resonance the cut through the propagator of an unstable particle will always violate the cut structure. That is, the cut of an unstable particle propagator off resonance yields a contribution of higher order in perturbation theory -such cuts can surely be disregarded. Hence if we use the reasoning above, the cuts of internal unstable propagators will produce a vanishing contribution, resulting in a vanishing amplitude by employing the current practice of the unitarity method to reconstruct it. This is obviously an unsatisfactory answer since we know that amplitudes can be built using standard Feynman rules and Feynman diagrams. So can we take this as an indication that the unitarity method cannot be trusted in this case, as it seems that the associated amplitude (or some of its contributions) could not be determined from the knowledge of its cuts?
There are ways to circumvent this issue. For instance, recall that, in a theory containing unstable particles, unitarity is satisfied by the inclusion of only stable states in unitarity sums. This suggests that, in order to generically implement the generalized unitarity method to a theory containing unstable particles, we must consider the inclusion of only cuts from stable states in unitarity sums. This means that in general one must be able to reformulate the theory in terms of the stable particles only, eliminating from the outset any unstable fields in the Lagrangian. But this will actually introduce non-local vertices in our description. There is one constraint that we should impose in this situation. In order to preserve unitarity, the only acceptable poles in tree-level amplitudes are the ones that come from propagators. Since non-local vertices may generate unphysical poles that would not be be consonant with an exchange of a physical particle, we must impose that such poles have zero residue. Or we must claim that the residues of all such spurious poles must cancel to give zero. Of course other constraints can also be imposed on the non-local vertices, such as proper infrared behavior, valid Ward identities, etc. For a recent interesting discussion of tree-level scattering amplitudes of a particular category of non-local field theories, see Ref. [81]. One-loop unitarity for a class of perturbative scalar quantum field theories with non-local operators of fractional order was established in Ref. [82].
EXAMPLES OF THE USE OF THE UNITARITY METHOD FOR UNSTABLE PARTICLES
We now proceed to discuss with some detail three examples which can be relevant for particle physics in order to see how one can implement the unitarity method when unstable particles run inside loops in scattering amplitudes.
Normal unstable particles
Let us begin our discussions with normal unstable particles. Here we wish to investigate the one-loop helicity amplitude A 1−loop (+ + ++) associated with γ − γ scattering via W loops in the Standard Model [83][84][85][86][87]. It is known that the reaction γγ → γγ via W -boson at one-loop is finite [88]. The standard Feynman-diagram formulation proceeds via box, triangle and bubble diagrams in which one must allow also for unphysical Higgs bosons (when working in a suitable non-linear R ξ gauge) and Faddeev-Popov ghosts in the loops (besides, of course, the W particles). For our study we do not need to consider the unphysical particles; the finiteness of the amplitude will be easily established as we will see. We will use the method of maximal cuts in order to evaluate the amplitude in the expansion in terms of one-loop master integrals. The diagram is depicted in Fig. 1. Since the W boson is heavy, it is unstable and hence in principle we may not be allowed to cut the W internal lines. However, the W mass M is about 80 GeV and its decay width Γ is about 2 GeV [89], hence Γ M and in principle one is justified in resorting to the narrow-width approximation, at least in a primary analysis. In this case, the production and decay of the resonance can be treated approximately in a separate way. As discussed above, the propagator in the NWA has the correct cut structure and hence one can safely use the unitarity method in order to reconstruct the aforementioned box amplitude. On the other hand, as mentioned all one-loop integrals can be written in terms of a basis of scalar one-loop integrals as in Eq. (3), so we are in safe ground here -we can trust the results obtained here. In any case, we can also contemplate our results as an independent check of the helicity amplitude calculated in Ref. [86]; see also Ref. [90].
Using the method of maximal cuts, in the narrow-width approximation the coefficient of the scalar box function may be calculated from the formula (33) The above on-shell tree amplitudes involve one photon and two W bosons. The latter carry explicit SU (2) little-group indices associated with massive spinors. A review of the formalism designed to deal with massive particles can be found in the Appendix. In addition, S is the solution set for the four delta functions of the cut propagators: where we took all external momenta incoming. Notice that the cut conditions imply that Let us calculate the 3-particle amplitude that appears above. Feynman rules will tell us that Let us first choose s = +. Resorting to a bold notation for massive spinors, one finds where we used the Schouten identities and the following relations (which follow from the Schouten identities) [91] r|k 1 |1 k 1 k 2 = M rk 1 1k 2 + rk 2 1k 1 With an almost identical calculation for the s = − case, one finds that For concreteness, let us choose a specific set of helicities for the external photons. Now the cut reads (sum over repeated SU (2) little-group indices is implicit) where we used the on-shell conditions, symmetrization of SU (2) little-group indices and also that |λ I α λ I | β = −M δ α β and λ I λ J = M δ I J . In addition, we have chosen r 1 = p 2 , r 2 = p 1 , r 3 = p 4 and r 4 = p 3 and, as usual, s ij = (p i + p j ) 2 . The convention we use here is the following: We have so far calculated the coefficient associated with the scalar box integral. In order to calculate the coefficients of triangles, bubbles and tadpoles, one must resort to lower-order cuts. For instance, a triple-cut reads The other possible three-particle cut diagrams are obtained from this one by cyclic relabeling of the external particles. The cut conditions are given by Notice that these imply that · p 1 = 0. One finds that where we have chosen r 1 = 2 and r 2 = 1 and we used the cut conditions. As for the two-particle cut, we find As above, the other possible two-particle cut diagrams are obtained from this one by cyclic relabeling of the external particles. The cut conditions are given by One finds that where we used momentum conservation and the cut conditions. Now let us discuss our results. Concerning the triple cut, there are two possible integrals that can contribute, namely the box integral and the triangle integral. However, our result shows the presence of one uncut propagator. So this would exclude triangle integrals from the expansion. To confirm this, let us analyze the 2-particle cut. Again box and triangle integrals contribute, and now also bubble integrals can contribute. Nevertheless, our result shows the presence of two uncut propagators. This confirms the exclusion of triangle integrals from the expansion, and also states the absence of bubble integrals. We can perform a single cut to confirm that there will remain three uncut propagators in the result. Hence, the final answer is that only the box integral is present. So finally we can write where Perm. indicates permutations of external particles, R comprise rational terms and with, as already quoted, Γ M . We can think of the presence of Γ in the above equation as a consequence of the fact that, for unstable particles, we should use a resummed form for its propagator. In any case, one should bear in mind that, as we are considering the decay width to be very small, one must envisage I 4 (p 1 , p 2 , p 3 , p 4 ) in the limit Γ → 0, which ought to be taken at the end of the calculations. Otherwise, one can prove that the coefficient of the box will also display a finite Γ-dependence which is not captured by the unitarity method. As well known, the inclusion of the appropriate dependence on Γ both in the propagators as well as in the corresponding coefficients of the integral is mandatory to ensure the correct gauge cancellations. Nevertheless, as we are considering in this calculation a situation which is dominated by production of on-shell unstable particles with a vanishingly small decay width, finite-width effects are negligible as long as the required precision is taken to be small in comparison with Γ/M 5 .
Rational terms are not detected by unitarity cuts. Hence the above result have potential ambiguities in rational functions. In order to remove such ambiguities one may consider dimensionally regularized representations for the tree amplitudes. This means considering d-dimensional cuts, with d = (4 − 2 ). Photons live in four dimensions, whereas the loop momentum is d-dimensional. In this case one has to be careful when dealing with the summation over states.
A crucial component of generalized unitarity cuts is the sum over physical states. One must be careful in the sum over the physical states of gauge bosons in d dimensions [93]. It is given by the so-called physical state projector: where the ellipsis stand for terms depending on an arbitrary null reference momentum (for massless particles) or on the mass of the particle (for massive particles). In the present case, we will only be concerned with the maximal cut since we already know that only the box integral is present. Here we simply adopt the four-dimensional helicity scheme [94,95] in which all internal and external states (and also polarization vectors) are four-dimensional and loop momentum and phase-space integrals are in d = 4 − 2 dimensions. There are no remaining ambiguities to be considered in our case as the amplitude under consideration vanishes at tree level and there are no ultraviolet divergences.
In general, in the evaluation of the quadruple-cut, we have to discriminate between the dimension of loop momenta and the dimension of the space of physical states; in other words, we should envisage any factor of D emerging from contracting Lorentz indices (δ µ µ = D) as a different quantity in comparison with the dimension d of the loop momenta for which we take d = 4 − 2 . In the limit that D → d one should obtain the same result as before, except that the mass has undergone the shift M 2 → M 2 + µ 2 , where µ α is a vector associated with the (−2 )−dimensional part of the loop momentum. This means that we should use the following modified cut conditions The final result is given by where [48,95,96] and hence We recall that for a complete removal of the ambiguity associated with the rational terms, additional procedures should be carried out [95]; however, such procedures are trivial in the present case since the associated tree-level amplitude vanishes and there are no ultraviolet divergences. Scalar box integrals were explicitly calculated in Refs. [61,65,95]. Furthermore, as promised the helicity amplitude is free from UV divergences. Finally, by exploring the fact that −is 12 s 23 12 23 34 41 2 = 1 and taking into account the different permutations over the external photons, one can easily see that our result agrees perfectly with the ones given in the literature [86,90], apart from an overall phase factor (which is unimportant); one simply needs to be careful with the different conventions on external momenta.
Lee-Wick QED
Now we will discuss a simple example coming from higher-derivative QED. The Lagrangian for the gauge sector reads [79] As well known, Lee-Wick Lagrangians can be rewritten by introducing auxiliary gauge bosons with a very large mass M , very much larger than any other particle masses in our problem. As extensively discussed elsewhere, the coupling of these auxiliary massive gauge bosons to light fields makes them decay, and positive energy is required to excite this resonance [45,97]. Furthermore, this resonance has a "backwards in time" feature in that the propagator has the approximate form (close to the resonance) where we have suppressed Lorentz indices. Notice that there are two minus sign differences from a normal resonance, the −i in the numerator and the −iγ in the denominator. These combined sign differences lead to the distinguishing property of a time-reversed version of a usual unstable particle propagator. This unusual resonance was dubbed a Merlin mode in Refs. [45,98]. There are evidences that point to the stability of theories containing Merlin modes [45,97].
Here we are interested in using the unitarity method to calculate a scattering amplitude involving Merlin particles circulating in the loop. For simplicity, we will consider the narrow-width approximation, Γ M , where Γ is the width of the Merlin particle. The process we have in mind is the muon-electron scattering e + e − → µ + µ − at next-to-leading order, which is one of the simplest in QED processes, but a crucial one in the comprehension of all reactions in e + e − colliders [99]. The calculations with photons running in the loop were carried out in a number of places, see for instance Refs. [100][101][102][103][104][105][106][107][108]. Here we wish to consider solely the one-loop box diagram depicted in Fig. 2. We show how to calculate the coefficient associated with scalar boxes when internal gauge lines are associated with Merlin propagators.
For simplicity, consider the high-energy scattering limit in which the fermions are massless. The 3-particle amplitude involving a fermion, an anti-fermion and a Merlin particle reads (all momenta incoming) where we used that η µν σ αα µ σ ββ ν = 2ε αβ εαβ. Notice that both fermions need to have opposite helicity to give a non-vanishing result. Since p|γ µ |q = q|γ µ |p , one also obtains The 3-point amplitudes involving fermions and a Merlin particle can be computed similarly.
In order to calculate the associated contribution to the coefficient of the scalar box integral, we will resort to the maximal-cut technique, which in the present case means evaluating a quadruple cut. We choose the associated helicities to be h e − = h µ − = −1/2 and h e + = h µ + = +1/2. We have that with the following cut conditions: where p 1 , p 1 (p 2 , p 2 ) are the momenta associated with external µ − and µ + (e − and e + ), respectively. Using the 3-particle amplitudes derived above, we find that As a simple check, one can easily prove that we obtain the same result by resorting to the standard evaluation in terms of polarization sums: where we used the Fierz identity and momentum conservation at each vertex. So we can write that for the contribution coming from Merlin particles running inside the loop, and now Observe the change in the overall sign of the Merlin propagators -as well as in their imaginary parts -in comparison with the W -boson propagators discussed in the previous subsection. Moreover, the presence of Γ can be understood along the same lines as in the normal case -it is important for defining the contour associated with the loop integration but it cannot appear in the final answer obtained after performing the loop integral. Indeed, integrals associated with the Merlin propagators have to be evaluated using the Lee-Wick prescription for integration in the complex 0 plane so that the Wick rotation remains well defined [76][77][78][79][80].
Non-local theories
Previously we have claimed that we can work only with stable particles at the expense of locality. That is, when one is off resonance, a way to deal with the problem of unstable particles is to eliminate them altogether and as a consequence we introduce a non-local description of the problem, albeit one containing only stable modes. Let us briefly discuss this method for the case of light-light scattering. A similar reasoning can also be used for the case of Lee-Wick theories. Since what one has is a non-local interaction, we will discuss the process γγ → γγ within a non-local effective theory. We consider a variant of the theory described in Ref. [109], i.e., we will consider a nonlocal scalar QED. This effective context should account for the issues that one will face when dealing with unstable particles off resonance. The non-local interaction between complex scalars and photons is described by the following Lagrangian density where the non-local coefficient Σ(x − y), which plays the role of a scalar self-energy term, is assumed to be a function of the scalar invariant (x − y) 2 . In addition, the path-ordered exponential U (x, y) is defined as where dω µ is the element of integration along a path connecting points x and y. The path ordering in the definition of U (x, y) is necessary to maintain the gauge-transformation property of U (x, y) for the non-Abelian case. In any case, the path ordering is not required for the photon case [110]. There are also further conditions that should be imposed on the path [109]. The non-local gauge-boson-scalar-scalar vertex can be derived in the standard way and the result is (assuming that the non-local coefficient has analyticity properties resembling standard self-energy functions) where Σ(p 2 ) being the Fourier transform of the self-energy Σ. It is easy to verify that, for the full vertex (containing also the local part, which is not of interest to us here), the Ward identity for dressed scalar propagators is respected [109].
As discussed above, any poles coming from non-local vertices should have zero residues. It is easy to see that this is the case. Furthermore, notice also that This implies the following three-particle amplitudes involving two complex scalars and one photon: q|p|ξ qξ (69) where m is the mass of the scalars. Observe that this is a gauge-invariant amplitude. Since the other gauge interactions are determined from the condition of off-shell gauge invariance, they should not comprise any new on-shell information.
Hence calculation of higher-point tree-level amplitudes may proceed via the usual BCFW recursion relations 6 . The one-loop contribution to the process γγ → γγ proceeding through scalar loops with the above non-local interaction can be calculated in the same way as in the previous case with the W boson. For instance, for the all-plus helicity amplitude one finds that where an extra factor of two was taken into account due to the fact that there is a complex scalar propagating in the loop.
SUMMARY
Here we have discussed the use of unitarity methods in field theories containing resonances. We have shown, through the detailed assessment of three physical situations, how the technique can still be put in practice to such theories at one-loop. Our purpose was to provide an one-loop proof of the validity of unitarity-based methods for unstable particles as a consequence of unitarity itself. That is, generically speaking, if unitarity is satisfied by the inclusion of only stable states in unitarity sums, this implies that, for the unitarity method, one must sum over only the asymptotic states of the theory in Eq. (2). In the complex-mass scheme, this means that, to leading order and close to the resonance region, the cut of the unstable propagator proceeds through the cut of the loop of stable particles. On the other hand, in the NWA, the cut taken through the unstable particle (setting its width to zero) recovers the same result as a cut through the stable decay products. This is the basic requirement for a four-dimensional amplitude involving unstable particles to have cut-constructible parts. Without it, unitarity cuts could not enable one to establish a relationship between the pole structure of the integrand and the branch-cut structure of the loop integral.
Our aim here was not to devise a complete account of all the aspects of the method for unstable particles. Indeed, even though the proof of applicability of unitarity-based methods might be extended to higher orders in perturbation theory, in the present study we have limited ourselves to one-loop order. This is because unitarity cuts provide useful information that can be used in the most efficient way when a complete basis of integrals is known, and this is somewhat straightforward at one loop; the set of master integrals necessary to perform the reduction of generic one-loop tensor integrals is well known, as discussed above. On the other hand, at leading order and close to the resonance region, the CMS cut propagator of the unstable particle turns into a nascent delta function, reproducing the stable particle result, and hence we are allowed to associate the outcome with physical particles carrying positive energy. Such observations allows one to prove Eq. (2) in a straightforward way. In turn, through the examination of simple one-loop examples, we demonstrated explicitly how powerful the method still is when constructing oneloop amplitudes with unstable particles. Further work is recommended to better understand the application of such methods to these theories. One should establish the validity of the method to higher loops: In principle, the proof to higher orders proceeds in much the same way -however, in this case other methods (such as integration-by-parts techniques) should also be employed as integral reduction of the loop integral in terms of a set of master integrals is no longer straightforward and the cuts of internal propagators become more involved.
We have tried to fill this aforementioned gap in the literature of unitarity methods with this primary exploration, and we believe that our study can be useful in the investigations of the Standard Model Effective Field theory (SMEFT) [111], or in the Higgs Effective Field Theory (HEFT) [112]. Indeed, the calculation of loop amplitudes of massive on-shell SMEFT amplitudes focusing on the electroweak sector will obviously involve internal resonances, and an understanding of the unitarity method as a framework to tackle this computation would be most welcome. On the other hand, loop amplitudes of higher-derivative theories also contains resonances, the Merlin modes, and now a careful treatment of those within unitarity methods is available. We believe this will have an important impact on the evaluation of amplitudes of quadratic gravity, a promising conservative ultraviolet completion of quantum gravity. This would indeed be interesting to investigate, and we hope to explore this calculation in subsequent works [113,114].
The gamma matrices in the Weyl basis read where σ µ = (1 σ) andσ µ = (1 − σ). Let us briefly discuss the formalism for massive particles that we used in this work [91,[115][116][117][118]. This is obtained by noting that det p αα = m 2 in the massive case and now p αα has rank 2. So it can be written as the sum of two rank-one matrices: The index I = 1, 2 indicates a doublet of the SU (2) little group. Since det p αα = det λ detλ = m 2 , we simply take det λ = detλ = m. Just like spinor indices, the little group indices are raised and lowered by the SU (2)-invariant tensor ε IJ , ε IJ . It implies that p αα = λ α Iλα I = −λ α Iλα I = |p I p I | pα α = −λ Iα λ α I =λα I λ I α = −|p I p I |.
Comparing this with the usual Dirac equations of motion one is led to the natural identifications for the Dirac spinors: and similarly for the conjugate spinorsū I (p) = −λ α Iλα I v I (p) = λ α Iλα I .
Since the massive spinor bilinears satisfy λ I λ J = mδ I J , λ I λ J = −mε IJ , λ I λ J = mε IJ λ Iλ J = −mδ I J , λ IλJ = mε IJ , λ IλJ = −mε IJ λ I α λ I β = |λ I λ I | = −mδ α β ,λ Iαλβ I = |λ I λ I | = mδβ α it is easy to see that the Dirac spinors obey the usual spin sums. Let us introduce a bold notation to indicate symmetric compositions of the SU (2) little-group indices of massive spinors. One has that p αα = |p p| pα α = −|p p| (86) and the Dirac equation can be rewritten as In addition: We can alternatively write the massive momentum as where k 2 = q 2 = 0 and p 2 = 2k · q = kq qk = m 2 . In terms of bispinors: One has the following identifications The polarization vector of a massive vector boson of momentum p and mass m is given by or, in terms of bispinors: There is an implicit symmetrization on SU (2) indices. These polarizations correspond to transverse and longitudinal modes: | 12,664.4 | 2021-11-22T00:00:00.000 | [
"Physics"
] |
Estimation of Discriminative Feature Subset Using Community Modularity
Feature selection (FS) is an important preprocessing step in machine learning and data mining. In this paper, a new feature subset evaluation method is proposed by constructing a sample graph (SG) in different k-features and applying community modularity to select highly informative features as a group. However, these features may not be relevant as an individual. Furthermore, relevant in-dependency rather than irrelevant redundancy among the selected features is effectively measured with the community modularity Q value of the sample graph in the k-features. An efficient FS method called k-features sample graph feature selection is presented. A key property of this approach is that the discriminative cues of a feature subset with the maximum relevant in-dependency among features can be accurately determined. This community modularity-based method is then verified with the theory of k-means cluster. Compared with other state-of-the-art methods, the proposed approach is more effective, as verified by the results of several experiments.
is NP-hard 19 . To avoid the combinatorial search problem to find an optimal subset, variable selection methods are employed. The most popular of these methods mainly include forward 20 , backward 21 , and floating sequential schemes 22 , which adopt a heuristic search procedure to provide a sub-optimal solution.
In the subset evaluation method, evaluation of the relevance of a feature subset, including relevance and redundancy in a feature subset, is important in multivariate methods; however, this task is difficult in practice. Relevance evaluation methods based on mutual information (MI) have become popular recently [23][24][25][26][27][28] . However, these algorithms approximately estimate the discriminative power of a feature subset because loss of intrinsic information in raw data can occur while estimating the probability distribution of a feature vector by the discretization of a feature variable 27,28 .
A good feature subset should contain features that are highly correlated with the class but uncorrelated with one another 29 . In other words, in a good feature subset, the samples in different classes can be separated well; that is, the within-class distance in samples is small and between-classes distance is large. Therefore, if the samples are shown in a graph (also referred to as a complex network), the graph should exhibit obvious community structures 30 and a high community modularity Q value 31,32 . Thus, the community modularity Q value can be utilized to evaluate the relevance of a feature subset with regard to the class. In this paper, a novel method is proposed to address the feature subset relevance evaluation problem by introducing a new evaluation criterion based on community modularity. The method accurately assesses the relevance independency of a feature subset by constructing a sample graph in different k-features. To the best of our knowledge, this work is the first to employ community modularity in feature subset relevance evaluation. The proposed method indiscriminately selects relevant features through the forward search strategy. This method not only selects relevant features as a group and eliminates redundant features but also attempts to retain intrinsic interdependent feature groups. The effectiveness of the method is validated through experiments on many publicly available datasets. Experimental results confirm that the proposed method exhibits improved FS and classification accuracy. The discriminative capacity of the selected feature subset is significantly superior to that of other methods.
Related Work
FS has elicited increasing attention in the last few years. In the early stage, individual evaluation methods were more popular, such as those in [7][8][9][10] , which measure the discriminate ability of each feature according to a related evaluation criterion. Based on class information, these methods belong to the supervised FS algorithm. An unsupervised feature ranking algorithm has also been proposed; this algorithm considers not only the variance of each feature but also the locality preserving ability, such as the Laplacian score 33 .
A known limitation of individual evaluation methods is that the feature subset selected by these methods may contain redundancy 15,34 , which degrades the subsequent learning process. Thus, several subset evaluation-based filter methods, such as those in 17,29,[35][36][37] , have been proposed to reduce redundancy during FS.
MI is gaining popularity because of its capability to provide an appropriate means of measuring the mutual dependence of two variables; it has been widely utilized to develop information theoretic-based FS criteria, such as MIFS 23,38 , CMIM 39 , CMIF 24 , MIFS-U 25 , mrmr 27 , NMIFS 28 , and FCBF 40 . MI is calculated with a Parzen window 41 , which is less computationally demanding and provides better estimation. The Parzen window method is a non-parametric method to estimate densities. It involves placing a kernel function on top of each sample and evaluating density as the sum of the kernels. The author in 42 pointed out that common heuristics for information-based FS (including Markov Blanket algorithms 43 as a special case) approximately and iteratively maximize the conditional likelihood. The author presented a unifying framework for information theoretic-based FS, bringing almost two decades of research on heuristic filter criteria under a single theoretical interpretation. Analysis of the redundancy among selected features is performed by computing the relevant redundancy between the features and the target. However, MI-based FS methods have been criticized for their limitations. First, loss of intrinsic information in raw data could occur because the probability distribution of the feature vector is estimated by the discretization of the feature variable. The second limitation is that these methods only select relevant features as an individual and disregard these informative features as a group 44 . Several researchers have also found that combining optimal features as an individual does not provide excellent classification performance 45 .
Graph-based methods, such as the Laplacian score 33 and improved Laplacian score-based FS methods [46][47][48][49] , have been widely applied to feature learning because these approaches can evaluate the similarity among data. Generally, the graph-based method includes two phases. First, a graph is constructed in which each node corresponds to each feature, and each edge has a weight based on a criterion between features. Second, several clustering methods are implemented to select a highly coherent set of features 50 . Optimization-based FS algorithms are preferred by many researchers. R. Tibshirani 51 proposed a new method called "lasso" for estimation in linear models. Based on graphical lasso (GL), a new multilink, single-task approach that combines GL with neural network (NN) was proposed to forecast traffic flow 52 .
Statistical methods have been widely applied to FS. Two popular feature ranking measures are t-test 53 and F-statistics 54 . Well known statistic-based feature selection algorithms include χ 2 -statistic 55 , odds ratio 56 , bi-normal separation 57 , improved Gini index 58 , measure using Poisson distribution 59 , and ambiguity measure 60 . Most of these methods calculate a score based on the probability or frequency of each feature in bag-of-words to rank features according to a feature's score; the top features are selected. Yan Wang 61 introduced the concept of feature forest and proposed feature forest-based FS algorithm.
Results
Experiments on artificial datasets, including binary class and multi-class datasets, were conducted to test the proposed approach. The proposed approach was also compared with several popular FS algorithms, including MIFS_U, mrmr, CMIM, Fisher, Laplacian score 33 , RELIEF 62 , Simba-sig 63 , and Greedy Feature Flip (G-Flip-sig) 63 . Off-the-shelf codes 42 To evaluate the effectiveness of the proposed method, the nearest neighborhood classifier (1NN) with Euclidean distance and support vector machine (SVM) 64 using the radial basis function and the penalty parameter c = 100 were employed to test the performance of the FS algorithms. We utilized the LIBSVM package 65 for SVM classification. All experiments were conducted on a PC with Intel(R) Core(TM) i3-2310<EMAIL_ADDRESS>GHz and 2G main memory.
Datasets and preprocessing.
To verify the effectiveness of the proposed method, six continuous datasets from the LIBSVM datasets 65 , two cancer microarray datasets, and two discrete datasets from UCI were utilized in the simulation experiments. All the features in the datasets, except discrete features, were uniformly scaled to zero mean and unit variance. The details of the 10 datasets are shown in Table 1.
Feature selection and classification results. Classification performance was utilized to validate the FS method, and tenfold cross validation was employed to avoid the over-fitting problem. To reduce unintentional effects, all the experimental results are the average of 10 independent runs. In comparing the different methods, the feature subset was produced by picking the top s selected features to access each method in terms of classification accuracy (s = 1, ..., P). We discretized continuous features to nine discrete levels as performed in 66,67 by converting the feature values between μ − σ/2 and μ + σ/2 to 0, the four intervals of size σ to the right of μ + σ/2 to discrete levels from 1 to 4, and the four intervals of size σ to the left of μ − σ/2 to discrete levels from − 1 to − 4. Extremely large positive or small negative feature values were truncated and discretized to ± 4 appropriately. Table 2 indicates the average classification accuracy of both 1NN and SVM classifiers at different s. A bold value indicates the best among the FS methods under the same classifier and the same number of selected features. To avoid the influence of data scarcity, the average value of accuracy at different s for all datasets in the same selector is shown in the bottom line of Table 2 (Avg.). The results in Table 2 indicate that the proposed method (k-FSGFS) exhibits the best average performance compared with other methods in both classifiers. The Avg. values are 83.65% and 83.97% in 1NN and SVM classifiers, respectively. These values are higher than those of the other methods. CMIM is superior to mrmr and MIFS_U. Figures 1 and 2 show the performance of SVM and 1NN at different s of selected features for six datasets, namely, Sonar, Glass, Svmguide4, Segment, DLBCL_A, and Lung-cancer. The six datasets were selected because they cover a diverse range of characteristics, including continuous and discrete data, in terms of the number of features and number of examples. Figures 1 and 2 show that the proposed method (k-FSGFS) outperforms the other methods. In most cases, the average accuracy of the two classifiers is significantly higher than that of other selectors. High classification accuracy is commonly achieved with minimal selected features, which indicates that our evaluation criterion based on community modularity Q not only selects the most informative features but also provides the solution of relevant independency among selected features. The proposed method can evaluate the discriminatory power of a feature subset.
Additionally, the proposed approach was compared with other popular FS methods, including Laplacian score 33 , Relief 62 , Simba-sig 63 , and Greedy Feature Flip (G-Flip-sig) 63 . Relief 62 , Simba-sig 63 , and G-Flip-sig 63 are margin-based FS or feature weighting methods, in which a large nearest neighbor hypothesis margin ensures a large sample margin. Thus, these algorithms find a feature weight vector to minimize the upper bound of the leave-one-out cross-validation error of a nearest-neighbor classifier in the induced feature space. For fairness, only the 1NN classifier was utilized to evaluate the performance of the compared FS algorithms in all the datasets. Figure 3 shows that the proposed method is also superior or comparable to other methods in most cases. Particularly, the proposed method can achieve significantly higher classification accuracy in the first several features than the other methods in most cases. To verify, the classification accuracy results with the 1NN classifier at different selected features s (s = 2, 3, 4) for different methods are illustrated in Table 3. The table clearly indicates that our method significantly improves the classification results with fewer selected features. Thus, our method achieves optimal performance with an acceptable number of features.
To further confirm the effectiveness of this feature evaluation criterion, the decision boundary of the 1NN classifier in 2D feature spaces from the Wine database was used, as shown in Fig. 4(a-d). The indicated dimensions are the two best features selected by each method. The two features selected by k-FSGFS and CMIM are relatively informative (Fig. 4(d)) and help in effectively separating the sample data. Both Fish Score and mrmr selected the same top two features, as indicated in Fig. 4(a), and separated the samples better than MIFS_U in Table 2.
The capability of k-FSGFS to obtain the discriminatory attribute of a feature subset and the relevant independency among features is so effective that it can select these informative features with fewer redundancies. Thus, k-FSGFS performs better than other FS algorithms. For parameter K during the construction of k-FSG in our method, numerous experiments demonstrate that a value of K selected from 2 to 11 is effective for most datasets for either SVM or 1NN classifier. In this study, K was set to 2.
Statistical test.
The classification experiments demonstrated that the proposed framework outperforms the other FS algorithms. However, the results also indicate that k-FSGFS does not perform better than several algorithms in a number of cases. Therefore, paired sample one-tailed test was used to assess the statistical significance of the difference in accuracy. In this test, the null hypothesis states that the average accuracy of k-FSGFS at different numbers of subsets is not greater than that of the other FS algorithms in terms of classification. Meanwhile, the alternative hypothesis states that k-FSGFS is superior to other FS algorithms in terms of classification. For example, if the performance of k-FSGFS is to be compared with that of Fisher Score method (k-FSGFS vs. Fish Score), the null and alternative hypotheses can be defined respectively as follows: H 0 : μ k−FSGFS ≤ μ Fish_Score and H 1 : 5 indicate that regardless of whether 1NN or SVM is used, the p-values obtained by the pair-wise one-tailed t-test are substantially less than 0.05, which means that the proposed k-FSGFS significantly outperforms the other algorithms.
Justification of k-FSGFS based on K-means cluster. The justification of the proposed feature evaluation criterion based on community modularity was demonstrated by adopting the theory of K-means cluster to determine why k features with a higher Q value are more discriminative. The K-means cluster 68 is the most well-known clustering algorithm. It iteratively attempts to address the following objective: given a set of points in a Euclidean space and a positive integer c (the number of clusters), the points are split into c clusters to minimize the total sum of the Euclidean distances of each point to its nearest cluster center, which can be defined as follows: where x i and µ c t are the i-th sample point and its nearest cluster center, respectively, and ⋅ 2 is the L 2 -norm. In the feature weighting K-means, the feature that minimizes within-cluster distance and maximizes between-cluster distance is preferred, thus obtaining higher weight 56 . Confirming whether the features with a high community modularity Q value in our method can minimize within-cluster distance and maximize between-cluster distance is necessary.
According to Equation (7) exhibits a large inner-degree d in (small out-degree d out ), and the sample points in the k-features space with the same labels can be correctly classified as many as possible into the same class and as few as possible into different classes while these k features are good features as a group. The expected number of sample points in the k-features space that are correctly classified can be calculated through Neighborhood components analysis 69 .
Given the selected feature subset S and candidate features f, each sample point i in S ∪ f feature space selects another sample point j as its neighbor with probability P ij . P ij can be defined by a soft max over Euclidean distances as follows: Under this stochastic selection rule, we can compute the probability P i that point i will be correctly classified (denote the set of points in the same class as i by C t = { j|c t = c j }).
Hence, the expected number of sample points in the S ∪ f space correctly (ENC) classified into the same class is defined by Feature f with larger ENC is more discriminative. According to Eqs. 2 to 4, maximizing ENC is mutually equivalent to minimizing the K-means cluster objective J(c, μ).
c is the number of clusters. The lower bound of ENC( f ∪ S) is defined by ENC L_bound. ENC( f ∪ S) can be maximized simultaneously by maximizing its lower bound ENC L_bound and equivalently , which denotes that lower bound ENC L_bound has been maximized. ENC(f ∪ S) obtains the maximum value when the K-means objective (Eq. 1) is optimized for the minimum.
is equivalent to minimize while maximizing the ENC(f ∪ S), and because Hence, k-means cluster function J(c, μ) is min- μ) in the S ∪ f space must be minimized when the community modularity Q value of SG in S ∪ f space obtains a high value, which indicates that the features selected by the proposed method can minimize within-cluster distance. Similarly, the expected number of points incorrectly classified is defined by where n is the number of samples. A small ENIC(f ∪ S) results in a few edges between communities and large between-cluster distance. The feature subset with a high Q value is highly relevant, which not only minimizes within-cluster distance but also maximizes between-cluster distance.
Discussion
In this study, a novel feature subset evaluation criterion using the community modularity Q value by constructing k-features sample graphs (k-FSGs) is presented to measure the relevance of the feature subset with target variable C. To address the redundancy problem of ranking in filter methods, the sample graph in k-features that captures the relevant independency among feature subsets is utilized rather than the conditional MI criteria. By combining the two points above, a new FS method, namely, k-FSGFS, is developed for feature subset selection. The method effectively retains as many interdependent groups as possible during FS. The proposed k-FSGFS works well and outperforms other methods in most cases. The method remarkably or comparatively improves FS and classification accuracy with a small feature subset, which demonstrates the ability of the proposed method to select a discriminative feature subset. The experimental results also verify that interdependent groups commonly exist in the real dataset and play an important role in classification. Unlike the other methods used for comparison, the proposed method accurately evaluates the discriminative power of a feature subset as a group. The Fisher method, which is an individual evaluation criterion, cannot eliminate the redundancy in a feature subset, thereby reducing classification performance. The experiment results for the Fisher method verify this finding. The MI-based methods, such as mrmr, MIFS_U, and CMIM, consider the relevance and redundancy among feature subsets as a group and are superior to the Fisher method. However, these MI-based methods can only approximately estimate the relevance and redundancy in a feature subset (such as considering all the redundancy between pair-wise features to estimate the redundancy among a feature subset as a group in mrmr method) because of the difficulties in accurately computing the probability density function. The results in Table 2 and Figs 1 to 2 indicate that mrmr, MIFS_U, and CMIM methods perform better than the Fisher method but worse than the proposed method.
From the mentioned above, our method perform better than MI-based methods in most cases. In our method, larger inter-class distance implies that the local margin of any sample should be large enough. By the large margin theory 70 , the upper bound of the leave-one-out cross-validation error of a nearest-neighbor classifier in the feature space is minimized and usually generalizes well on unseen test data 70,71 . However, traditional mutual information based relevance evaluation between feature and class can not accurately measure the discriminative power of a feature. In order to better illustrate this, for simplicity, the features f 1 According to MI-based methods, the feature f 1 has the same relevancy as f 2. In our method, the feature f 2 has more discriminative power than f 1 because the community modularity Q in feature f 2 is larger than feature f 1 .
Intuitively, feature f 2 should be more relevant than f 1 due to its between-class distance is larger than f 1 . However, the MI-based method can not capture the difference between f 1 and f 2. Therefore, our relevancy evaluation criterion based on community modularity Q is more efficient and accurate.
However, in practice, the proposed method is not always efficient for all types of datasets, such as imbalanced datasets, especially when a few samples in one class are compared with other classes. For example, in the dataset Lung-cancer, our method performs worse than simba-sig and G-flip-sig. Because, modularity optimization is widely criticized for its resolution limit 72 illustrated in Fig. 5, which may prevent the approach from detecting clusters. The clusters are comparatively small with respect to the graph as a whole, which results in maximum modularity Q not corresponding to a good community structure, that is, features with a high Q value may be irrelevant. The KNN searching needs to be conducted iteratively in our method, thus, the efficiency of our method is low for larger data amounts in real applications with regard to time complexity. Our future work will focus on resolving these problems.
Methods
In this paper, a new feature evaluation criterion based on the community modularity Q value is proposed to evaluate the class-dependent correlation 73 of features as a group instead of identifying the discriminatory power of a single feature. Detailed information on our method is presented in Algorithm 2. The innovations of our work mainly include the following points.
(1) The discriminatory power of features as a group can be evaluated exactly based on the community modularity Q value of sample graphs in k-features. (2) The proposed method can select features that have discriminatory power as a group but have weak power as an individual. (3) Relevant independency instead of irrelevant redundancy between features is measured using the community modularity Q value rather than information theory.
The proposed framework is presented in a flow diagram in Fig. 6.
Community modularity Q. The community structure in an undirected graph exhibits close connections within the community but sparse connections among various communities relatively 31,32 . Figure 7 shows a schematic example of a graph with three communities to demonstrate the community structure. Thus far, the most regarded quality function is the modularity of Newman and Girvan 32 . Modularity Q can be written as follows: in the same community and equal to zero otherwise. Another popular description of modularity Q can be written as follows: 23 assume that a high value of modularity indicates good partitions. In other words, the higher modularity Q is, the more significant the community structure is. Based on the definition of community, the within-class distance in a community is small and the between-class distance is large. Thus, if a graph has a clear community structure, the nodes in different communities can be locally and linearly separated easily, as shown in Fig. 7. The features that minimize within-cluster distance and maximize between-cluster distance are preferred and obtain a high weight. If the sample graph in k-features (k-FSG) has an apparent community structure, these k features will have strong discriminative power as a group because intra-class distance is small and inter-class distance large. This condition can be proven sequentially with the theory of K-means cluster.
Sample graph in k-features (k-FSG).
Given an m × n dataset matrix (m corresponding to samples and n corresponding to features), the sample graph in k-features (k-FSG) can be constructed as follows: an edge A(i, j) (A(i, j) = 1) exists between samples X i and X j if X i ∈ K − NN(X j ) or X j ∈ K − NN(X i ).where X i is the node i corresponding to the sample i, K − NN(X i ) is the K-neighborhood set of node i, and A is the adjacency matrix, which is symmetrical. K is the predefined parameter and does not have large values, which generally range within {3-11}.
The discussion above indicates that if k-FSG in k-features exhibits clear community structures corresponding to a large Q value, these k features are highly informative as a group. The algorithm of constructing k-FSG is shown as Algorithm 1. values will be selected in feature subset S. The procedure will not stop until the number of selected features satisfies |S| = P. To facilitate understanding of our evaluation scheme, we regard a UCI dataset, iris, as an example. The dataset consists of 150 samples and four features. The dataset is divided into three classes with 50 samples in each class. The iris dataset is processed with zero mean and unit variance according to 1-FSG in one feature. The 3rd feature with the highest Q value is the most informative as an individual. Given the 3rd feature, Fig. 8 illustrates the sample scatter points in 2-FSGs for the remaining features {1 2 4} in dataset iris. Three community modularity Q 3↔q values are shown in Table 6 (q = 1, 2, 4). Figure 8 clearly indicate that the 2-FSG in 3 ↔ 4 feature space exhibits more obvious community structures, and the sample points in different classes in 3 ↔ 4 features can be easily separated. The results in Table 6 show that the 2-FSG in 3 ↔ 4 feature space provides the largest community modularity Q value. Thus, the 4th feature has strong informative power combined with the 3rd feature. Given the 3rd and the 4th features, the 1st and the 2nd features can be selected according to the 3-FSGs and 4-FSGs, respectively. The selected feature subset in iris using our method is {3 4 1 2}, which is the selected features of most of the methods. In short, given selected feature subset S, feature f selected by our criterion can be defined as follows: where Q f∪S is the community modularity value of SG in features f ∪ S and F and S are the set of all features and selected feature subset, respectively.
Relevancy analysis.
Ranking-based filter methods cannot handle high redundancy among the selected features. To solve this problem, conditional MI (CMI) is applied in this study to obtain the relevant independency (RI) or relevant redundancy 74 instead of the irrelevant redundancy between features, as shown in Fig. 9. RI(f i , C; f j ) is now the amount of information features f i that can predict target variable C when feature f j is given; In this study, the discriminative capability of k features as a group was evaluated using the community modularity Q value of the constructed k-FSG. A high Q value of k-FSG denotes large RI among the k features as a group, and the sample points in different classes can be separated well. Thus, the community modularity Q value of k-FSG in k-features can accurately illustrate relevant independency RI(f i , C; S) in selected feature subset S. The community modularity Q value of k-FSG was utilized to measure relevant independency instead of MI theory. For verification, the iris dataset was used as an example. Different RI(f i , C; f 3 ) values were calculated, and the third 2 − FSG 3↔q 3 ↔ 4 3 ↔ 1 3 ↔ 2 Q 3↔q 0.6057 0.5719 0.5430 Table 6. The community modularity Q values of 2-FSG (k = 2) in different pairwise features in iris dataset. The more larger the community modularity is, the more relevant the pairwise features are. The features 3 and 4 as a group have more discriminative power. Table 7. The RI in different pairwise features in terms of the third feature in iris dataset. The larger RI states that the features 3 and 4 as a group have more discriminative power.
feature was selected (i = 1, 2, 4), as indicated in Table 7 The table clearly indicates that RI(f 4 , C; f 3 ) is the largest, which demonstrates that fourth feature f 4 can provide more informative information when the third feature is given. Similarly, the Q 3↔4 value in Table 6 is also the highest in Table 7, which demonstrates that the community modular Q value of k-FSG in k-features can replace MI to effectively evaluate the RI of feature subset S. Thus, our method can resolve relevant redundancy among selected features. CMI can be computed with the FEAST tool 42 .
Relevant independency RI(f i , C; S) between feature f i and selected feature set S was replaced by the community modularity Q value of SG in f i ∪ S, which can be defined as follows: ∪ = RI f C S Q ( , ; ): (9) i f S i A larger value of RI(f i , C; S) indicates that f i is highly independent with features in S but relevant in terms of target variable C and has strong informative power combined with features in S. These results indicate that our method can select these features with more relevancy as a group in terms of class and larger RI among selected features.
The details of k-FSGFS are presented in Algorithm 2. Algorithm 2: k-FSGFS: k-features sample graph based feature selection Time complexity of k-FSGFS. Algorithm 2 shows that k-FSGFS mainly includes two steps. The first step is to construct k-FSG in k-features space. The second step is to calculate the community modularity Q value of each k-FSG. The most time-consuming step is establishing k-FSG, whose time complexity is about ο(Pnm 2 ), where n is the number of features in feature space, m is the number of samples in the dataset, and P is the number of predefined selected features. Fortunately, fast K-nearest neighbor graph construction methods 75,76 can be applied to the construction of k-FSGs; such application would reduce the time complexity from ο(Pnm 2 ) to ο(Pnm 1.14 ). In the second step, the spending time is approximately ο(mlog m). Thus, the overall time cost of k-FSGFS is approximately ο ο + . Pnm m ( ) ( log m) 1 14 . | 7,070 | 2016-04-28T00:00:00.000 | [
"Computer Science"
] |
Demonstrating the Principles of Aperture Synthesis with TableTop Laboratory Exercises
Many undergraduate radio astronomy courses are unable to give a detailed treatment of aperture synthesis due to time constraints and limited math backgrounds of students. We have taken a laboratory-based approach to teaching radio interferometry using a set of college-level, table-top exercises. These are performed with the Very Small Radio Telescope (VSRT), an interferometer developed at the Haystack Observatory using satellite TV electronics as detectors and compact fluorescent light bulbs as microwave signal sources. The hands-on experience provided by the VSRT in these labs allows students to gain a conceptual understanding of radio interferometry and aperture synthesis without the rigorous mathematical background traditionally required. The data are quickly and easily processed using a user-friendly data analysis Java package, VSRTI\_Plotter.jar. This software can also be used in the absence of the equipment as an interactive computer activity to demonstrate an interferometer's responses to assorted surface brightness distributions. The students also gain some familiarity with Fourier transforms and an appreciation for the Fourier relations in interferometry using another Java package, the Tool for Interactive Fourier Transforms (TIFT). We have successfully used these tools in multiple offerings of our radio astronomy course at Union College
Introduction
The radio astronomical technique of aperture synthesis, in which high-resolution images are produced from interferometer arrays, has been a productive tool for astronomers for decades. From its initial development in the 1950s and early 1960s, aperture synthesis was quickly recognized as a significant advancement for science. Sir Martin Ryle shared the Nobel Prize for Physics in 1974 "for his observations and inventions, in particular of the aperture synthesis technique" [1]. The best resolution attained with single-dish radio telescopes has historically been of order 20 arcseconds, while that achieved with aperture synthesis can be four orders of magnitude better. With the subsequent construction of the (now renamed) Jansky Very Large Array 1 and continent-scale arrays for very long baseline interferometry, radio astronomers have been able to produce images with much higher angular resolution than has been possible at other wavelengths. Yet, because of the complexity of the math involved, aperture synthesis is often excluded from undergraduate curricula in physics and astronomy.
We introduce here a set of labs at the undergraduate level which provide hands-on experience with the basics of aperture synthesis observations and how the data reveal information about the size and structure of the observed sources. These labs are designed to give students a conceptual understanding of aperture synthesis without the need for the dense mathematical formalism that usually accompanies the topic in graduate-level classes.
The basic concept of aperture synthesis
Successful completion of the labs does not require that the students, or instructors, know the mathematical formalism. As such, in this section, we give a brief discussion of what an aperture synthesis observation entails, touching on only those aspects necessary to comprehend the principles demonstrated in these labs. Details about interferometry can be found in textbooks [2][3][4][5][6].
In aperture synthesis a number of antennas, arranged in a particular pattern, or "array," receive the radiation from a celestial source simultaneously and the signals are combined pairwise. The method of signal combination in most modern interferometers is a cross-correlation, but the signals can also be added. The response of each pair of antennas contains an amplitude and a phase which, customarily, are represented as a complex number. (The "amplitude" in a radio interferometer's output is called the "modulus" of a complex number in standard mathematical usage.) For a single point of emission, the amplitude is proportional to the flux density of the source while the phase is related to the difference in path lengths to the two antennas, as depicted in Figure 1. For a general source of arbitrary structure, the detected amplitude and phase are the complex superposition of the responses due to each unresolved point of emission within the field of view. In an aperture synthesis observation, the fully calibrated response of each pair of antennas, termed a "visibility," is a function of the separation of the antennas, generally referred to as the "baseline," b. If the source is not located along the mid-plane of the baseline, b is the component of the baseline perpendicular to the direction of the source.
The visibility for any particular antenna pair is most sensitive to source structure on an angular scale proportional to λ/b, where λ is the wavelength of the observation. This is a familiar concept from optics, in which the resolution of a telescope is proportional to λ/D, where D is the diameter of the objective lens or mirror. With an array of more than two antennas, the visibility for each pair is obtained and hence the visibility function can be probed over a wide range of baseline lengths and orientations. In this way, information about the source structure on multiple angular scales enables one to produce an image of the source. When written as complex values, the visibility function is related to the image of the source via a two-dimensional Fourier transform.
In the labs described here, the presentation of the important principles is simplified in two ways. First, since the Very Small Radio Telescope (VSRT) measures only the amplitude of the visibilities and not the phase, complex numbers can be avoided. Second, the labs are performed with all sources and antennas in the horizontal plane and so the analysis is reduced to one-dimension. Students are introduced to the visibility function by measuring the interferometer response as a function of antenna separation and discover the relationship between visibilities and simple source structure. By extrapolation, the students gain an appreciation of how information about source structure can be recovered from data obtained with many pairs of antennas.
The visibility as a function of baseline is analogous to the diffraction and interference patterns produced in the single and double slit experiments. Radiation from each point in the observed source is detected by each antenna and the different path lengths result in constructive or destructive interference depending on the point's position in the sky and the baseline geometry when the signals are combined. For a source of arbitrary brightness pattern, the resulting visibility function is the sum of the interferometer responses for all points of emission in the source. When written as complex values, the visibility function, with b/λ as the independent variable, is related to the image of the source, i.e. intensity as a function of angle, θ, on the sky, via a Fourier transform. In practice, the radio astronomer obtains an image of a source with software which performs an inverse Fourier transform on the visibility data.
Equipment
The laboratory exercises described herein use the VSRT (Very Small Radio Telescope), a laboratory interferometer developed by MIT Haystack Observatory. A description of the operation of the VSRT as an additive interferometer is provided online [7]. Composed of commercially-available electronics and satellite TV equipment, the VSRT is a low-cost instrument designed to demonstrate the principles of radio waves and interferometry in high school and college-level labs [8]. The VSRT is stable, reliable, and easy to assemble, operate and manipulate in the lab room. (Full details of the system can be found at the VSRT project web page [9].) The VSRT uses Ku-band satellite TV feeds to receive radiation near 12 GHz. The feeds can be installed in satellite dishes to increase the collecting area and directionality of the instrument, as is done for an experiment measuring the solar diameter [10]. Alternatively, the feeds can be used without the dishes to observe strong radio sources in the lab. The exercises discussed here use the instrument in this manner, with compact fluorescent light bulbs (CFLs) serving as radio sources (see Figure 2).
In addition to visible light, CFLs produce broadband radio emission from 100 MHz up to 100 GHz when the free electrons in the plasma collide with the glass walls of the bulb [11]. The CFLs can be hidden by an optically-opaque material that is transparent to radio waves (e.g., a cardboard box) to demonstrate that the feeds are sensitive to the bulbs' radio emission, not their visible light. Union College students working with the VSRT. Two "triple" DirecTV feeds, taken from TV satellite dishes, used here as radio antennas, receive and detect the radio emission from compact fluorescent light bulbs (CFLs). Only one feed of each triple is active. The CFLs can be moved to assorted separation distances. The output spectrum from the interferometer is displayed on the laptop screen, and the data files are recorded.
Laboratory Exercises
We now describe the labs, designed to help students develop an intuitive sense of how an array of antennas can be used to infer aspects of the spatial structure of radio sources. To facilitate data analysis, we have produced a package of Java programs, named "VSRTI_Plotter," available online [9, 12]. These programs also have links to lab instructions and can produce overlays of theoretical models with adjustable parameters. For the sake of simplicity the lab set-ups occur entirely in the horizontal plane so that the maps of the sources, as seen from the position of the feeds, are simple plots of intensity vs. position in the horizontal direction.
The primary beam
Astronomy students are usually familiar with the fact that the angular resolution of a single telescope, ignoring the atmosphere, is limited by diffraction and is approximately λ/D. With interferometers, the central diffraction peak of each individual telescope is known as the "primary beam." When an astronomical source is not at the center of the primary beam, the detected power is decreased. The primary beam size of the individual antennas of an array, therefore, places an effective upper limit on the maximum field of view in aperture synthesis.
In the first exercise, students measure the primary beam pattern of the VSRT feeds by placing the active feeds one above the other, making a baseline with zero horizontal length and placing a CFL two meters away at the mid-plane position. Keeping the feeds fixed, they record data with the CFL placed at various horizontal angles to the mid-plane. The data files can then be drag-and-dropped into the VSRTI_Plotter "Plot Beam" program, producing a graph of the data. The VSRTI_Plotter program can also overlay a theoretical beam plot, yielding a fitted value of the effective diameter of a feed (see Figure 3).
A single resolved source
The distance between the antennas (i.e., the baseline) determines the angular sizes of structure in the radio source that the interferometer is most sensitive to. In short, the baseline acts like a spatial filter. With longer baselines, the interferometer response becomes more dependent on finer scale structure in the source, while its sensitivity to extended structure decreases. Flux distributed over larger angles is said to be "resolved out." If the baseline is too long, it will not detect the source at all. The detected power of a resolved source, therefore, decreases with increasing baseline length. One can use the rate of fall-off of detected power to determine the angular size of the source.
In this lab, students use a single CFL located at the mid-plane position between the two feeds and then vary the horizontal separation between the feeds. They produce a plot of the measured power versus the baseline length, finding that the power decreases as the baseline length increases. In this way students become acquainted with the visibility function, in which the independent variable is baseline length divided by wavelength.
Students are instructed to obscure the bottom half of the CFL using a metal plate. They find that the power decreases by about half and verify that this holds at a variety of baseline lengths. Next, they place two metal plates vertically obscuring the sides of the CFL in order to make a source whose apparent size is smaller. They find that the measured power at short baseline lengths is smaller than for an unobscured CFL, but that the power at long baseline lengths is actually greater. Figure 4 displays results of this lab. The students discover that the rate of decrease of the measured power is inversely related to the angular size of the source.
Double Sources
When the radio source contains two components separated by an angle resolved by the interferometer, the detected power varies with baseline in an oscillatory manner. In this exercise, students discover this by using two CFLs with a fixed separation between them. The students move the VSRT feeds to larger separations and discover that the measured power oscillates with baseline length. The students then move the two CFLs farther apart and repeat their measurements, discovering a reciprocal relationship between the angular separation of the two sources and the distance between minima in the visibility function ( Figure 5).
Mystery Source
After completing the exercises discussed in Sections 4.2 and 4.3, the students' new abilities to infer the angular separations and angular sizes from visibilities are tested by placing a cardboard box over a pair of CFLs. After obtaining their data and inferring the source structure, the students can check their answer by lifting the box.
A slight increase in complexity of source structure and a challenge to the students' analytical abilities can then be demonstrated with an exercise using three CFLs hidden under a cardboard box. The instructor sets the CFLs with equal spacings, so that CFLs 1 and 2 and CFLs 2 and 3 create two pairs separated by angle θ and CFLs 1 and 3 make one pair with a separation of 2θ. The visibility data then show the sum of two sine waves -one with a larger amplitude and period = 1/θ and the other with a smaller amplitude and half the period ( Figure 6). The students are not informed that there are three CFLs in the box, and are asked to infer, working as team, the source structure considering the principles they learned in the labs.
Fourier Transform
In actual aperture synthesis observations one obtains an image of the source by performing a Fourier transform on the complex visibilities. Since the VSRT data contain only amplitudes, and no phases, Figure 5. The visibilities as a function of baseline length for a pair of CFLs. For the left plot, the CFLs were separated by 30 cm and at a distance of 2 m, while for the right plot the CFLs were separated by 40 cm. Students discover that the detected signal for a double source has an oscillating dependence on baseline length and that the period of oscillation is inversely dependent on the separation of the sources. The overall decrease in power with baseline is due to the resolving of each CFL, as seen in Figure 4. (The difference in the visibility values between the graphs results because different lamps were used in the two experiments.) Figure 6. The detected power as a function of baseline length for a "mystery source" hidden inside a box 2 meters away. The box contained three CFLs with equal separations of 20 cm between the middle and end CFLs, yielding two pairs with angular separations of 0.1 radians and one pair with a separation of 0.2 radians.
we have provided another java package which students can use to discover the relation between Fourier function pairs. They find that this relation is identical to that between the visibility function and source structure. Using the "Tool for Interactive Fourier Transforms" (or TIFT), also available online [12], students use a simple click and drag operation to make functions representing the brightness distributions of the CFLs in the previous labs. In a companion plot, they discover that the In the "f(t)-Magnitude" window, two signals of width 0.03 s and separated by 0.2 are drawn. The "F(ν)-Magnitude" window shows that the Fourier transform oscillates with a period equal to 5 and a decreasing envelope and is similar to the visibility function found in the lab observing double sources (see Figure 5).
Solar Diameter
As an exercise using the VSRT to make a measurement of an actual celestial source, the VSRT can also be used to determine the diameter of the Sun. This involves a slightly more complicated set-up but also provides a practical culmination of the VSRT labs [10]. The results of this experiment are shown in Figure 8.
Practical Experience in the Classroom
Starting in 2012, these labs have been incorporated into the radio astronomy class at Union College. The class consists of seniors, juniors and sophomores with majors in either physics or engineering. The coverage of aperture synthesis in the course is accomplished using only the labs; no lectures are provided. The effectiveness of the labs was tested in the first course they were used. Both before and after performing the labs, the students were given a quiz on the basics of aperture synthesis, containing the following questions: 1. Explain conceptually how receiving signals with a number of radio telescopes, as with the VLA, contains information about the image of a radio source. 2. What is an "array" of telescopes and what are the important criteria in designing the array? 3. What is the "Visibility function?" 4. Describe what a 1-dimensional visibility function looks like when observing: a) a single, unresolved source. b) a single, resolved source. How does the shape of the visibility function change as the angular size of the source increases? c) a pair of unresolved sources. How does the shape of the the visibility function change as the separation of the two sources increases?
The average score on the quiz before performing the labs was 0.3 points out of 18 (or 1.5%), demonstrating that none of the students had any prior knowledge about aperture synthesis. Afterwards, the average quiz score was 11.9 (or 66%), indicating a normalized gain (defined by the fraction of the material not known a priori that was learned by the time of the post-test) was 0.66.
The mystery source lab, in which three CFLs were hidden from view, was found to be challenging by the students. However, with all students collaborating as a team to brainstorm, within a class period and with extensive discussion, the class succeeded in inferring the actual source structure correctly.
Concluding remarks
These labs have been incorporated in undergraduate radio astronomy courses at Union College and the data for all the plots shown in the figures were obtained by students in these classes. While singledish radio observing methods are sometimes included in undergraduate astronomy classes, aperture synthesis is often de-emphasized due to the need to include high-level mathematics. Using these labs, the students in these classes were able to gain an intuitive understanding that data associated with pairs of antennas at different spacings leads to a function from which one can infer the size of a single source or the separation between sources in the sky. | 4,379.2 | 2018-07-10T00:00:00.000 | [
"Physics",
"Engineering"
] |
An Analysis Educational Guidelines of Mathematics Education Provision for Primary School from High Score Countries on Timss 2015
The objectives of this research were 1) to study effects of student characteristic variables and educational institution characteristic variables on the quality assessment of mathematics study management at primary school of countries achieving high test scores, i.e. Singapore, South Korea, and the Hong Kong Special Administrative Region and 2) to compare similarities and differences of student characteristic variables and educational institution characteristic variables on the quality assessment of mathematics study management in Singapore, South Korea, and the Hong Kong Special Administrative Region. The data used in this research are secondary data from the 2015 Trends in International Mathematics and Science Study (TIMSS). The data were collected from 4,669 students across 100 schools in the Republic of Korea, 6,517 students across 100 schools in Singapore and 3,600 students across 100 schools in Hong Kong. The three steps taken for data analysis are as follows – 1) To analyze descriptive statistics 2) To estimate the ability of students from the assessment of their mathematics proficiency using IRTPRO program; 3) To analyze variables affecting student and school with Hierarchical Linear Modeling (HLM) and two levels of analysis. The findings from the study revealed that 1. The proportion of all variances explained for variables or coefficient of prediction ( R 2 ) indicated Singapore’s coefficient of prediction at a student level was 0.2750 (27.50%) and at a school level was 0.7250 (72.50%). Republic of Korea’s coefficient of prediction at a student level was 0.2183 (21.83%) and at an school level was 0.6288 (62.88%), the Hong Kong Special Administrative Region’s coefficient of prediction at a student level was 0.1477 (14.77%) and at a school level was 0.4482 (44.82%). 2. Multi-level analysis of student-level variables found that 7 variables affecting the quality of mathematics study management. Namely, 3 countries had 2 variables, 2 countries had 4 variables, and 1 country had 1 variable. As for multi-level analysis of educational-level variables, it was found that 5 variables affecting the quality of mathematics study management. Namely, 2 countries had 1 variable and 1 country had 4 variables.
Introduction
Value-added model is a method that helps report results reflecting information about educational management whether or not educational institutions create value-added to learning outcomes more or less by comparing scores from actual learning outcomes or observed scores to predictable learning outcomes (predicted scores) based on student background variables, community context 7228 An Analysis Educational Guidelines of Mathematics Education Provision for Primary School from High Score Countries on Timss 2015 variables, social variables or variables associated with existing achievement (Sirichai [1], [2]. The use of value-added model in education is gathering statistical techniques with the use of student test scores to estimate the effect size of educational institutions or teachers (MaCaffrey, Lockwood, Koretz, & Hamilton, 2003) [3]. There are two ways for using the value-added model. The first one is assessing responsibility in educational institutions that can be auditable and the second one is assessing the relative effectiveness of teachers. Some models consider only students' existing knowledge or take other variables such as sex, religion, and economic status into consideration [4], [5], [6], [7]. The TIMSS 2015 assessment results reported Singapore achieved the highest mean score in mathematics (621 points), followed by South Korea (the average score of 606) while Thailand had the average score of 431 which was listed in a low-level group [8]. Therefore, the researcher was interested in assessing the quality of educational management of the country members and analyzing mathematics study management guidelines for basic education of countries achieving high test scores from Trends in International Mathematics and Science Study with the application of value-added analysis and analysis of differential item functioning in tests. Data in the study were employed from the 2015 Trends in International Mathematics and Science Study (TIMSS) conducted by The International Association for the Evaluation of Educational Achievement (IEA) and Singapore scored the highest on the TIMSS, followed by Hong Kong respectively.
Definitions
Efficiency of quality assessment means coefficient of determination (R 2 ) from the model of quality assessment, which is the multi-level analysis model with factor control in the level of students and educational institutions affecting the learning of students.
TIMSS 2015 is the internationally comparative assessment dedicated to improving teaching and learning in mathematics and science for students around the world. TIMSS 2015 uses the broadly defined curriculum as the major organizing concept in considering how educational opportunities are provided to students and the factors that influence how students use these opportunities. Test items were designed to measure the breadth of content in number, geometric shapes and measures and data display. TIMSS 2015 included an extensive test development effort to support the mathematics assessment framework. At the fourth grade, the test includes 179 items and approximately half the items are constructed responses while the other half are multiple choice.
Methods
The research was conducted and divided into 3 parts as Part 1: Data used in the study; Part 2: Details of data collection; Part 3: Data analysis. Details are shown below:
1) Samples for Study
The study employed secondary data of the 2015 Trends in International Mathematics and Science Study (TIMSS 2015). The sample participating in the study for data collection were students, teachers of mathematics who teach students being the sample in this study, and administrators of educational institutions where the students were learning as seen in
Part 2: Details of data collection
Test -TIMSS 2015 assessment test contained 179 items in mathematics, considered to be many items. To enable students to complete all items within the specified response time, 1 hour 30 minutes (the amount of time spent on each subject was 45 minutes), each subject test was divided into 14 booklets consisting of multiple-choice items and constructed-response items. The tests were made of content and curricula from countries participating in the TIMSS 2015. Each cluster of the items had a proportion of subject content and learning behavior according to the TIMSS assessment framework of mathematics achievement. To collect data for the assessment, the assessment involved assembling the items into 14 blocks and each block contained 23-29 mathematics items. TIMSS provides a standard for systematic random assessment tests with a very careful rotated design. Therefore, students with seating arrangement next to each other will not have a chance to be given the same test booklet and everyone shall start doing the test of each part at the same time.
Student questionnaire: All students being participants in the study were required to respond to a questionnaire after they finished the test. Students completed the 30-minute questionnaire designed to provide information about their general information, studying mathematics in their educational institutions, using computers for studying mathematics and other, their educational institutions, activities they do outside their educational institutions, and doing mathematics homework.
Educational institution questionnaire: An administrator whose students were the study participants was required to respond to a questionnaire designed to provide characteristics of the educational institution, school operation as being an administrator, parents' participation in school activities, learning atmosphere in school, mathematics study management in school, students' behaviors, and sources of learning and technologies.
Part 3: Data Analysis
This research had three steps of analysis as follows: Step 1 To analyze descriptive statistics The analysis of basic statistic values of data was conducted for analyzing fundamental data by means of descriptive statistics, i.e., the frequency, percentage, mean, standard deviation, highest value, and lowest value.
Step 2 To estimate students' ability To estimate the ability of students from the assessment of their mathematics proficiency using the IRTPRO program.
Step 3 To analyze affecting variables To analyze the variables affecting the student and school with Hierarchical Linear Modeling (HLM) and two levels of analysis and comparing the similarity and difference of characteristic variables of students and schools toward the quality assessment of the mathematics subject. Competency in describing variance of dependent variables with predictor variable or coefficient of determination (R 2 ) in each model had the following equation [3], [9], [10]: Variance of residual value reduced when with predictor variable Variance of residual value reduced when without predictor variable
Results
The results of variables affecting on a quality assessment of learning mathematics were concluded as follows:
Singapore
Effects of student characteristic variables and educational institution characteristic variables on the quality assessment of mathematics study management at primary school of countries achieving high test scores, i.e. Singapore, Republic of Korea, and the Hong Kong Special Administrative Region are detailed below:
Singapore
Multi-level analysis in Singapore based on fixed effect tests found that the mean students' abilities from mathematics skill assessment in all educational institutions (G00) had no variation from zero (G00 = -0.0009), the regression coefficient of educational institution variables having the maximum positive value was policies supporting academic achievement in educational institutions ( β = 0.0390). This is to say that educational institutions providing policies to support academic achievement will have increased students' abilities from assessment skills on mathematics. The student-level variable having the most positive influence was confidence in learning mathematics ( β = 0.1859).
Random effect test results found that the educational institution-level remainder of students' abilities from mathematics skill assessment with controlled student-level and educational institution-level variables (U0) or value-added of educational institutions had variation among educational institutions with a statistical significance level of 0.01 ( 2 χ = 733.91281). The variation among educational institutions was 0.0449 which variance could be described by 8.30% and variation within educational institutions was 0.4959 which variance could be described by 91.70%.
Republic of Korea
Multi-level analysis in Republic of Korea (South Korea) based on fixed effect tests indicated that the mean students' abilities from mathematics skill assessment in all educational institutions (G00) had no variation from zero (G00= -0.0221), the regression coefficient of educational institution variables having the maximum positive value was recruiting students to schools based on mathematical calculation skills ( β = 0.0720). This is to say that educational institutions recruiting students based on mathematical calculation skills will have increased students' abilities from assessment skills on mathematics. The student-level variable having the most positive influence was confidence in learning mathematics ( β = 0.2617).
Random effect test results found that the educational institution-level remainder of students' abilities from mathematics skill assessment with controlled student-level and educational institution-level variables (U0) or value-added of educational institutions had variation among educational institutions with a statistical significance level of 0.01 ( 2 χ =378.38310). The variation among educational institutions was 0.0312 which variance could be described by 5.16% and variation within educational institutions was 0.5734 which variance could be described by 94.83%.
The Hong Kong Special Administrative Region
Multi-level analysis in the Hong Kong Special Administrative Region based on fixed effect tests indicated that the mean students' abilities from mathematics skill assessment in all educational institutions (G00) had no variation from zero (G00= 0.0069), the regression coefficient of educational institution variables having the maximum positive value was a low level of discipline problems among students ( β = 0.0499). This is to say that educational institutions having a low level of discipline problems among students will have increased students' abilities from assessment skills on mathematics. The student-level variable having the most positive influence was confidence in learning mathematics ( β = 0.1502).
Random effect test results found that the educational institution-level remainder of students' abilities from mathematics skill assessment with controlled student-level and educational institution-level variables (U0) or value-added of educational institutions had variation among educational institutions with a statistical significance level of 0.01 ( 2 χ = 695.01457). The variation among educational institutions was 0.0955 which variance could be described by 15.02% and variation within educational institutions was 0.5405 which variance could be described by 84.98%. Proportion of all variances of variables that can be described or coefficient of prediction (R 2 ) indicated Singapore's coefficient of prediction at a student level was 0.2750 (27.50%) and at an educational institution level was 0.7250 (72.50%).
Republic of Korea's coefficient of prediction at a student level was 0.2183 (21.83%) and at an educational institution level was 0.6288 (62.88%).
The Hong Kong Special Administrative Region's coefficient of prediction at a student level was 0.1477 (14.77%) and at an educational institution level was 0.4482 (44.82%).
Discussion
The research results revealed that there were various variables having influence on changes in mathematics assessment test scores of Republic of Korea, Singapore, and the Hong Kong Special Administrative Region with regard to the student-level such as sex, feeling of being a part of school, being bullied by friends at school, positive attitude towards teachers of mathematics, confidence in learning mathematics, learning resources at students' homes, and support of learning outside the classroom, and with regard to the educational institution-level such as support of instructional media, policies supporting academic achievement, a low level of discipline problems among students, recruiting students to schools based on mathematical calculation skills, and students' level of preparedness [11], [12]. It can be seen that characteristics of student-level and educational institution-level variables had effects on the quality assessment of educational management, having consistency with real situations. The quality of educational institutions in many countries is highly different. One of various reasons is difference in significant resources such as finance, personnel or sizes of educational institutions. Such differences appear to be value-added different in each educational institution.
With regard to the comparison of similarities and differences of student characteristic variables and educational institutions towards the quality assessment of mathematics study management among Republic of Korea, Singapore, and the Hong Kong Special Administrative Region, the quality assessment of educational management using value-added analysis for maximum efficiency, it is necessary to take details of student-level variables into consideration so as to increase reliability of the obtain assessment results as well [13], [14].
Suggestion for application of research
This research was conducted to study variables of Republic of Korea, Singapore, and the Hong Kong Special Administrative Region being the countries achieving a high level of TIMSS2015 assessment test scores. Therefore, educational institutions can apply both student-level and educational institution-level variables affecting the quality of educational management to develop their students for having more achievement of mathematic study.
Suggestion for future research
The data used in the research were secondary obtained from Trends in International Mathematics and Science Study in 2015 or TIMSS 2015. Some restrictions that the researcher found were student-level and educational institution-level variables. As the data were collected, some interesting variables from studying relevant documents and research studies were not included in the database. For future research, a researcher should administer a self-collection of data. A researcher probably prepares a test and synthesizes variables to really meet the context of Thailand so as to create additional value to a research study increasingly. | 3,435.6 | 2020-12-01T00:00:00.000 | [
"Education",
"Mathematics"
] |
Transport properties of Methane, Ethane, Propane, and n-Butane in Water
In this work, we have estimated self diffusion coefficients along with the binary diffusion coefficients of mixtures of alkane (methane, ethane, propane and n-butane) in SPC/E water(H$_2$O). Molecular dynamics study of a binary mixture of alkane gas and SPC/E water, with alkane as solute and water as solvent, have been carried out at different temperatures ranging from 283.15 K to 333.15 K. We have taken a dilute solution of 3 alkane (methane, ethane, propane and n-butane) molecules and 971 water molecules in a system. The role of interaction in the structure of the constituents of the system as a function of temperature is studied with the help of the radial distribution function (RDF) and the coordination numbers. The self-diffusion coefficient of the constituents of the mixture was calculated by using mean square displacement (MSD) and the binary diffusion coefficients of alkane in water have been calculated by using Darken's relation. The results are then compared with the available experimental values. The values of self-diffusion coefficients of water from the present work come in good agreement with the experimental values within 9% error. The binary diffusion coefficients of ethane, methane, propane and n-butane agree with the previously reported experimental values. The dependence of the diffusion coefficients on temperature is approximated by Arrhenius-type exponential relationship.
Introduction
Molecular Dynamics (MD) simulations provide a powerful technique for predicting and understanding the structure, function, dynamics, and interactions of atoms and molecules starting from a simple to complex systems in Physics, Chemistry, Biology and Materials Sciences. These techniques are valued because they provide a movie of what atoms do in real life, assuming a given potential energy function. This serves as a complement to conventional experiments, enabling us to learn something new, something that cannot be found out in other ways [1][2][3][4][5][6]. MD simulations also can be used to make quantitative predictions of thermodynamic and transport properties, with applications in fields including protein folding, drug discovery, chemical engineering, and nanoengineering [7]. The two main families of computer simulation technique are molecular dynamics (MD) and Monte Carlo (MC); additionally, there is a whole range of hybrid techniques which combine features from both. The obvious advantage of MD over MC is that it gives a route to dynamical properties of the system: transport coefficients, time-dependent responses to perturbations, rheological properties and spectra [1,2].
Alkanes are saturated hydrocarbons that consist only of the elements carbon (C) and hydrogen (H), where each of these atoms are linked together exclusively by single bonds. The smaller members of the alkane family are gases, while the larger are liquid and solid compounds. First four members (lighter alkanes) of alkane series are methane, ethane, propane, and butane with molecular formula CH , C H , C H in aerosol sprays and have a number of industrial applications beyond fuels, including uses in cosmetics and plastics [8][9][10][11][12]. Furthermore, the first four memebers of alkanes are also neutral analogs of amino acid side chain. Amino acid side chain analogs represent a natural test case for biomolecular interaction [13,14].
As different species of a mixture move under the influence of concentration inhomogeneity, molecular diffusion occurs. The kinetics of many micro structural changes that occur during preparation, processing and heat treatment of materials include diffusion. The typical examples are nucleation of new phases, diffusive phase transformation, precipitation and dissolution of a second phase, homogenization of alloys, recrystallization and thermal oxidation [15]. It plays a vital role in variety of biospheric and atmospheric sciences. Diffusion is basic for transport of matter and for ionic conduction in disordered materials [16,17].
Alkanes are hydrophobic molecules, and as such its solubility in water is rather low, and alaknes molecules tend to aggregate when solvated in water. This behavior is more clearly exhibited by longer n-alkane chains, which may be considered as polymers of methane. The diffusion, solubility or hydrophobicity of hydrocarbons (alkanes) in water is a basic consideration in many processes like processing of natural gases and petroleum, understanding the tertiary structure of proteins, as well as the important role it plays as a driving force in a number of processes occurring within living cells [18][19][20]. The diffusion coefficients of short n-alkane molecules in water for wide ranges of temperatures and pressures have been repeatedly measured experimentally and from numerical simulations during the last 3-4 decades. The experimental values of binary diffusion coefficients of alkane-water mixture have been obtained by Wise and Houghton in 1966 [21], by using the rate of collapse of small bubbles in gas free water. Also, Witherspoon and Saraf in 1965 [22], have been obtained by using the capillary cell method. The binary diffusion coefficients of the two experimental works, obtained by Wise and Houghton [21] are deviated at most 76% than that as obtained by Witherspoon and Saraf [22]. Most recently, Michalis et al [23] calculated the diffusion coefficients of light n-alkanes in water over a wide range of temperature and pressure by numerical simulation method. The calculated diffusion coefficients agree well to some of the earlier experimental results but these results are forcefield parameter dependent. The authors in [23] have used TraPPE [24] force field for the representation of n-alkane. The TraPPE is a united atom model in which methyl (CH 3 ) and methylene (CH 2 ) groups of alkanes are represented as two point particles (pseudoatoms along the alkane chain connected with bonds of fixed length) and positively charged hydrogen atoms are merged with an electronegative carbon atom in each group. All groups have zero charge and hence Coulomb interactions are excluded. The number of dihedral (or torsion) interaction constants is less and therefore the parametrization becomes simpler. The total energy also becomes lower. This model seems less realistic though it takes the least computational cost. On the other hand, OPLS-AA [25] is an all-atom model for alkane that includes Coulomb interaction and the changes in the distribution of the electron density (due to the polar covalent bonds inside molecules) create non-zero atomic charges. Due to explicit hydrogen atoms, OPLS-AA uses a more complex form of dihedral interaction and there is more steric clashes between hydrogen atoms on neighboring molecules. There is both Coulomb and Lennard-Jones interactions. The Coulomb interaction and the more complicated torsion should give the difference in the physical properties though it needs the more computational cost. All-atom model is more realistic than that of united atom model, so it is better to use OPLS-AA to reproduce and predict transport properties of alkanes [26,27]. Such discrepancies in previous experimental works and forcefield parameter dependent of diffusion coefficients calculated by numerical simulation motivated us to carry out a computational work by different forcefield to study the diffusion phenomena of alkane in water. Our results obtained from simulation also can be used as a crude reference for any further studies of diffusion in complex fluid mixtures and improve our understanding of hydrocarbons and other more complex biological macromolecules in water.
The outline of the paper is as follows: in section 2, we discuss the theory of diffusion and method of calculation of diffusion coefficient. Computational details of the our work are stated in section 3. Results of the our work are presented in sections 4. Our conclusions are collected in section 5.
Diffusion coefficient
Diffusion is the process by which matter is transported from one part of the system with higher concentration to the another part of the system with the lower concentration as a result of random molecular motion. The driving force of diffusion is thermal motion of the molecules. The higher concentration of a species in a system at a particular site corresponds to its higher value of chemical potential. The net transport of the mass takes place from the region of higher chemical potential to the region of lower chemical potential. At the end of such net transport, the system attains a situation where there exists same value of chemical potential in the system. In this situation, free energy of the system is minimum and hence its entropy becomes maximum; system is at dynamic equilibrium [28]. The response property of a system to a concetration gradient is measured by diffusion coefficient [1]. The diffusion in a homogeneous system where no chemical concentration gradient exists is known as self-diffusion and the corresponding diffusion coefficient is called self-diffusion coefficient [29]. The mathematical expression to calculate self-diffusion coefficient from molecular positions is famously known as Einstein relation [1,2]. For 3-D system, where α denotes the type of component (solute or solvent) and t 0 is any time origin. The angled brackets á¼ñ indicate the ensemble average. The ensemble average is taken over all atoms of the component α in the simulation and all time origins [30]. The method using Einstein relation for calculating diffusion coefficients is known as mean square displacement (MSD) method.
In this work, we calculate the self diffusion coefficients of both the components i.e. alkane (methane, ethane, propane, n-butane) and water (H 2 O) which can be used to estimate the binary diffusion coefficient using Darken's relation [31] where D A , D B are the self diffusion coefficients of species A and B respectively and N A , N B are the corresponding mole fractions.
Molecular models
The SPC/E (simple point charge/extended) potential model [32] is used in all the simulation for water as a solvent. The OPLSS-AA (Optimized Potentials for Liquid Simulations-All Atom) potential model [25] is used for alkanes (methane, ethane, propane, n-butane) as solute. The system under study consists of 3 alkane (methane, ethane, propane, n-butane) molecules and 971 water molecules separately. In classical force fields like OPLS-AA, the potential functions are derived empirically to describe the atomic interactions. The atoms are treated as spherically symmetric particles and are considered to be connected through covalent bonds to form molecules. Each and every atom experiences a force resulting from its pairwise additive interactions with the rest of the system. The total potential energy U tot includes contributions from both bonded and non-bonded interactions [33]. The bonded interactions are bond stretching (2-body), bond angle (3-body) and dihedral angle (4-body) interactions. A special type of dihedral interaction (called improper dihedrals) is used to force atoms to remain in a plane or to prevent transition to a configuration of opposite chirality (a mirror image). The nonbonded interactions are represented by the Lennard-Jones potential and Coulomb potential. Therefore, the total potential energy function of a system can be written as as [33]: The bond stretching between two covalently bonded atoms i and j is represented by harmonic potential [33] where k ij b is the force constant and b ij is the equilibrium bond length between two atoms i and j. The bond angle vibration between a triplet of atoms i−j−k is also represented by a harmonic potential on the angle Θ ijk [33] where k ijk Q is the force constant and ijk 0 Q is the equilibrium bond angle.
The proper dihedral angle is defined by the angle between the ijk and jkl planes. In this study, we have used the following dihedral potential (Ryckaert-Bellmans potential) [33] for alkanes: where f is the dihedral angle and c 0 , c 1 , c 2 , c 3 are constants. The bonded parameters for water and alkanes are given in the table 1.
The non-bonded interatomic interaction is the sum of Lennard-Jones interaction (U LJ ) and Coulomb interaction (U Coul ), that can be written as: where r ij is the Cartesian distance between the two atoms i and j; α and β indicate the type of the atoms. The nonbonded parameters for alkanes and water is given in the table 2 Here OW and HW represent the oxygen and hydrogen atoms of the water molecules respectively and C(CH 4 ), C(CH 3 ) and C(CH 2 ) are the methane, methyl and methylene carbon atoms of the alkane molecules respectively. The parameters for the non-bonded Lennard-Jones interaction between two different atoms for OPLS-AA force field are written as [33]:
Simulation procedure
MD simulation was carried out in a cubic box with periodic boundary conditions [2] using GROMACS 4.6.5 [34,35]. The distance to the edge of the box from the solute (alkane) is an important parameter for defining the size of the box. Since we are using periodic boundary conditions, we must satisfy the minimum image convention. That is alknane (solute) should never see its periodic image, otherwise the forces calculated will be spurious. The size of the box defined here is sufficient for just about any cutoff scheme commonly used in simulations. After solvation, addition of 971 water molecules and 3 alkanes molecules in simulation box, energy minimization is carried out with a cut off restriction of 1.0 nm to avoid unphysical van der Waals contact caused by the atoms that are too close [33]. Energy minimization brings the system to equilibrium configuration, removes all the kinetic energy from the system, reduces thermal noise in structure and brings the system to one of the local minimum. Steepest descent algorithm has been used for energy minimization and the algorithm stops when the maximum of absolute value of force components is smaller than the specified value [33]. The energy (potential) of the system after energy minimization is shown in figure 1.
After energy minimization, isobaric-isothermal (NPT) equilibration was carried out at different temperatures, from 283.15 K to 333.15 K and a pressure of 10 5 Nm −2 by using velocity-rescaling thermostat [36] and Berendsen barostat [37] at a coupling time τ t =0.01 ps and τ p =0.8 ps respectively. We used MD integrator [38] with time step size 2 fs for 10 9 steps, which makes equilibration run of 200 ns. The velocity is generated initially according to a Maxwell distribution function at a specified temperature [33]. All the bonds are converted to constraints using SHAKE algorithm [39]. During equilibration short range coulomb interaction Table 1. Force-field (bonded) parameters for SPC/E water and OPLS-AA Alkanes. The units of equilibrium bond length (b) and equilibrium bond angle (Θ 0 ) are nanometer (nm) and degrees (°) respectively. Similarly, the units of k b , k Θ and c i (c , c , c , c 0 1 2 3 ) are kJ mol nm 1 2 --, kJ mol rad 1 2 -and kJ mol 1 respectively. Table 2. Force-field (non-bonded) parameters for SPC/E water and OPLS-AA Alkanes. Atoms and Lennard Jones interaction each with a cut off parameter of 1.0 nm were considered with periodic boundary conditions [2]. The long range Coulomb interaction is handled via the PME (Particle Mesh Ewald) algorithm [40,41] with fourier spacing 0.12. We monitored the temperature, pressure, density, and energy of each studied system to bring it in thermodynamic equilibrium because dynamic property like diffusion coefficient varies with such parameters. The density and simulated temperatures at different coupling temperatures for propane in water are shown in table 3. Table 3 shows that our simulated value of system density is in maximum deviation of around 1% with that of water density. After equilibration run we perform the production run to calculate the equilibrium properties of the system such as diffusion coefficient by fixing the number of particles, volume and temperature i.e.NVT ensemble. We use velocity-rescale thermostat for this case. We don't couple the system to a fixed pressure and use the structure obtained after equilibration run by which we fix the volume of the system. The production run was carried out for 100 ns with the time step of 2 fs.
Results and discussion
In this section, we discuss the structural and dynamical properties of the constituents of the systems. Table 3. Values of simulated temperature (T sim ) and density at various coupling temperatures (T co ) of propane-water system.
Radial distribution function
Radial distribution functions (RDF) were obtained from the simulations, in order to analyse the local structure around the solute and solvent molecule. Radial distribution function (RDF) gives the idea of distribution of neighboring molecules with respect to the reference molecule considered in the calculations. In periodic systems, RDF shows sharp peaks and troughs up to infinity where the separations and heights are the characteristics of the lattice structure. In liquids however, RDF oscillates up to certain orders and then attains constant value as unity [43]. We have calculated RDF g(r) of oxygen atoms of water molecules g OW-OW (r), oxygen of water and methyl (CH 3 ) and methylene (CH 2 ) carbons of alkanes (methane, ethane, propane, butane) g C-OW (r). Figure 2(a) represents the RDF of oxygen atoms of water molecules at different temperatures. For the structure of the water molecule, the centre of mass is practically the same as the oxygen centre, which is also the van de Waals sphere centre. This makes the results for the oxygen atom representative of the whole water molecule. The figure explores three different peaks which implies that the molecules are correlated up to third solvation shell. The value of σ for OW-OW is 0.3165 nm, and the van der Waals radius (2 1/6 σ) is 0.3553 nm [33]. The figure 2(a) shows that excluded region remains fairly independent (0.276 ± 0.002 nm) of changing temperature. It also calculates that the excluded region is smaller than the van der Waals radius which indicates the contributions from other potentials in addition to the van der Waals potential [29] (see figure 2(b)). The first peak position remains at the same position within the error of±0.002 nm as a function of temperature. The magnitudes of all the peaks in RDFs decrease on rising temperature. Furthermore, the width of the peaks increases on increasing temperature. Both variations are the consequences of excess volume created in the system and the decrease in co-ordination number with increase in temperature. These results show that the movement of the particles enhances and the solvent becomes less structured as temperature is increased. The figure 2(a) and that of 2(b) [29] show that Lennard-Jones plus Coulomb potential covers almost entire potential except many body effects. The second peak and third peak positions of the g OW-OW (r) are 0.450±0.002 nm and 0.680±0.002 nm respectively. These results are in good agreement with the available references [44,45]. From the simulations, we found that the RDFs between oxygen atoms of water molecules in different alkane-water system are identical in all respects. It showed that the presence of the solute molecule in an infinite dilution has a negligible effect on the global structure of the solvent.
The RDF between carbon of alkane and the oxygen of water describes solute-solvent interaction. Figure 3 shows the RDF between the methyl (CH 3 ) and methylene (CH 2 ) carbons of the alkanes and the oxygen atom of water, calculated from the simulations at 293.15 K. In figure 3, it can be seen that height of the both CH 3 -OW and CH OW 2 -peak clearly decrease with increase in the length of the carbon atoms of the alkane. The methyl carbon groups can always approach the water molecule at closer distances (first peak position ∼0.38 nm), and the corresponding peaks are systematically more intense than the CH OW 2 -for distances under ∼0.47 nm. Moreover, the magnitude as well as the excluded regions for g CH OW 3 -and g CH OW 2 -are different. This is because methyl and methylene carbon do not possess the same partial charge. Furthermore, when oxygen of water (OW) approaches to methyl carbon, it (or the water molecule) also experiences the interactions due to three hydrogens attached to methyl carbon and when oxygen of water (OW) approaches to methylene carbon, it (or the water molecule) experiences the interactions due two hydrogens attached to methylene carbon. This means when OW approaches to these carbons of alkane, it does not exactly experience the same nature of interaction field around methyl and methylene carbon. Furthermore, to obtain the number of interaction sites or co-ordination number (N c ) of each type in a coordination shell around the reference site, we have integrated the radial distribution functions (RDFs) as [43]: Where r min is the radius of the coordination shell (location of the RDF minima) and ρ is the number density. We have estimated the number of sites of a given groups or molecules around another groups or molecules, as a function of the distance from its centre. In figure 2, for g OW-OW (r), the peak maxima (r max ) of the first shell are obtained at 0.276 nm and the minima (r min ) at 0.334 nm for all the alkane-water system. The first shell coordination number was found to be 5.3±0.1 for water molecules. These values or the coordination numbers are in good agreement with the available reference values [44,45]. The first shell co-ordination number of water molecules around methyl carbon is n H ∼23 , in agreement with the MAS NMR data [46]. The first cell co-ordination number of water for methylene carbon is greater than that of methyl carbon. Furthermore, to test the solubility of alkanes in water, we have calculated free energy of solvation of methane, ethane, propane and n-butane in water at 300 K. The estimated free energy of solvation for methane, ethane, propane and n-butane in water are 9.08±0.12, 9.02±0. 19, 9.61±0.21, and 10.99±0.18 in units of kJ mol −1 respectively. The values of free energy of solvation follows the same trend as reported by Ashbaugh et al [47]. The combination of these effects; RDF analysis, co-ordination numbers of methyl and methylene carbons of alkanes and free energy of solvation of alkanes in water, suggests that the methyl groups of alkane molecules have a preferential tendency to be dissolved in the vicinity of water molecules and that this tendency decreases with chain length. The details of the structural properties with the co-ordination numbers of water molecules around the methyl and methylene carbons of alkane-water system is provided in table 4.
Diffusion coefficients
The self-diffusion coefficient of alkane (methane, ethane, propane, butane) and water are calculated by using Einstein's relation (MSD method).
The figure 5 shows the variation of diffusion coefficient r t t D 6 2 = á ñ * ( ) ( ) with time for ethane at temperature T=283.15 K. In figure 5, at first the diffusion coefficient is high due to ballistic motion and later as time passes it remains constant after 2 ns. This constant portion of the graph gives the diffusion coefficient [29]. Figures 6 and 7 show the MSD plot of propane and water at different temperatures respectively. The value of self-diffusion coefficients of the desired species is calculated using equation (1). In our case, we have a simulation time of 100 ns and the best statistics for alkane (methane, ethane, propane and n-butane) molecule is found within 2 ns which can also be justifiable from figure 5 and is very small in comparison to simulation time; this is due to lesser number of alkane molecules. For water molecule best statistics is found within 5 ns due to larger number of water molecules. The binary diffusion coefficient of the alkane-water system is estimated using Darkens relation (equation (2)). Our system consists of 3 alkane molecules (methane, ethane, propane and n-butane each) and 971 water molecules, a separate sytem, so the mole fraction for alkane is 0.003 and that of water is water is 0.997. The binary diffusion coefficient is very close to that of self-diffusion coefficient of solute in the mixture due to low solute concentrations studied in this work. Table 4. Strucural paramerters from MD simulaion of alkane-water system at 293.15 K. The positions of the first maxima (r max ), first minima (r min ), and co-ordination numbers (N c ) in the first shell of the radial distribution functions are presented.
System
Groups r max (nm) r min (nm) N c Figure 8 shows the variation of binary diffusion coefficients of the molecule with increasing the numbers of carbon atoms of the alkane chain. The diffusion coefficents of the molecules decreases with increasing the number of carbons present in the alkane molecules. Thus, the diffusion coefficients of methane is highest and that of n-butane is lowest among the studied system.
The values of self diffusion coefficients of water (H O 2 ) and binary diffusion coefficients of alkane (methane, ethane, propane, n-butane) in water along with the references at different temperatures are presented in tables 5 and 6 respectively . The comparison of the values from the tables and also from other references explores that self-diffusion coefficients of water from the present work, in general, come in very good agreement with the previous studies [29,42,48,49]. The experimental and simulated values of self-diffusion coefficients of water in all system are in good agreement with maximum deviation of 11% at 283.15 K [48]. The simulated values of alkane, on the other hands, show different attitude towards the references [21][22][23]50]. They lie very well in between the experiment performed by (1) Wise and Houghton [21] and (2) Witherspoon and Saraf [22]. Figure 9 is the comparision of simulated values with the experimental values [21,22] of binary diffusion coefficients of ethane-water system at different temperatures. They lie very well in between the experimental values [21,22] within the error of 33%. The deviations of the simulated values with the experimental values follows the same trends in all alkane-water system. There are very large differences in the values of the binary diffusion coefficients reported by them [21,22]. The diffusion coefficient for both the solute (alkane) and solvent (water) molecules increases with the enhanced temperature, which is due to the increase in the velocity of the molecules, as per relation of the thermal energy with temperature. Moreover, as the density of the system decreases with increasing temperature, the space available for the alkane molecules to execute random-walk motion increases [49]. Finally, based on these facts, the mean squared displacement increases and this change is incorporated by Einsteins relation to yield an increased self-diffusion coefficient.
Temperature dependence
Diffusion coefficients of a system generally depends strongly on temperature, being low at low temperatures and are found to be increased with increase in temperature. Temperature variations in diffusivity is explained by the Arrhenius formula [15].
which can be expressed as where, D 0 denotes the pre-exponential factor, also called frequency factor, E a is the activation energy for diffusion, T is the absolute temperature, R=N A k B is molar gas constant whose value is is 8.31 J mol K 1 1 Both E a and D 0 are called the activation parameters of diffusion. The simulated binary diffusivities of table 6 have been fitted to Arrhenius-type expression equation (12) by least squares method, the pre-exponential constant D 0 and the activation energy E a are reported in table 7. Figure 10 shows the temperature dependence of diffusion coefficient of alkane in water. As the simulation data fit to the equation (11), the temperature dependence of diffusion coefficient follows Arrhenius behavior. From figure 10, it is seen that the diffusion coefficients increase with increase in temperatures. This could be due to the fact that at higher temperature the difference in the density of the system (i.e. alkane and water) and water increases with increase in temperature (see table 3). Figure 11 shows the temperature dependence of diffusion coefficient of water in the system containing water and alkane. Figure 11 explicitly shows the temperature dependence of diffusion coefficient of water also follows Arrhenius behavior.
Conclusions
In this work, we have computed self diffusion coefficients along with binary diffusion coefficients of the system containing 971 water (H 2 O) molecules and 3 alkane (methane, ethane, propane, n-butane) molecules over a wide range of temperatures from 283.15 K-333.15 K, using molecular dynamics simulation technique. The Extended Simple Point Charge (SPC/E) model of water and Optimized Potential for Liquid Simulations-All Atom (OPLS-AA) of alkane were used. Here alkane molecule acts as a solute and water (H 2 O) as a solvent. Prior to the production run for the calculation of structure and transport properties, we monitored the temperature, energy, density of the studied system during equilibration to know the equilibrium state of the system. Structural properties has been studied using radial distribution function (RDF) and co-ordination number of the interaction cites has been calcualted integrating RDF to the first co-ordination shell. The obtaind RDFs Table 6. The simulated value of binary diffusion coefficient of alkanes (methane, ethane, propane, n-butane) and also the references for them as a function of temperature are listed. show that the system becomes less structured at high temperatures. The equilibrium structural properties of both the components (alkane and water) were studied calculating corresponding radial distribution function (RDF) namely g OW-OW (r) RDF of oxygen atoms of water molecules, g r CH OW The main aim of our work was to study diffusion phenomenon of the mixture of water and alkane and study its temperature dependence. The self-diffusion coefficients of water and alkane (methane, ethane, propane and n-butane) was estimated using Einstein's method separately. The diffusion coefficients of water are deviated within 11% of the available experimental data. The binary diffusion coefficient of the system was calculated using Darken's relation. The values of binary diffusion coefficients of alkane in water do not agree well with the previous experiments values. It lies in between these two experimental values of Wise and Houghton and Witherspoon and Saraf and the deviation is increasing with increase in temperature. The Arrhenius diagram (plot of natural logarithm of diffusion coefficient versus inverse of temperature) was plotted for self-diffusion coefficients of water and binary diffusion of the alkane-water system separately and it showed temperature dependence of diffusion coefficient of both are of Arrhenius type. | 6,928.6 | 2017-10-14T00:00:00.000 | [
"Chemistry",
"Environmental Science"
] |
Vortex Correlation Functions in Maxwell-Chern-Simons Models
Maxwell-Chern-Simons models in the presence of an instanton anti-instanton background are studied. The saddle-point configuration corresponds to the creation and annihilation of a vortex localized around the Dirac string needed to support the nontrivial background. This configuration is generalized to the case in which a nonlocal Maxwell term is allowed in order to fulfill the finite action requirement. Following 't Hooft procedure, we compute the vortex correlation functions and we study the possibility of obtaining spin 1/2 excitations. A possible connection with the bosonization of interacting three-dimensional massive fermionic systems is also discussed.
Introduction
Bosonization is an important tool to study interacting fermionic systems. Concerning the case of parity breaking models in (2 + 1)D, many efforts are being undertaken in order to improve this program. In particular, it is well established that the correlation functions of U(1) fermionic currents correspond to correlation functions of topological currents in the dual bosonized theory [1,2]. This feature holds for both (1 + 1)D and (2 + 1)D models and has a universal character [3], as stated by the following formula (1.1) where K F stands for the free fermionic action and K B is the corresponding bosonized version. The term I[j F ], with j F µ =ψγ µ ψ, represents a generic current interaction. The bosonizing field λ is a scalar field φ in (1 + 1)D, and a vector field A µ in (2 + 1)D. Accordingly, ε∂λ has to be read as ε µν ∂ ν φ or ε µνρ ∂ ν A ρ , respectively. It is worth mentioning here that the mapping (1.1) provides a unifying framework to derive universal transport properties of both one and two-dimensional interacting fermionic systems [4].
Similarly to the (1+1)D case, where fermions can be associated to soliton configurations in the dual massive sine-Gordon theory [5], one would like to understand the elementary fermionic modes in (2+1)D in terms of topological excitations in the bosonized dual theory. The latter is a gauge theory whose quadratic part is given by a nonlocal Maxwell-Chern-Simons (MCS) term [1,2,3]. In particular, when a large mass expansion is performed, the dominant term reduces to the usual local MCS action, namely where m is proportional to the fermion mass and η is the Chern-Simons coefficient in the fermionic effective action. In (2 + 1)D, it is a common wisdom to believe that fermions should be related to vortices in the dual theory. The aim of this letter is to pursue this investigation. Combining 't Hooft approach [6] to the quantization of extended objects in euclidean space-time with the Hennaux-Teitelbolm work [7] on instantons in MCS theory, we shall be able to show that vortices may appear as excitations with definite mass and spin in a generalized MCS model. The relationship among vortices in MCS and fermionic excitations will be analysed through Polyakov's spin action for Bose-Fermi transmutation in (2 + 1)D [8]. 't Hooft framework is particularly adapted whenever the Mandelstam operators are not known. As an example, it has been successfully used to obtain a covariant quantization for the soliton excitations of the Skyrme model [9]. We also point out that the finite action requirement for vortex configurations is fulfilled by introducing a suitable nonlocal Maxwell term.
The present letter is organized as follows. In Sect.2 we study MCS vortex solutions in the presence of an instanton anti-instanton background. Sect.3 is devoted to the vortex quantization through the corresponding correlation functions and to the analysis of Polyakov's term. In Sect.4, the nonlocal MCS case is discussed.
Vortices in Maxwell Chern-Simons
In recent works [10,11] the existence of vortex solutions in Maxwell-Chern-Simons (MCS) in the presence of singularities has been discussed. These singularities turn out to be related to the continuum limit of a compact lattice version of the theory. The resulting classical solution to the equations of motion displays the behavior of a vortex. Although this configuration could be interpreted as a kind of energy lump due to its fast decay given by the MCS topological mass, the corresponding total energy has a mild logarithmic divergence in the ultraviolet region [11]. In addition, the vortex is pinned around the position of the singularity, which is introduced as an external fixed source. In order to promote this field configuration to a particle-like excitation we have to give translational degrees of freedom to the vortex and render its energy finite. Also, the vortex propagator should be well behaved, without unphysical modes.
Following 't Hooft procedure, the vortex propagation in euclidean space is obtained by integrating over configurations where a vortex excitation is created out of the vacuum at a space-time point x 1 and after an intermediate propagation is annihilated at x 2 . Before x 1 and after x 2 the topological charge vanishes, while it is nonvanishing in between due to the existence of the vortex. Therefore, suitable instanton anti-instanton singularities have to be introduced at x 1 and x 2 in order to match these inequivalent topological configurations. In the present three-dimensional case these singularities can be seen as a monopole anti-monopole pair [7,12] for the dual field strength configuration F µ = (1/2)ε µνρ F νρ , located at x 1 and x 2 , respectively. One possible action describing the coupling of this pair with the MCS field is given by where γ is an open smooth string running from The equations of motion are easily worked out and yield [10] F cl The term R µ in the above expression represents a vortex configuration propagating from x 1 to x 2 , having both magnetic and electric field. We observe that, due to the presence of the exponential factor in eq.(2.6), R µ is localized around the curve γ, on a scale of the order of 1/m. We also note that the Bianchi identity ∂ µ F cl Therefore, the flux Φ of the nonsingular part R z of the magnetic field, computed through any constant time plane Σ located between x 1 and x 2 , is where the second equality follows by closing Σ with the addition of a surface at infinity giving no contribution due to the exponential decay of R µ . The static limit corresponds to a configuration where the vortex is created in the far past and annihilated in the far future, and it always sits at the same position, that is, the associated string γ is an infinite straight line along the euclidean time-axis, identified with the z−axis. In this case, eq.(2.6) reproduces the vortex profile discussed in ref. [11]. In particular, for the magnetic field we get with K 0 being the Bessel function and ρ the radial coordinate in the (x, y)− plane. Also, the point-like singularity introduced in [11], where the vortex is pinned, is nothing but the intersection of the string with the constant time plane Σ.
Quantization of the MCS vortices
Following 't Hooft prescription [6], in order to compute the vortex propagator we have to path integrate over all physical inequivalent configurations representing the creation, propagation and annihilation of the vortex. Therefore, we integrate over the gauge fields and all possible strings, and define the two-point vortex correlation function as where Γ γ represents the effective action obtained by integrating over all gauge configurations in a fixed string background. The presence of the measure Dγ is natural in a path integral approach [8], being in fact needed in order to ensure the string independence of G(x 1 − x 2 ). This prescription should guaranty the locality of the quantum vortex field operators whose expectation value has to be identified with G(x 1 − x 2 ), although, in general, a closed form for these operators is not known.
In the pure Maxwell case, corresponding to the limit m → 0, Γ γ turns out to be independent from the particular Dirac string joining the singularities meaning that here the string is not observable. The integration over the paths is now trivial and results in a pure normalization factor. The pathindependence of Γ Max γ allows us to deform the original γ into two strings γ 1 , γ 2 , where γ 1 goes from x 1 to ∞ and γ 2 from ∞ to x 2 . In this case, the vortex correlation function in eq.(3.9) can be written in terms of Mandelstam variables µ(γ 1 ) , µ(γ 2 ), according to The string independence of the effective action (3.10) corresponds to the well established locality properties of the Mandelstam operators, in models containing pure Maxwell terms [13]. Coming back to the MCS case, it is easy to convince oneself that the effective action Γ γ in eq.(3.9) has a nontrivial dependence on γ. Therefore, as the string is now observable, we have to integrate over all paths, according to the general definition (3.9) . On physical grounds, this amounts to take into account all possible intermediate processes representing the vortex propagation. We underline that in this case an explicit expression for the vortex operators is not available. However, the knowledge of the vortex propagator is sufficient to characterize the physical properties of the vortex at the quantum level. As the integration over the gauge fields in eq.(3.9) is quadratic, we obtain where A cl is a vector potential for the saddle point configuration F cl in eq.(2.6) . After performing the space-time integral, Γ γ can be cast in the form of a double-line integral over the curve γ, with a kernel which is found to be localized on a scale of the order of 1/m (see eq.(4.22) in Sect.4). For well separated x 1 and x 2 , and smooth strings, the effective action Γ γ , up to order 1/m, is where L is the length of the curve γ, e α (s) is the tangent vector dy α /ds and the parameter s is defined through the relation e α (s)e α (s) = 1. The factor λ is logarithmic divergent [11], and will be discussed in the next section.
Notice that the presence of the second term in (3.13) is in fact already known [8] and takes into account velocity correlations at different points along γ. In order to obtain the vortex propagator G(x 1 − x 2 ) it remains to perform the integration over all possible paths γ with fixed end-points. This integration can be found in [8], yielding as final result the Klein-Gordon propagator.
The spinless character of this excitation is due to the complete cancellation of all imaginary terms of the kind arising from the presence of the Chern-Simons action. Observe that, for closed γ, this expression is known as the self-linking of the curve. It is worth underlining that, depending on the coupling between the string and the MCS gauge potential, different kinds of correlation functions will be obtained, leading to different quantum numbers for the corresponding vortex excitations. For instance, if instead of (2.3) one considers the more general coupling for the leading terms of the effective action Γ γ one gets In particular, for ηϑ 2 = 2π, Polyakov's Bose-Fermi transmutation occurs and the vortex propagator turns out to be that of a spin one-half fermionic excitation [8,14] where σ µ are the Pauli matrices. With respect to the spinor index structure of this propagator we refer the reader to the original work [8]. In this regard, it is useful to point out that the functional integration in eq.(3.9) should be equipped with appropriate fixed boundary conditions around the monopole anti-monopole singularities, carrying a representation of the rotation group. At the locations of these singularities vortices with given quantum numbers will be created and destroyed. This will lead to the correct index structure for the final expression of the propagator. This framework has been worked out in ref. [9] in the case of skyrmions.
Vortices in nonlocal MCS models
So far, we have seen that vortex configurations are present in MCS theory when a nontrivial instanton anti-instanton background is introduced. Depending on the coupling with the string, the vortex quantum numbers may correspond to a bosonic or a fermionic excitation. However, as it has been already pointed out in [11], the energy of this configuration displays an ultraviolet logarithmic divergence. The aim of this section is to face this problem. One possibility in order to have a finite action configuration is that of introducing nonlocal terms in the action, whose effect is that of properly regularizing the ultraviolet region. For instance, this can be done by modifying the Maxwell term in (3.15) according to We also require that the Fourier transform is positive definite. The local Maxwell term is recovered by taking O(x − y) = (1/m) δ (3) (x − y). We remark here that nonlocal MCS models appear in a natural way in the context of bosonization [2]. Indeed, these terms arise from the evaluation of the massive fermionic determinant in a generic background. We also observe that the presence of a current-current interaction in the starting fermionic action will produce in the bosonized action an additional nonlocal Maxwell term, which follows from the universal bosonization rule (1.1), namely Coming back to the nonlocal MCS action (4.18), the corresponding classical vortex profile gets modified according to Upon substitution of this expression in eq.(4.18) one obtains We note that the real part of the action is positive. Also, in the static limit in which γ is an infinite straight line coinciding with the z−axis, the action per unit length turns out to be where the quantities in boldface correspond to the two-dimensional projection k → (k,0). In the local case ( O = 1/m) this expression contains a mild logarithmic ultraviolet divergence [11]. However, in the case where O behaves in the uv region as k α (α > 0), the action per unit length is rendered finite, no matter how small α is.
Conclusions
Following 't Hooft procedure, we have studied vortex correlation functions in MCS models considering different couplings between the gauge fields and the string associated with the instanton anti-instanton pair. This string arises in the continuum limit of a compact lattice version of the theory [11,10].
With the exception of the pure Maxwell type case, the string is observable. Therefore, we have defined vortex correlation functions by path integrating over both the gauge fields and the string. This corresponds to take into account the vortex translational degrees of freedom. It is the integration over the string which finally leads to a well behaved propagator, without unphysical poles.
Concerning the bosonization of (2 + 1)D fermionic systems we remind that, for large m, the dominant term in the bosonized action corresponds to the local MCS [1]. Furthermore, we have been able to see that the coupling in eq.(3.15) leads to a vortex excitation with spin 1/2, whenever the condition ηϑ 2 = 2π is satisfied. Although a direct derivation of the bosonization formula for fermion propagators has not yet been obtained, this result gives a strong indication that the elementary fermionic excitations correspond indeed to vortices in the dual theory.
These vortex configurations have been generalized to the case in which a nonlocal Maxwell term is present. We have shown that this kind of term could improve the ultraviolet behavior so as to render the vortex energy finite.
On the other hand, for ηϑ 2 = 2π, the possibility of identifying vortex and fermionic correlation functions together with the universal bosonization rule (4.20) could give a useful framework to analyse the spectrum of the excitations for interacting fermionic systems. While in the local MCS case the localization of the vortex on a scale of the order 1/m leads to the existence of a pole in the vortex propagator due to eq.(3.16), in the nonlocal case, depending on the fermionic interaction kernel G(x − y) in eq.(4.20), the vortex profile (4.21) could spread out. This would imply the breaking of the validity of the long distance approximation (3.16). This may result in the absence of the pole in the propagator, meaning that the quasiparticle picture could be destabilized by the interaction among fermions. | 3,775.8 | 2002-01-02T00:00:00.000 | [
"Physics"
] |
Data on isolation and purification of fibrinolytic enzyme from Pseudomonas baetica SUHU25
The present dataset provides methodology to isolate and purify fibrinolytic enzyme from microbe isolated from the natural source. The information provided in this data article includes (1) isolation and identification of Pseudomonas baetica SUHU25, (2) optimization of cultural conditions, (3) extraction and purification of fibrinolytic enzyme, (4) protein estimation, (5) assay of fibrinolytic activity, (6) SDS PAGE for purified enzyme protein, (7) effect of pH, temperature and metal ions on fibrinolytic activity of enzyme protein, and (8) In-vitro blood clot dissolution assay.
Data
Pseudomonas baetica SUHU25 isolated from a local fish market showed proteolytic activity on skim milk agar as presented in Fig. 1 and fibrinolytic activity on fibrin agar in Fig. 2. Fig. 3 shows the Phylogenetic tree of Pseudomonas baetica SUHU25 obtained after 16s rRNA sequencing. Figs. 4e8 shows relative activity (%) of fibrinolytic enzyme of Pseudomonas baetica SUHU25 with change in carbon source, nitrogen source, pH, temperature and incubation period respectively. Fig. 9 shows fibrinolytic enzyme activity of cell free media supernatant of Pseudomonas baetica SUHU25. Fig. 10 represents the SDS PAGE of purified fibrinolytic enzyme from Pseudomonas baetica SUHU25. Fig. 11shows In-vitro blood clot dissolution by the purified enzyme preparation. Figs. 12e14 shows relative activity (%) of fibrinolytic enzyme with change in pH, temperature and presence of various metals respectively. Purification scheme of fibrinolytic enzyme form Pseudomonas baetica SUHU25 is detailed in Table 1.
Screening for fibrinolytic microorganisms
Sample was collected in sterile container from a local fish market at wash and waste disposal site in Pune, India for the isolation of potential fibrinolytic microbes. Primary Screening was done by serially diluting sample with physiological saline up to 10 À3 . 0.1 ml of dilution was plated on nutrient agar plates and incubated at 37 C for 24 hours for microbial growth. Each well isolated bacterial colony after 24 hours of growth on nutrient agar was spot inoculated on skim milk agar for analyzing proteolytic activity. Cultures showing zone of clearance on skim milk agar were further spot inoculated on fibrin agar for 24 hours at 37 C for secondary screening. Zone of clearance was noted after incubation for each culture [1].
Specifications Table Subject Microbiology Specific subject area Microbial enzymes, fibrinolytic enzymes from microbes Type of data Table Figure How data were acquired Isolation of Pseudomonas baetica SUHU25 was done from selective nutrient medium. Identification was achieved by DNA Extraction using Genomic DNA Extraction Kit (BioEra Life Sciences Pvt. Ltd., India), 16s rDNA gene amplification by PCR (BioEra Life Sciences Pvt. Ltd., India) and sequencing (ABI 3130 genetic analyzer and Big dye terminator version 3.1 cycle sequencing kit). Protein estimation and fibrinolytic enzyme protein assay was performed using spectrophotometer (Model ELITE, BioEra Life Sciences Pvt. Ltd., India). SDS PAGE to analyze molecular weight of purified enzyme proten performed using vertical gel electrophoresis system (BioEra Life Sciences Pvt. Ltd., India). Data format Raw Parameters for data collection Incubation temperature ( C), incubation time (hours), relative activities (%), concentrations (mM), molecular weight (kDa) Description of data collection The experimental data was obtained to optimize of cultural conditions, to extract and purify fibrinolytic enzyme, and to characterize fibrinolytic activity of purified enzyme protein from Pseudomonas baetica SUHU25. Also In-vitro application of the purified fibrinolytic enzyme was demonstrated by Blood clot dissolution assay Data source location BioEra Life Sciences Pvt. Ltd., Survey Number 125, Mumbai e Bangalore Highway, Tathawade, Pune -411033, Maharashtra, India.
Data accessibility Data is presented in this article
Value of the data There is extensive research in search of a effective fibrinolytic enzymes as presently available fibrinolytic enzymes are prone to low specificity. The presented data is first report of fibrinolytic enzyme from Pseudomonas baetica SUHU25. The scientific fraternity looking for novel sources of fibrinolytic enzymes can use the presented data A further study on characterization of fibrinolytic enzyme and testing efficacy of enzyme by other available In-vitro and In-vivo methods can be planned
Identification of fibrinolytic isolate
Isolated culture showing larger zone of clearance on fibrin agar plate was identified by 16s rRNA Sequencing. Genomic DNA was extracted from the isolates using BioEra's Genomic DNA extraction kit. 16s rDNA gene was amplified by forward primer 5 0 -AGAGTRTGATCMTYGCTWAC-3 0 and reverse primer 5 0 CGYTAMCTTWTTACGRCT-3 0 with the programme consisting of denaturation at 94 C for 5 minutes and subsequent 35 cycles of denaturation at 94 C for 30 seconds, annealing at 55 C for 30 seconds, and extension at 72 C for 2 minutes followed by final extension at 72 C for 5 minutes. The sequence analysis was performed using the ABI 3130 genetic analyzer and Big Dye Terminator version 3.1 cycle sequencing kit. The amplified product sequence comparison with database was performed using BLAST through the NCBI server [2].
Optimization of cultural conditions
The isolated culture was grown in the mineral salt medium (g/L: KH 2 37, and 40 C) and incubation period (24, 48, 72, and 96 hours) were checked. Optimization of variables was done by one-variable-at-a-time approach. All the experiments were conducted in triplicates.
Extraction and purification of fibrinolytic enzyme
The isolated culture was grown at 37 C for 24 hours in the optimized nutrient medium (g/L: glucose, 10; casein, 10, KH 2 PO 4 , 0.42; K 2 HPO 4 , 0.375; NaCl, 0.015; CaCl 2 .2H 2 O, 0.015; MgSO 4 .7H 2 O, 0.05; and FeCl 3 .6H 2 O, 0.054; pH 6 ± 0.1). After incubation cells were separated from nutrient medium by centrifugation at 6000 rpm for 15 minutes at 4 C. The fibrinolytic enzyme protein was extracted from the cell free supernatant by precipitation with five volumes of ice-cold acetone. Fibrinolytic enzyme from the acetone precipitated proteins was purified by anion exchange resin (DEAE Sephadex A50) and gel filtration chromatography on sephadex G100. All the steps were performed at 4 C.
Protein estimation
Protein was estimated using dye-binding method described by Bradford [3].
Assay of fibrinolytic activity
Fibrinolytic activity was determined as reported by Tharwat et al. [4]. 2.7. SDS PAGE for purified enzyme protein SDS-PAGE was carried out to determine the purity and molecular weight of enzyme protein as described by Laemmli [5] using 10% polyacrylamide resolving gel. The gel was stained with Coomassie Brilliant Blue G 250 to visualize the protein bands.
Effect of pH and temperature on fibrinolytic activity of enzyme protein
The optimal pH for fibrinolytic activity of the enzyme was determined within pH range of 3e10 using acetate buffer (0.1 M, pH 3, 4 and 5), phosphate buffer (0.1 M, pH 6, 7 and 8), and Glycine buffer (0.1 M, pH 9 and 10). The effect of temperature was determined by measurement of residual activity of enzyme after incubation at different temperatures (4,10,20,25,30,37,40, 50 and 60 C). 1 mg ml À1 of concentration of 1 mg ml À1 was pre-incubated in both the presence and absence of each cation for 1 hour at 37 C. The activity of enzyme was assayed as reported by Tharwat et al. [4].
In-vitro blood clot dissolution assay
In vitro blood clot analysis was carried out according to Avhad et al. [6] with some modifications. 5 ml of purified enzyme preparation added to 0.5 g human blood clot in a test tube and kept for incubation at 37 C till the complete blood clot lyses. | 1,710.8 | 2019-09-18T00:00:00.000 | [
"Biology"
] |
Leptogenesis and muon $\boldsymbol{(g-2)}$ in a scotogenic model
We present a detailed study of a scotogenic model accommodating dark matter, neutrino masses and the anomalous magnetic moment of the muon while being consistent with the existing constraints on flavour violating decays of the leptons. Moreover, this model offers the possibility to explain the baryon asymmetry of the Universe via leptogenesis. We determine the viable regions of the model's parameter space in view of dark matter and flavour constraints using a Markov Chain Monte Carlo setup combined with a particular procedure to accommodate neutrino masses and the anomalous magnetic moment of the muon at the same time. We also discuss briefly the resulting collider phenomenology.
Introduction
The Standard Model (SM) gives an accurate description for most of the data up to the TeV scale. Despite its successes, it should nevertheless be considered as an effective theory, which has to be embedded in a more fundamental framework. One reason is the flavour hierarchies in the fermion sector for which we do not know the underlying principle governing its structures. Moreover, there are several experimental observations which require an extension of the SM. This includes neutrinos, which are massless in the SM, but need to be massive in view of neutrino oscillations experiments [1]. Strong arguments from cosmology underline the call for new physics beyond the Standard Model (BSM), such as the presence of dark matter (DM) [2], as well as the baryon asymmetry observed in the Universe [2]. The SM is also challenged by precision measurements of the anomalous magnetic moment of the muon [3,4].
Most of these deviations are located in the lepton sector. Moreover, despite the fact that it ultimately concerns the hadronic sector, the baryon asymmetry can be explained through leptogenesis [5][6][7], a mechanism stemming from the leptonic sector and translating the generated lepton asymmetry to the hadronic sector through the sphaleron processes. Finally, it is worth noting that generating neutrino masses generally leads to the opening of lepton flavour violating processes, involving, for example, transitions from electronic to muonic states, which are strongly constrained by very precise experimental data. We take this as a motivation to consider models featuring new contributions to the lepton sector, while the hadronic sector may be less relevant to explain the above shortcomings of the SM.
One potential class of such frameworks are the so-called scotogenic models, originally aiming at the simultaneous explanation of neutrino masses and cold dark matter. The two are linked in the sense that neutrino masses are generated radiatively through particles and couplings from the dark sector. After the first works on minimal scotogenic realisation [8][9][10][11][12], more complex models have emerged in recent years, studied mainly at the level of dark matter phenomenology and lepton flavour violating observables, see for example [9,[13][14][15][16][17]. A general classification of viable scotogenic frameworks can be found in Ref. [18]. Recently, two of us have studied a particular framework, the so-called 'T1-2-A' model, where the SM is extended by a scalar doublet, a scalar singlet, a fermionic Dirac doublet, and a fermionic singlet [19,20]. This setup features a very predictive dark matter phenomenology, especially for fermionic dark matter [20] and, in principle, it can explain the recent measurements of the anomalous magnetic moment of the muon. However, the corresponding region in parameter space is excluded by the the constraints on flavour violating decays of the leptons. Moreover, the 'T1-2-A' setup fails to accommodate leptogenesis as an explanation for the observed baryon asymmetry.
In the present work, we extend the 'T1-2-A' setup by adding an additional fermionic singlet. The additional degrees of freedom allow for the successful generation of three nonzero neutrino masses, while the couplings can be chosen such that the deviation related to the anomalous magnetic moment of the muon can be accommodated while being consistent with the bounds on flavour violating lepton decays. Moreover, this set-up allows for an explanation of the observed baryon asymmetry via leptogenesis, as we will demonstrate below. Our paper is organised as follows: in Sec. 2, we start by introducing the scotogenic model under consideration. Sec. 3 is then devoted to a discussion of the anomalous magnetic moment within our model and the required coupling hierarchies. In Sec. 4, we discuss the applied constraints and the observables of our interest. The results from our Markov Chain Monte Carlo (MCMC) analysis are presented in Sec. 5, where we analyse the parameter space, charged lepton flavour violating decays, dark matter observables and discuss colliderrelated aspects. In Sec. 6, we present our findings concerning leptogenesis as a means to generate the baryon asymmetry. Conclusions are drawn in Sec. 7.
Model
We consider a scotogenic framework extending the SM by two Weyl fermion SU (2) L doublets, Ψ 1 and Ψ 2 , two Majorana fermion singlets, F 1 and F 2 , a scalar SU (2) L doublet, η, and a real scalar singlet, S. In addition, we assume a Z 2 -symmetry under which the SM fields are even and the additional ones are odd. This ensures neutrino mass generation at the one-loop level together with the existence of a stable dark matter candidate. We note for completeness that the additional fields are singlets with respect to SU (3) C .
The new field content including their respective representations under SU (2) L ×U (1) Y is summarised in Tab. 1. In the following subsections, we briefly summarise the different sectors, present the corresponding Lagrangian, and set the notation.
The scalar sector
The scalar sector of the model consists of the SM Higgs doublet H, an additional real singlet S, and a SU (2) L doublet η. Their charges are given in Tab. 1. Upon electroweak symmetry breaking (EWSB), which involves the Higgs doublet only, the doublets can be expanded into components according to Here, h 0 is the SM Higgs boson, G 0 and G + are the would-be Goldstone bosons, and v = √ 2 H ≈ 246 GeV denotes the vacuum expectation value (VEV). Moreover, η 0 and A 0 are CP -even and CP -odd neutral scalars, and η + is a charged scalar. Neither S nor η may obtain a VEV due to the assumed Z 2 -symmetry.
The scalar potential of the model is given by (2. 2) The first two terms are the SM part related to the Higgs doublet H. We assume here for simplicity that λ η and α are real. After EWSB, the usual minimisation relation in the Higgs sector, allows to eliminate the mass parameter M 2 H in favour of the Higgs self-coupling λ H . Imposing m h 0 ≈ 125 GeV leads to a tree-level value of λ H ≈ 0.13.
The mass matrix of the neutral scalars in the basis {S, η 0 , A 0 } reads as after EWSB. Here, we have defined λ L,A = λ η + λ η ± λ η . We order the mass eigenstates as follows The corresponding squared masses at tree-level read as where m φ 0 1 < m φ 0 2 . Finally, the tree-level mass of the charged scalars is given by
The fermion sector
The Lagrangian for the additional fermions presented in Tab. 1 reads with i, j = 1, 2 and k = 1, 2, 3. L k and e c k denote the left-handed and right-handed leptons, respectively. Moreover, we have introduced the notationφ = iσ 2 φ * for φ = H, η. Without loss of generality we work in a basis where M F 12 = 0. Moreover, we impose |M 1 | ≤ |M 2 |, where we have simplified the notation by setting M i = M F ii for i = 1, 2. Finally, we adopt the phase-convention Ψ 1 = (Ψ 0 1 , Ψ − 1 ) and Ψ 2 = (Ψ + 2 , −Ψ 0 2 ) for the SU (2) L doublets. After EWSB, we have a charged heavy Dirac state Ψ + with mass M Ψ and four neutral Majorana fermions. Their mass matrix is given in the basis This matrix is diagonalised by a unitary matrix U χ according to
Neutrino masses
The main difference with respect to the T1-2-A model discussed in Refs. [19,20] is the extra copy of the singlet fermion. Although the mechanism is very similar, in this case, due to the extra degree of freedom, the neutrino mass matrix has rank three instead of two, and consequently, all three active neutrinos will acquire a non-zero mass. After EWSB, rotating to the mass eigenbasis, a Majorana mass term is generated at the one-loop level via the diagram where the neutrino mass matrix can be expressed as This is a well-known structure common to most of the scotogenic models and similar to the type-I seesaw, where the matrix G contains the couplings defined in Eq. (2.8) ordered as and M L is a 3 × 3 symmetric matrix which encodes the information of the loop function, and the mixing in the neutral scalar and fermion sectors, defined in Eqs. (2.5) and (2.10), respectively. For completeness, we explicitly write the expressions for the components of M L , where k = 1, 2, 3, 4 and n = 1, 2, 3. Moreover, the loop integrals are encompassed in the functions (2.20) We make use of the Casas-Ibarra parametrisation [21,22] to express the couplings in Eq. (2.13) in terms of neutrino oscillation data [1,23] according to where D L is the diagonal matrix defined by and D ν is the diagonal matrix containing the neutrino mass eigenvalues. Finally, U PMNS is the usual unitary matrix relating neutrino flavours to their mass eigenstates, assuming that the charged leptons are already in their mass eigenbasis. Moreover, as even a precise knowledge of all the parameters and observables in M L and M ν does not univocally define G, the extra degrees of freedom are encoded in the orthogonal 3 × 3 matrix R. This matrix can be parameterised as depending on three complex angles θ i with s i = sin θ i and c i = 1 − s 2 i . Note the importance of these degrees of freedom, since they modify the flavour structure of the Yukawa matrix. The latter is of great relevance when considering charged lepton flavour violation and the anomalous magnetic moment, as we will show in the next sections.
The anomalous magnetic moment of the muon
As mentioned in the introduction, a deviation persists between the SM prediction and the experimental value of the anomalous magnetic moment of the muon, defined as a µ = (g − 2) µ /2. The discrepancy amounts to a significance of 4.2σ, and leads to the following range for the new physics contribution to a µ [4], 1 In general, every scotogenic-like model will contribute to the anomalous magnetic moment of leptons at one-loop level. These contributions can be encoded in the effective electromagnetic (EM) dipole moment operator c ij R¯ i σ µν P R j F µν , coming from the operator O eB ≡ (Lσ µν e R )HB µν before EWSB [27]. The diagonal part of the Wilson coefficient c R is related to (g − 2) and the electric dipole moment (EDM), and the off-diagonal part is associated with charged lepton flavour violating (cLFV) processes [28]. For more details see Appendix A.
The contribution to (g − 2) µ is generally suppressed by the muon mass. Moreover, as the EM dipole operator connects the left-and right-handed parts of the leptons, while neutrino mass models contain usually only couplings of BSM fields to the left-handed components, such an operator will always be chirally suppressed. Consequently, new physics explanations of (g − 2) µ are pushed towards low mass scales and large non-perturbative couplings. A possible way out is to add new fields outside the neutrino mass mechanism, that couple to µ R , in order to enhance the contribution to (g − 2) µ and be able to fit the anomaly within a phenomenologically reasonable parameter space [29]. Note that this situation is realised in the T1-2-A model [18], and consequently its extension under consideration here. In both models, there is a coupling g R of the lepton singlets to η and Ψ 1 , see Eq. (2.8). The latter two also participate in the generation of the neutrino mass matrix. Note that in this way no extra BSM field is needed on top of those involved in the neutrino mass mechanism to have a chirally enhanced contribution to (g − 2). The new leading contributions to the anomalous magnetic moment are shown in Fig. 1.
We note for completeness, that in the original T1-2-A framework the coupling matrix G, see Eq. (2.13), is a 2 × 3 matrix, where the relative sizes between the various entries are fixed by the neutrino mixing angles up to one complex angle. An explanation of the muon (g −2) in this model implies large couplings which in turn lead to too large flavour violating decays of the leptons. In our extension of this model, we have more freedom allowing us to circumvent this problem, as we will show in Sec. 5.2.
Both diagrams depicted in Fig. 1 also generate a sizeable contribution to strongly constrained LFV processes in the charged sector, in particular µ → eγ with an upper limit to its branching ratio of 4.2 × 10 −13 from the MEG collaboration [30]. Although they seem unavoidable, given that the off-diagonal part of the Yukawa matrix G is connected to the neutrino mixing (see Eq. (2.21)) there are several strategies to get a sizeable contribution to (g −2) µ while keeping charged LFV under control. For example, cancellations can be found Figure 1: Dominant one-loop contributions to (g − 2) and charged LFV processes before EWSB. Arrows indicate the flow of quantum numbers. Couplings are given for clarity, see their explicit definitions in Sec. 2. A photon should be attached to the respective charged components.
among the several independent contributions to the EM dipole operator. However, such a scenario is not very appealing, as one should reproduce a difference of more than five orders of magnitude between the diagonal and off-diagonal components of the Wilson coefficient c R [28]. Another possibility is to assume certain flavour structures for the Yukawa couplings, which suppress the off-diagonal components in favour of the diagonal [31]. Following the latter approach, we focus for simplicity on a region of the parameter space where the first diagram in Fig. 1 dominates over the second, as the flavour structure of the diagram is simpler having just two three-component Yukawa vectors involved. We extend the usual Casas-Ibarra parameterisation by the following elements so that these constraints can be easily fulfilled. To do so, we consider y 1,2 to be small and push the trilinear coupling α to larger values, i.e. we suppress the mixing in the neutral fermion sector, while enhancing the one in the neutral scalar sector. Note that, while g R is mainly free, g F and g Ψ are constrained by the fit to neutrino oscillation data, see Eq. (2.21). This means that changing y 1,2 and α not only directly modifies the dominant contributions depicted in Fig. 1, but also indirectly suppresses g F and enhances g Ψ through the neutrino fit. We are looking for a Yukawa matrix G featuring a coupling hierarchy as shown in Fig. 2. Making use of the freedom on the components of g R as well as on the remaining degrees of freedom in g ψ , stemming from the rotation matrix R appearing in Eq. (2.21), we fit the value of a BSM µ while keeping the contributions to the lepton flavour violating decays µ → eγ and τ → µγ under control.
In practice, for each point of our numerical scan, in the region of the parameter space where y 1,2 are small, we use the angles of the matrix R given in (2.23) to suppress the dominant contribution to cLFV processes while enhancing the diagonal contribution associated to (g − 2) µ 2 . Ultimately, we fit the experimental value of the muon (g − 2) within its limits by solving for g 2 R . With this method, we obtain for each point the correct anomalous 2 Actually, solving for two angles is sufficient, such that one angle is left as a free parameter and scanned over for generality. Figure 2: Hierarchy of the Yukawa matrix G of Eq. (2.13) that generates the neutrino masses. This hierarchy is realised for y 1,2 small and solving for the angles defining the rotation matrix R, such that (g − 2) µ is maximised while charged lepton flavour violating decays are kept under control. See text for more details.
magnetic moment, while fulfilling the current limits for charged LFV decays. We note that this is not the most general approach, as we are selecting a specific region of the parameter space. However, given the complexity of the system, we were not able to find a more general approach that could deliver results within a reasonable computing time. Moreover, the parameter space discussed previously is also preferred for low-scale leptogenesis, as we will show later in the paper.
Constraints and observables
In the spirit of the analysis presented in Ref. [20], we use an MCMC scan [32] based on the Metropolis-Hastings algorithm [33,34] to efficiently scrutinise the parameter space of the model in view of the numerous constraints presented above. This technique, especially powerful for high-dimensional spaces, explores the parameter space iteratively, restricted by a set of constraints through the computation of the likelihood. We refer the reader to Ref. [20] for further details about the implementation of the MCMC.
In addition to implicitly satisfying the constraints from neutrino masses and the anomalous magnetic moment of the muon (see above), we explicitly impose constraints coming from various sectors, comprising dark matter, lepton flavour violating processes, and the mass of the Higgs boson. All constraints are listed in Tab. 2 together with their associated experimental limits, as well as applied uncertainties applied in our study. Note that for the Higgs mass m H and the dark matter relic density Ω CDM h 2 , the theory uncertainties 3 [35][36][37][38][39] are larger than the experimental ones, and, consequently, we apply the theory uncertainties. We also ensure that the lightest Z 2 -odd particle, is electrically neutral in order to have a viable, stable DM candidate and to avoid stable charged relics, essentially excluded in the mass range of 1, 10 5 GeV [40][41][42][43].
In total, our MCMC scan runs over 20 free parameters: Eight couplings in the scalar potential, six Yukawa couplings, five masses, the lightest neutrino mass, and the unconstrained angle of the rotation matrix R, which is assumed to be real. The ranges of the scalar and fermion mass parameters are chosen such, that they could be in principle in the reach of high luminosity LHC. The exception is those for the singlet fermions for which we Table 2: Constraints considered in the MCMC analysis: Higgs mass and cLFV observables [40] and DM relic density [2]. The limits from XENON1T [44] to the direct detection crosssection are also taken into account. Note that the errors given for m H and Ω CDM h 2 are not the experimental uncertainties but estimates of the theoretical ones, see text for details. We implement 1σ intervals using a Gaussian function and 90% C.L. for the limits via a single-sided Gaussian, allowing for a 10% uncertainty.
allow a larger range. The reason is that this model can also explain the baryon asymmetry of the Universe via the leptogenesis mechanism as discussed in Sec. 6. The sign of the quartic couplings λ H , λ 4S and λ 4η is fixed from the requirement that the scalar potential is bounded from below. We vary all parameters on a logarithmic scale and assign possible signs on a random basis.
The scan is performed over the parameter ranges specified in Tab. 2 with 75 chains of 200 points each. The first 35 points of each chain have been deleted in order to keep only the points for which the chains were already well initialised, i.e. presenting a phenomenologically viable likelihood value. For the scan we implemented the model in SARAH [45] and generate code for SPheno [46], FlavorKit [47] and micrOMEGAS [48]. The former two compute the mass spectrum and low energy observables, while the latter evaluates the DM relic density and the direct detection (DD) cross-sections.
Assuming a Gaussian likelihood of uncorrelated observables, the likelihood associated with a given parameter point n is computed as where the product runs over the imposed constraints and individual likelihood value L n i associated to each constraint. In the case of a two-sided limit, i.e. for the Higgs mass m H and the DM relic density Ω CDM h 2 , the likelihood is computed following Parameter Interval where O n i is the calculated value of the considered observable for the parameter point n, O exp i is the associated experimental value given in Tab. 2, and σ i is the associated uncertainty. The likelihood computation for upper limits is implemented as a step function, which is smeared as a single-sided Gaussian with a width of 10% of the value corresponding to the experimental upper limit. In this case, we have L n i = 1 if the predicted value O n i is below the upper limit O exp i . In the opposite case, L n i is computed according to Eq.
Results
In this section, we present the main outcome of our MCMC analysis. We shall first show the resulting parameter space for the couplings and then discuss certain observables of interest. In addition, we will discuss possibilities to test part of the available parameter space at the LHC.
Couplings
We are interested mainly in the Yukawa couplings that connect the SM particles with the new fields, i.e. g F 1 , g F 2 , g Ψ , and g R , which are all three-component vectors, see Eq. (2.13). These are relevant for neutrino masses, the anomalous magnetic moment and flavour violating decays of the leptons. Fig. 3 shows the correlations among the different components for each coupling vector, while in Fig. 4 we show the correlation of selected components with the trilinear coupling α.
We clearly see that all components of g F 1,2 behave in a similar way, with an approximate upper limit of |g i F 1,2 | 10 −3 for i = 1, 2, 3. As already explained in Sec. 3, this upper limit is due to our fit of neutrino masses together with the anomalous magnetic moment (g − 2) µ and the constraints coming from µ → e transitions. At the same time, the overall scaling behaviour of all the components of g F 1,2 is caused by the trilinear coupling α, as shown in the left plot of Fig. 4 for g F 1 . An analogous behaviour is found for g F 2 (not shown). Larger values of α imply a larger scalar mixing, which then suppresses the scale of g F 1,2 via the neutrino mass fit.
For g Ψ , as already described in Sec. 3 and depicted in Fig. 2, a specific hierarchy among its components is realised to fit the muon anomalous magnetic moment and be below the : Correlation of selected Yukawa couplings with the trilinear coupling α. The couplings g Ψ and g F 1 are connected to the trilinear couplings α through the fit of the neutrino masses, while the connection of g 2 Ψ and g 2 R with α stems from the fit of the anomalous magnetic moment (g − 2) µ . Figure 5: Results for the relevant cLFV decays with the current limits from MEG collaboration [30] and Belle [52,53] (full lines) and expected sensitivities (dashed lines) from MEGII [49], Mu3e [50] and Belle II [54]. The other decays not shown here lay below the expected future bounds.
limits of charged LFV searches. This hierarchy is linked to that of g R , as both contribute equally to these processes, see Appendix A. While both g 2 Ψ and g 2 R have to be large to fit (g − 2) µ , g 1 Ψ and g 1 R must remain small to not exceed the current limit of BR(µ → eγ). On the same grounds, g 3 Ψ and g 3 R are similarly constrained by the upper limit on BR(τ → µγ). It is worth noting that the fit of (g − 2) µ links the components of g R and g Ψ with the trilinear coupling α, as can be seen in Fig. 4 (right). As discussed in Sec. 3, the dominant contribution to (g − 2) µ and charged LFV decays comes from the left diagram in Fig. 1, proportional to α. For example, smaller values of α imply larger values of g 2 Ψ and g 2 R in order to fit the anomalous magnetic moment (g − 2) µ , as can be seen in the upper corner on the right panel of Fig. 4.
The perturbativity requirement for both the Yukawa couplings and α sets then a lower and upper limit on the trilinear coupling of roughly 30 GeV α 4 m φ 0 1 . Note, that the upper bound is actually given for α/M φ where M φ is the average masses of the scalars involved in this coupling.
Charged lepton flavour violating decays
Charged lepton flavour violating decays rank among the most stringent constraints for neutrino mass models, as fitting the neutrino mixing angles, in general, requires nondiagonal Yukawa matrices that connect also to the charged leptons and allow for transitions between different lepton flavours. While the limits to the branching ratios of these processes are already remarkable, especially for the limit on the decay µ → eγ from the MEG collaboration [30], there is a renovate interest with new experiments expected to take place in the near future, such as MEGII [49], Mu3e [50], or COMET [51], with an expected improvement on the sensitivity of even four orders of magnitude for certain processes like µ → 3e, or Belle and Belle II for the tau decays [52][53][54]. Figure 6: Histograms of the mass and nature of the dark matter candidate. The separation into fermionic and scalar dark matter candidates clearly exhibits a preference for fermionic dark matter with a mass around 1100 GeV.
Although the charged LFV decays are considered as constraints in our analysis, see Tab. 2, it is worth checking how these processes behave in the present model and exploring to which extent future experiments may restrict the parameter space. In Fig. 5 we show the branching ratios of the most relevant charged LFV decay channels for the muon and the tau, together with their current limits and future expected sensitivities. Note that the muon decays are completely dominated by the dipole contribution, i.e. the diagrams depicted in Fig. 1, while there is an important contribution from box diagrams to the tau decays τ → 3µ. This is due to the large values for g 2 Ψ and g 2 R needed to the fit of (g − 2) µ , which makes the box diagram proportional to g 3 R g 2 * Ψ g 2 * R g 2 Ψ dominate over the dipole contribution with the off-shell photon.
For conciseness, we do not show the results for the charged LFV decays of the tau to electrons. For these, we find that, in the best case, the branching ratio of τ → µe + e − is of the order of 10 −9 , just on the border of the future expected sensitivity. For the remaining processes τ → eγ and τ → 3e, we obtain branching ratios below 10 −17 , not observable in the foreseeable future.
Dark matter observables
Let us recall that the model under consideration includes three possible candidates for cold dark matter (CDM), the lightest Z 2 -odd neutral fermion χ 0 1 , the lighter scalar φ 0 1 , and the pseudo-scalar A 0 , depending on the mass hierarchies in a given parameter configuration. The dark matter relic density is taken as a constraint in our MCMC analysis, with a theory uncertainty of 10%, such that Ω CDM h 2 = 0.120 ± 0.012 [2].
Starting with the overall situation, Fig. 6 shows the obtained distribution for the DM mass, separating fermionic (χ 0 1 ) and scalar (φ 0 1 ) DM. The shown results exhibit similar behaviour to that found in Ref. [20] for the simpler T1-2-A scotogenic model. Fermionic DM dominates the model parameter space, with a preferred mass of around 1100 GeV. Scalar DM accounts for about 38% of the viable parameter points, with a preferred masses of roughly 600 to 1000 GeV. As in the T1-2-A model [20], fermionic DM is essentially doublet-dominated. This can be seen from Fig. 7 showing the split into doublet and singlet-dominated DM candidates together with the singlet content of the fermionic DM. This feature can be traced to necessary co-annihilations which occur naturally in the doublet case between χ 0 1 and χ ± and χ 0 2 due to the very small mass splitting between these states. The (co-)annihilation processes are dominated by gauge interactions similar to the case of pure higgsinos in supersymmetric models. As already mentioned, sizeable Yukawa couplings to the muons are necessary to explain the potential deviation of its anomalous magnetic moment. This gives additional annihilation channels into muons via scalars, stemming from the doublet η, in the t-channel.
In the case of a fermionic singlet-dominated state χ 0 1 , it is much harder to satisfy the relic density requirement from Planck data [10]. While singlet fermions F i are produced thermally, they can only annihilate via the Yukawa g F i or through the mixing with Ψ 1,2 , both small because of the charged LFV constraints and our fit of (g − 2) µ .
In the case of scalar DM, the doublet-like states also dominate the phenomenologically viable parameter regions with preferred masses around 700 GeV. For the same reason as in the case of fermionic DM discussed above, doublet-like DM is preferred, as it allows meeting the relic density constraint through efficient co-annihilations between the different doublet states mediated through gauge interactions. Note that the associated mass peak is wider here, as the mass splitting in the doublet can be larger in the scalar case as compared to the fermionic one.
Finally, we note that we do not find any pseudoscalar DM in this model in contrast to Ref. [20]. The reason can be easily understood by inspecting the mass matrix given in Eq. (2.4). The mass splitting between the scalar doublet component and the pseudoscalar is given by 1 2 λ η v 2 and the mixing between the doublet scalar and the singlet is given by |αv|. For a pseudoscalar DM candidate one needs the doublet to be lighter than the singlet. As can be seen in Fig. 9 the mixing between the scalar components is always larger than the mass splitting between scalar and pseudoscalar implying that the scalar will be lighter than the pseudoscalar. Note that this feature is less pronounced in the T1-2-A model studied in Ref. [20], as in their study the trilinear coupling α is rather restricted and, in Figure 10: Spin-independent direct detection cross-section versus the mass of the DM in the scalar case, differentiating between singlet (blue) and double-like (orange). The current limit from XENON1T [44] and the future limits from XENONnT [55] and DARWIN [56] are given, as well as the corresponding line for the neutrino floor [57]. The fermionic DM case is not shown as the DD cross-section lays below the neutrino floor, around 10 −60 cm 2 . See text for details. addition, the constraint from (g − 2) µ has not been taken into account.
In Fig. 10 we show the results for the spin-independent direct detection cross-section for the scalar DM case. As already said in Sec. 4, the XENON1T [44] limit is taken as a constraint, such that points not satisfying the current limits are excluded. Most of the remaining viable points can be tested by future experiments like DARWIN [56]. On the other hand, we do not find any constraints in the case of fermionic dark matter. The reason is that our scan requires the modulus of the relevant Yukawa couplings, |y ij |, to be smaller than 10 −4 , suppressing the dominant contribution and pushing the cross-sections well below the neutrino floor.
We note that, in both cases, the direct detection cross-section is mainly dominated by Higgs exchange, since we actually have an inelastic dark matter candidate. Inelastic dark matter refers to DM candidates with a mass splitting between the CP -even and CP -odd components of a neutral state. As the Z-boson couples always between the CP -even and CP -odd components, for the part of the parameter space where the mass splitting between these two states is larger than the kinetic energy of the DM, the contribution from Z channel to the DD cross-section is kinematically forbidden. Since the coupling of the DM to the Z-boson has typically gauge strength, if it is active, then it will be excluded by direct detection. We note here that this contribution was added by hand as micrOMEGAS does not include inelastic channels. Nevertheless, this excluded very few points, as in practice given Figure 11: Mass of the charged fermion χ ± as function of the mass of the neutral fermion in the scenarios with a fermionic DM candidate.
the typical DM average relative velocity, the mass splitting needs to be only larger than O(100) keV to kinematically close the Z-channel [58].
Collider aspects
We have seen in Sec. 5.3 that the preferred range for fermionic DM is between 700 GeV and 1.4 TeV with most of the points having a mass between 1 and 1.2 TeV. Moreover, they are essentially always SU (2) L doublets with the same quantum numbers as higgsinos in supersymmetric models. From Ref.
[59] we can thus infer that the cross-section σ(pp → χ + χ 0 ) at the LHC with √ s = 14 TeV varies between 0.43 fb (1 TeV) and 0.14 fb (1.2 TeV) assuming that both states have the same mass. The signatures depend on the mass difference between the charged and the neutral state which is displayed in Fig. 11. In case the DM candidate stems from SU (2) L , one finds only a small difference and the dominant decay mode is χ + → π + χ 0 . We can infer from the corresponding supersymmetric scenarios that the LHC will not be able to discover the corresponding states, see for example [60] and references therein. In case the DM candidate is singlet-like we find mass splittings between 10 and 150 GeV. The main decay modes are via off-shell neutral scalars The interesting point is that the requirement of explaining the (g − 2) of the muon implies that nearly all cases one has a muon in the final state. While this is a potentially interesting final state, one should keep in mind that the case of singlet fermionic DM seems to be rarely realised in this model, see Sec. 5.3.
In scenarios with a scalar DM candidate, the situation looks somewhat more promising. We see from the left of Fig. 12 that in a large portion of the corresponding parameter space the charged fermion has a significantly larger mass than the DM candidate. Note, that the SU (2) L have a similar mass as the charged fermion. Both will decay into SM-leptons and a Z 2 -odd scalar, where the j corresponds mainly to the mass eigenstates which are dominantly SU (2) L fermions. As above, we expect the decays into muons to be dominant. Thus, the signal will dominantly consist of muons in combination with missing transverse energy. We note for completeness, that one has of course also direct production of scalar doublets. However, already with a mass of about 700 GeV the cross-section is about 0.1 fb as can be inferred from the production of left-sleptons [59] which have the same quantum numbers. We see from the left of Fig. 12 that m η ± > ∼ 700 GeV in scenarios with a sizable mass splitting in the scalar sector. Thus, the direct production will hardly contribute to an LHC signal for this model.
Leptogenesis
This model features heavy Majorana fermions, lepton number violation and complex couplings which are all ingredients for leptogenesis. In this section, we investigate to which extent one could also explain the observed baryon asymmetry of the Universe via the leptogenesis mechanism 4 , in the region of parameter space discussed in the previous section. We present here the main results and collect in Appendix B further details.
We are in a region of parameter space where the couplings y ij , which determine the mixing between the SU (2) L doublet and singlet fermions in Eq. (2.9), are small. Thus, in practice, only the singlet-like fermions will contribute to a possible lepton asymmetry.
These states decay dominantly according to F i → ηL , η †L and F i → HΨ , H †Ψ . (6.1) The decays will occur at a mass scale which is significantly above the scale of electroweak symmetry breaking and, thus, it is more convenient to work in the gauge basis. At tree-level the former is governed by the couplings g i F and the latter by the couplings y ij , where we have neglected the masses of the SM leptons and Higgs boson. The asymmetry is generated at the one-loop level [5] by the diagrams displayed in Fig. 13. Similar to the type-I seesaw model [6,61], there are the typical wave-function and vertex diagrams depicted in the upper row involving the other singlet fermion in the loop. Moreover, there are additional possible vertex diagrams, which are depicted in the lower row. These involve additional couplings like g k R and α which turn out to be important. The CP asymmetry parameters i are given by Explicit expressions for the various contributions to i in this model may be found in Appendix B.
The MCMC yields that in general max(|g k F i |) max(|y ij |) in most of the parameter space and in the remaining part, they are of equal size. Thus, we will base our estimations in the text below on g k F i , but we stress that all parameters were properly taken into account in the numerics.
An important question is to which extent a generated asymmetry gets washed out in the thermal history of the Universe. To answer this, one defines the decay parameters with H being the Hubble parameter. The weak washout regime is realised for K i 1, the strong washout regime for K i 3, and an intermediate regime in between these values [6,7]. In the parameter space discussed in Sec. 5, we are always in the strong washout regime, as can be seen from the second part of Eq. (6.5). As an example we display K 1 in Fig. 14.
The very large values of the decay parameter allow us to neglect washout through scattering processes, as the inverse decays are the dominant sources of the washout. Thus, we may treat leptogenesis as competition between decays and inverse decays [6], with the B − L asymmetry being generated when the inverse decays freeze out and the surviving number density N F i of the F i decays.
We solve numerically the corresponding Boltzmann equations given in Appendix B with the following initial conditions at high temperatures, and track these number densities down to lower temperatures. As mentioned earlier, we do not assume a large hierarchy between the masses of F 1 and F 2 . Consequently, we take into account both contributions. Note that the inverse decays of the singlet fermions must freeze-out at a temperature before the sphalerons fall out of equilibrium (T ∼ 100 GeV), otherwise, the B − L asymmetry generated in its decay is not converted into baryon asymmetry. As we are in the strong washout regime, we can estimate the freeze-out temperature for the inverse decays as [62], This gives an approximate lower bound on the mass of the lightest decaying singlet M i 2 TeV for which the sphalerons remain active. At lower temperatures, T min{M 1 , M 2 }, we obtain the final B − L asymmetry, N fin.
B−L , which is converted to the baryon-to-photon ratio, η B , via the sphaleron process [63,64] using C sphal. = 8/23 is the sphaleron conversion factor [65], g 0 * = 43/11 the present value of the number of relativistic degrees of freedom (DOF) and g * the relativistic DOF of the full model at high temperatures. Figure 14 depicts the final baryon-to-photon ratio obtained from solving the Boltzmann equations, using the sets of parameters mentioned before, against the mass of the singlet fermion driving leptogenesis. We observe that in contrast to the typical case of strong washout in the type-I seesaw model and the 'classic' scotogenic model [66], the final value of η B has a tendency to decrease with increasing M 1 . Besides, we also find that the CP asymmetries generated in the decays of lighter F 1 are much larger. We note large contributions to i come from the loop diagrams in the lower row of Fig. 13.
In the minimal scotogenic model, it is possible to express the decay parameter (6.5) as a function of the lightest neutrino mass and the λ η parameter [66]. In our model, the link is not so direct due to the additional couplings and particle states present. Moreover, the requirement to explain the potential deviation of the (g − 2) of the muon while being consistent with the bounds on the LFV lepton decays requires, for example, larger values of the trilinear coupling α. As can be seen from the left plot in Fig. 14, the decay parameter decreases with increasing |α|. Note that this coupling does not appear directly in the calculation of the decay parameter (6.5), but reduces the value of the coupling g F through the neutrino fit, which in turn decreases the tree-level decay width and, hence, lowers the value of K i .
We also see from the right plot in Fig. 14 that only a few points are able to explain the observed baryon asymmetry. Investigating the details of the required parameter combinations to obtain the correct baryon asymmetry would be highly interesting, but is beyond the scope of this paper and left for a future publication. However, one feature that we have observed is that nearly all points present a fermionic doublet as dark matter candidate. Only one out of 25 points in parameter space contains a singlet scalar dark matter candidate.
Conclusion
We have investigated a scotogenic model with a very rich phenomenology. We have presented a complete analysis of the associated parameter space, taking into account constraints from the Higgs sector, the neutrino sector, lepton flavour violating processes, the muon anomalous magnetic moment and dark matter observables.
Neutrino data governs the couplings of the new particles to the left-handed leptons and the requirement of explaining the observed deviation of the anomalous magnetic moment of the muon (g − 2) requires that couplings to muons are sizeable. This in turn implies that the decays µ → eγ, µ → 3e, τ → µγ and τ → 3µ are in the reach of upcoming experiments in a sizeable part of the parameter space.
We have found that the dark matter relic density constraint leads to a preference of fermionic dark matter candidates, in most cases the neutral component of an SU (2) doublet in the mass range 1 to 1.2 TeV. Scenarios featuring a scalar dark matter candidate can be tested by future direct detection experiments like XENONnT or DARWIN, whereas the corresponding cross-sections for fermionic dark matter are well below the so-called neutrino-floor.
We have also briefly discussed the LHC phenomenology in the relevant parameter space. The requirement of explaining the observed deviation in (g −2) leads to a preference for decays into final states containing muons. In particular, in case of a scalar dark matter candidate, we expect muons plus missing transverse energy as dominating signal at the LHC. This signal is also expected in case of supersymmetric models due to the decays of the so-called smuons. However, in our case final states with other leptons or jets in combination with missing transverse energy will be (strongly) suppressed. In case of fermionic dark matter, the mass differences are so small that the charged fermion will decay into a pion and the neutral fermion. The discovery of this final state will be very challenging at the LHC as the required mass imply a relatively low cross-section and, thus, it is likely that the LHC will not be able to observe these particles even in the high luminosity phase.
Finally, we have seen that the available parameter space gets severely constrained if one requires in addition an explanation of the observed baryon asymmetry of the Universe via leptogenesis. We have found that nearly all viable points in the parameter space feature a fermionic dark matter candidate. A detailed analysis of the features of the relevant part in the parameter space will be presented in a future work.
Acknowledgments
The work of M. Sarazin A New contributions to the electromagnetic dipole moment operator In this appendix, we collect the additional contributions to the Wilson coefficients of the dipole operator using the notation of Ref. [28]. The anomalous magnetic moment, as well as other observables like the electric dipole moment (EDM) and the charged lepton flavour violating (cLFV) decays, is directly connected to the electromagnetic dipole moment operator, i.e.
As already explained in Sec. 3, the diagonal of the Wilson coefficient c R is linked to the (g − 2) and the EDM by The real part of the Wilson coefficient will contribute to the anomalous magnetic moment (g −2), whereas the imaginary part is a contribution to the electronic part, the EDM. With this, the anomalous magnetic moment is defined as, On the other hand, cLFV decays can be computed directly from Eq. (A.1), for which their branching ratio is given by, valid for m i >> m j and where Γ i is the total decay width of i . In a general framework, we can consider a coupling between new fermions Ψ, new scalars Φ, with the Standard Model lepton i . In this scenario, we have the Lagrangian, The associated Wilson coefficients are given by, where Q is the electric charge of the fermion in the loop. The functions f, g,f ,g are defined as, In order to compute c R , we need to determine the vertices Γ i L and Γ i R . In our model, as described in Sec. 2, there are two contributions depicted in Fig. 1. The Wilson coefficient c R is then the sum of these two contributions.
For the first diagram (left) in Fig. 1, after EWSB the fields in Eq. (A.5) correspond to Ψ ≡ χ + and Φ ≡ φ 0 k , with couplings We note here that for simplicity we assumed a sum over the new fields Ψ and Φ in Eq. (A.6).
Here, a sum over the index k = 1, 2, 3 must be performed when computing c R , taking into account that several scalars participate in the diagram, i.e. one should replace M Φ by m φ 0 k . Also, given our definition of the neutral scalar basis and their mixing as defined in Eq. (2.5), where for conciseness we included the pseudo-scalar A 0 , here the third φ 0 eigenstate should be considered as φ 0 3 ≡ A 0 . For the second diagram (right) in Fig. 1, after EWSB Ψ ≡ χ 0 k and Φ ≡ η − , with couplings Again, a sum over the index k = 1, 2, 3, 4 must be performed in Eq.
B Details on the calculation of the baryon asymmetry
We collect here details of the calculation of the baryon asymmetry of the Universe which has been presented in Sec. 6. We largely follow the convention used in [6].
B.1 Boltzmann equations for leptogenesis
As a starting point, we write down the Boltzmann equations for thermal leptogenesis, which track the number densities per co-moving volume N F i and N B−L of the singlet fermions and the B − L asymmetry. Here, we have defined the dimensionless variables z i = M i /T where T is the temperature. As the variables are related by z 2 = z 1 (M 2 /M 1 ), we solve the above equations in terms of z 1 . Here we take the contributions of both singlet fermions into account as we do not necessarily have a large mass hierarchy between the two states, see e.g. Refs. [7,67]. The quantities K 1,2 (z i ) refer to the modified Bessel functions of the second kind and N eq.
where M Pl = 1.22 × 10 19 GeV is the Planck mass. The quantity g * refers to the effective number of relativistic degrees of freedom. With the additional particle content, we have g * = 122.25 compared to the SM value of g SM * = 106.75 [68]. We solve the set of coupled Boltzmann equations (B.1) and (B.2) numerically with the following initial conditions at T = 10 5 M 2 implying z 1 1 The sphaleron conversion factor enters the calculation of the baryon asymmetry which is given according to Ref. [65] as where n D is the number of scalar SU (2) L doublets in the model.
B.2 Computation of i
Here we collect the expressions for the various diagrams that contribute to the CP asymmetry parameter i : Evaluation of the Dirac traces and the required Passarino Veltman reduction of the loop integrals was carried out using FeynCalc [69][70][71]. The imaginary part of the B 0 function is given by where λ(x, y, z) = x 2 + y 2 + z 2 − 2xy − 2yz − 2zx is the Källén function and Θ(x) is the Heaviside-step function, which enforces the fact that the imaginary part exists only when particles in the loop can go on-shell. The imaginary part of the C 0 function was calculated numerically using PackageX [72]. We indicate all possible sums over the particles in the loop and have summed over the final leptonic states, i.e. we do not consider flavour effects. Finally, we have defined Im (g j F i ) * (y SM jk ) * (g k R ) * y 2i 1 + M 2 | 11,991 | 2023-01-20T00:00:00.000 | [
"Physics"
] |
Sustainable Repellent Coatings Based on Renewable Drying and Nondrying Oils
Contamination of surfaces can cause loss of performance in a variety of applications. Bioinspired coatings based on the lotus or pitcher plants provide surface topographies that create superhydrophobic or slippery features with self‐cleaning properties. However, typical fabrication procedures often involve potentially toxic chemicals, perfluorinated compounds, nondegradable polymers, and energy‐intensive methods, with negative consequences for the environment. Here, a sustainable coating process based on renewable materials to prepare superhydrophobic and liquid‐infused coatings with minimal environmental impact is presented. A scalable spray coating protocol is used. Synthetic liquid and polymeric materials are substituted with natural drying oils, i.e., oils that react with ambient oxygen and cure to solid materials, as polymeric binder in which silica particles are partially embedded. The self‐cleaning characteristics against aqueous contaminations are investigated as a function of the drying oil used as binder. The assessment of the mechanical stability reveals the advantage of an underlying “primer layer” of the pure oil. Furthermore, it is demonstrated that oils from renewable sources can act as lubricants for the creation of slippery surfaces. The efficiency of such sustainable slippery coatings in reducing concrete adhesion points toward their applicability in real world scenarios.
chemicals were found to accumulate in blood, kidneys, and liver and show long half-life times in humans. [47][48][49][50] In addition, binder materials are often composed of nondegradable, synthetic polymers [51][52][53] and fabrication processes can involve energy intensive treatments and annealing steps. [53][54][55][56] Efforts to produce more environmentally benign coatings are made [57] by, for example, using waterborne methods that can be conducted at ambient conditions to chemically modify materials, [29] employing natural lubricants from renewable resources, [58,59] or utilizing biodegradable and recyclable matrices as the basis. [60] However, there is still the need for methods that combine the scalable fabrication, efficient performance and sustainability of both, the process, and the coating.
We recently developed a scalable fabrication method aimed at a reduced environmental impact. [61] Our method is based on aqueous dispersions of a nontoxic polymeric binder and commercially available, hydrophobic fumed silica, which are processed from aqueous dispersions by spray coating at room temperature. Upon drying, the coating self-organizes into a hierarchical surface structure that shows superhydrophobic behavior without the need of any further surface functionalization. Additionally, the textured surface can be converted to a SLIPS coating by infiltration of a lubricant in a second spray coating step. Despite the environmentally friendly process conditions, this approach relies on synthetic polymethacrylates as binder and silicone oil as lubricant.
Here, we aim to provide a coating with improved sustainability by replacing all synthetic materials with renewable and abundant resources. Natural drying oils are promising candidates as substitutes for the synthetic polymeric binder. Such oils are characterized by a high content of (poly-)unsaturated fatty acids and can cure into a solidified network at ambient conditions in an oxidative polymerization process called autoxidation. [62,63] Natural nondrying oils, in contrast, provide a renewable alternative as lubricant to the commonly used synthetic silicone oil. Together, these replacements enable the formation of superhydrophobic and liquid-infused repellent coatings based on sustainable materials, providing a versatile coating system to be used with small environmental impact.
Preparation of the Coating System Based on Drying Oils
Our coating system employs commercially available hydrophobic fumed silica particles to create the required combination of topography and low surface energy needed to form Cassie-Baxter wetting states with water. [61] A polymeric binder is necessary to provide stable adhesion of these silica particles to the surface. In addition, this binder must be hydrophobic to ensure compatibility with the hydrophobic particles but should exhibit a higher surface energy than the silica particles (γ = 15.5 mJ m −2 ) [64] to promote the formation of the hierarchical micro-and nanostructures where the silica particles protrude out of the polymer film. [61] For our sustainable coating, we use natural drying oils as the binder component. In general, a higher degree of unsaturated bonds and the presence of conjugated bonds reduces the curing time. [63,65] Nevertheless, autoxidation at ambient conditions is a rather slow process taking several days. [62,66,67] In general, there are three possibilities to shorten the curing time of drying oils: chemical catalysis by addition of drying agents (i.e., siccatives), [63] physically increasing the reaction rate using elevated temperatures or UV light, [62] or pre-polymerizing the oil. [68] Most commercially available siccatives contain cobalt carboxylates, which are under toxicological investigation and suspected to be carcinogenic and thus not suitable for an environmentally friendly approach. [69] A natural alternative used as siccative is colophony, a solid resin obtained from coniferous trees. It can be combined with oils upon heating, leading to the formation of a chemically crosslinked hybrid compound. [70] The heat treatment additionally pre-polymerizes the oil molecules and hence leads to enhanced and faster crosslinking during the curing process. [70][71][72][73] Here, we focus on three drying oils differing in their molecular composition, degree of unsaturated bonds and hence their crosslinking abilities; namely tung oil (having a surface tension of γ = 33.1 mJ m −2 ), [74] linseed oil (γ = 31.3 mJ m −2 ), [74] and a mixture of linseed oil and colophony. The latter is well known to lower the surface energy of water [75] and increase the hydrophobicity of blends with polymers. [76] Therefore, we assume a decrease of the surface energy for our mixture compared to pure linseed oil. Tung oil has a high content of conjugated triple unsaturated α-eleostearic acids and therefore exhibits shorter curing times than linseed oil, which mainly consists of nonconjugated di-unsaturated linoleic and triple unsaturated linolenic fatty acids. [63] Due to the long curing times, up to several days, at ambient conditions, we chose to accelerate the curing by an UV light treatment with a wavelength of λ = 365 nm (curing times of the different oils used as binders at ambient conditions and under UV light can be found in Table S1, Supporting Information). However, we note that the curing at ambient conditions is generally also possible to further reduce energy consumption in the coating process. Figure 1 schematically illustrates the coating process. First, an aqueous dispersion containing emulsion droplets of binder and silica particles is formed (Figure 1a). To obtain the desired surface structure with a rough topography formed by the protruding silica particles, we tested weight ratios between 1:1 and to 1:2 of binder and silica particles. Lower silica content prevents the formation of a surface topography while higher contents compromise the adhesion to the substrate. [61] To disperse the silica particles in the binder, i.e., the drying oil, in the required high concentrations, the addition of an auxiliary solvent was needed. We chose tert-butyl acetate, an environmentally benign [77,78] solvent that shows a similar evaporation behavior as the continuous phase water. Therefore, the homogenous drying and distribution of the surface coating and the formation of a porous network created by the silica particles and the polymeric binder at ambient conditions is promoted. It is important to ensure there is no solvent or water left before curing the drying oils to produce a sufficiently crosslinked polymeric network. [79] The formed aqueous emulsion is subsequently spray-coated in an automated setup, providing a scalable process to homogeneously coat large area substrates (Figure 1b). After the deposition onto glass substrates in one or www.advmatinterfaces.de multiple spray coating steps, the auxiliary solvent and the continuous water phase evaporate, producing the targeted surface morphology. The UV irradiation enhances the curing of the natural drying oil, thus consolidating the structure to form a SHS. In an additional spray coating step, the structures can be infiltrated with a lubricant (i.e., nondrying oils) and SLIPSs are generated.
Preparation of Superhydrophobic Surfaces
We first evaluated the formation and properties of SHSs prepared with the three natural drying oils as binder materials, using different mixtures of silica and drying oil ( Figure S1, Supporting Information). The best results were formed with an oil to silica ratio of 1:2 ( Figure 2).
The spray coating process ( Figure 2a) formed a homogeneous surface coating on the glass substrate ( Figure 2b). We used water contact angle (CA) and contact angle hysteresis (CAH) measurements to assess the repellency properties of the formed coatings as a function of the applied oil and the number of spray cycles. High CAs around 150° formed immediately upon the first spray coating cycle, regardless of the used oil ( Figure 2c). The low standard deviations of the CAs indicate a uniform hydrophobic behavior across the entire sample. Coatings with linseed oil and tung oil also showed low CAHs below 5° directly from the first spray cycle, indicative of superhydrophobic properties. The coatings prepared Figure 1. Schematic illustration of the sustainable fabrication process of SHSs and SLIPSs. a) Preparation of the dispersion-based coating system with water as the continuous phase, containing binder, i.e., drying oils, structuring components, i.e., hydrophobic fumed silica, and an environmentally benign auxiliary solvent, i.e., tert-butyl acetate. First, the drying oil is dissolved in the solvent. Afterward, fumed silica particles are added, and the mixture is dispersed in water. b) Formation of the coating by spray coating. After the spray coating step, water and solvent evaporate, and the coating is cured by an UV-light treatment to form SHSs. An additional spraying step with a lubricant, i.e., silicone oil or nondrying oils, prepares SLIPSs.
Figure 2.
Fabrication of SHSs. a) Schematic illustration of the SHS deposited via spray coating. b) Exemplary macroscopic image of a coating derived from a system containing tung oil as polymeric binder and an oil to silica ratio of 1:2 with 3 spray cycles, showing that the surface of the glass substrate is homogenously covered by the coating. c) Wetting properties of the coatings based on the different drying oils as polymeric binder, exemplary shown for an oil to silica ratio of 1:2.
www.advmatinterfaces.de
with the linseed oil-colophony mixture showed the desired SHS properties with low CAH only after the third spray cycle. Noteworthily, after the first and second spray cycle the coatings already exhibited very high CAs but still retained a large CAH, pointing to the presence of defects in the coatings (as shown in Figure S2, Supporting Information). Such "sticky hydrophobic surfaces" resemble the natural example of the rose petal [80] and are less efficient for self-cleaning but can be used for example as bio scaffolds promoting cell proliferation and increasing biocompatibility. [81] In general, concerning the SHSs based on the lotus effect, the lowest CAH values could be achieved by conducting three or four spray cycles for all systems. Further, we analyzed the morphology of the coatings in detail. Scanning electron microscopy (SEM) images (top views and cross section images) as well as atomic force microscopy (AFM) height images (Figure 3a-c) revealed the rough and porous surface structures and confirm the presence of a hierarchical texture, enabling Cassie-Baxter wetting and therefore efficient water-repellency. Nevertheless, as stated above a hierarchical topography does not always enable Cassie-Baxter wetting, which should only be attributed to coatings exhibiting low CAHs. As Figure 3d shows, the layer thickness linearly depended on the number of spray cycles. This linear trend derived from SEM cross section measurements was further confirmed with profilometry measurements. Hence, an increasing number of spray cycles leads to more uniformly coated areas without exposed substrate areas and thus less potential pinning points, explaining the reduced CAH values. Note, however, that also within these more uniform coatings the required hierarchical topography consisting Figure 3. Microscopic analysis of the coatings. a) Representative optical microscope and SEM images of the coatings using tung oil as a natural binder, after one, three, and six spray cycles. All scale bars correspond to 50 µm or 1 µm, respectively. b) Exemplary AFM height profile showing the rough surface structure. c) Representative side view SEM image of a tung oil containing coating derived in three spray cycles showing the hierarchical topography and porous structure of the coating. d) Thickness of the prepared coatings as a function of the number of spray cycles measured for the entire range for the tung oil coating and for three spray cycles for the linseed oil and linseed oil colophony mixture coatings.
www.advmatinterfaces.de
of micro-and nanoscale features prevails, leading to a stable Cassie-Baxter wetting state characterized by low CAH. On the other hand, larger numbers of spray cycles can result in cracks (Figure 3a), which exposes the underlying substrate and may thus serve as potential pinning points. The representative sideview image in Figure 3c reveals even more clearly the just described morphology and porosity of a coating.
The microscopic analysis also revealed why coatings formed with the mixture of linseed oil and colophony required more cycles to form SHSs. Due to the pre-polymerization of the oil mixture and the increased viscosity, the silica particles cannot be uniformly dispersed and form larger particle agglomerates which are deposited on the surface ( Figure S2, Supporting Information). Therefore, the coatings exhibited less uniform areas with insufficient coverage coexisting with regions of large amounts of solid material. Based on these results and their reproducible nature ( Figure S3, Supporting Information), we chose a number of three spray cycles for all further experiments.
Stability of the Coatings
The nanostructures enabling superhydrophobicity are rather fragile and can be easily damaged, leading to a compromised performance. We investigated the mechanical properties of the coatings formed with the different drying oils using two stability tests, a scotch tape test to evaluate surface adhesion and a linear abrasion test to assess the overall integrity of the coatings (Figures 4 and 5). Additionally, we evaluated the influence of a tung oil primer layer, i.e., an underlying layer of pure tung oil that was not fully cured upon application of the coating, which we assumed would promote the substrate adhesion of the formed coating.
Over three consecutive tape tests the CAs decreased for all samples while the CAH increased (Figure 4a,b). All tested samples retained their repellent properties after the first tape test. However, after the second test, low CAH only persisted for the sample with a primer layer, indicating enhanced stability compared to the other coatings. The CAs of the linseed oil coating showed the strongest decrease. The SHSs with the linseed oil-colophony mixture and with a tung oil primer layer retained comparably high CA values above 120° for all tape test iterations. Figure 4c shows representative optical microscopy images of the tung-oil containing coatings with and without the tung oil primer layer. After three consecutive tape tests, the coating without primer layer was mostly removed, while the coating with a primer layer remained largely intact. These microscopic images further underline the enhanced stability of the coating containing a tung oil primer layer. Nevertheless, even this optimized coating adheres less well to the substrate compared to a similar coating formed with poly(butyl methacrylate) (pBMA) as a synthetic binder material (which maintained repellence up to three tape tests). [61] The results of the linear abrasion test ( Figure 5) corroborate the results of the tape peel test. While all samples can still be classified as superhydrophobic after the first abrasion cycle (showing large CAs and low CAHs), the CAHs and their standard deviation increase significantly with further abrasion cycles. Again, the coating with linseed oil shows the weakest performance while the coating with the linseed oilcolophony mixture and the tung oil one with a primer layer still show comparably high CAs and low CAHs after five abrasion cycles.
From the results of the mechanical tests, we conclude that colophony used as a natural siccative as well as the introduction of a primer layer efficiently enhances the substrate adhesion and overall integrity of the fabricated coatings. Generally, the tung oil-based coating with a primer layer seems superior to the linseed oil-colophony mixture as the tung oil-based coating shows a better reproducibility ( Figure S3, Supporting Information) and a significantly shorter curing time (Table S1, Supporting Information). In addition, the necessary heat treatment to produce the linseed oil/colophony blend and the longer UV exposure upon curing increases energy consumption, further disfavoring this binder. In the direct comparison of microscopic images of the tung oil coating with and without primer layer during the stability tests (Figures 4c and 5c) we see that significantly more material remains on the surface of coatings with a primer layer. We assume that the superior mechanical stability is due to the silica particles being embedded in the not yet cured oil layer during deposition. Furthermore, the attachment Figure 4. Tape peel test with the different drying oil binders. a) Mean CAs and b) mean CAHs before and after one, two, or three tape tests. c) Representative microscopic images of the tung oil-containing coating with and without primer layer before stability testing and after three tape tests. All scale bars correspond to 50 µm. Note that the darker spots in the images are the dried oil droplets from the primer layer.
www.advmatinterfaces.de
and presumably crosslinking of the oil contained in the coating system to the primer oil layer is facilitated, enhancing substrate adhesion.
Preparation of Slippery Liquid-Infused Porous Surfaces
Taking advantage of the porous topography of the SHSs, we produced SLIPSs coatings by infusion with a fluid lubricant in a second spray coating step (Figure 6a). We investigated the repellency properties for different natural nondrying oils used as sustainable alternatives to synthetic lubricants and compare their performance to the synthetic, state of the art lubricant silicone oil. All investigated SHSs could be successfully infiltrated and produced fully transparent SLIPS coatings (Figure 6b). The time-lapse images of Figure 6c, taken from Movie S1 (Supporting Information) show a 30 µL water droplet sliding down a SLIPS coating prepared with sunflower oil as lubricant. We www.advmatinterfaces.de characterized the coatings by means of CA and CAH measurements ( Figure 6d). All prepared coatings exhibited low CAHs below 10°. The higher CAs observed for the reference SLIPS coating with synthetic silicone oil are due to the lower surface energy of silicone oil [82] (γ = 18.9 mJ m −2 ) compared to the natural oils [83] (γ ≈ 33 − 35 mJ m −2 ). However, the coconut and olive oil mixture bares the risk of solidifying at lower temperatures due to the melting point of virgin coconut oil of ≈25 °C, [84] making it less suitable for outdoor applications. Comparing the nondrying oils in terms of their hydrophobicity, sunflower oil-infiltrated coatings show the highest water CA (Figure 6d). Thus, we hypothesize that sunflower oil can be retained in the porous structure most efficiently and should show the best long-term stability when subjected to impinging water droplets. Hence, we chose silicone oil and sunflower oil as natural alternative for further experiments.
Accelerated Aging Test
In addition to the mechanical properties, the long-term stability over time and upon exposure to environmental conditions is important for any real-life applications. Following a cyclical climate change test specification PV 1200 by Volkswagen AG, [85] we exposed the SLIPS coatings infiltrated with silicone and sunflower oil based on tung oil with an oil primer layer to rapidly changing environmental conditions (Figure 7). These conditions mimic an accelerated aging in an outdoor environment, with each cycle simulating summer (+80 °C) and winter periods (−40 °C). The SLIPSs infiltrated with silicone oil remained slippery even after six aging cycles. The CAHs increased slightly over time due to depletion of the lubricant but sliding angles of a 10 µL water droplet below 10° persisted, confirming the retained repellent properties. The coatings infused with sunflower oil remained slippery only during the first two aging cycles. While this value indicates substantial stability, it is much lower compared to the synthetic silicone oil-based analogue. We attribute this difference to the reactivity of the natural oils. Similar to the drying oils, sunflower oil can also undergo a thermal oxidation process due to the double bonds present in the molecular structure, albeit at a much lower rate. [86,87] In the course of the oxidation process, the oil becomes sticky and solidifies over time. Thus, the fluid nature of the coating is lost, causing pinning of water droplets. However, it should be considered that the accelerated aging process uses prolonged and reoccurring heating phases with temperatures up to 80 °C, which promote this thermal degradation. In everyday applications, such temperatures are unlikely to occur. This may therefore lead to a better performance compared to the prediction by the accelerated aging test. Compared to a similar coating system based on pBMA infiltrated with silicone oil, the long-term stability of both investigated renewable SLIPS systems is reduced by four or eight aging cycles. [61]
Application as Antiadhesion Coating
To evaluate the applicability of the produced coatings in selfcleaning applications, we performed a cement adhesion study. Cement soiling of shoe soles leads to risks for employees at construction sites. Additionally, cement attacks the used materials leading to brittleness. Therefore, the equipment in contact with the aggressive cement needs to be replaced regularly. Preventing the adhesion on rubber materials would reduce material consumption but is challenging due to the rubber material of shoe soles and the strong adhesion and complex drying mechanisms of cement. We measured the adhesion strength of cured cement on glass slides coated with the different repellent coatings (Figure 8a). The presence of an SHS coating reduced cement adhesion from 25.3 ± 3.7 to 9.7 ± 1.7 kPa. On SLIPSs, however, only negligible adhesion within the measurement error was found. While cement seemingly, or at least partially, penetrates the structures to adhere to the surface, the presence of the fluid lubricant efficiently prevents contact of the cement to the underlying surface.
Additionally, we investigated the cement adhesion and removal from shoe soles. We coated shoe soles with a 1:2 tung oil coating infiltrated with either silicone oil or sunflower oil and covered the entire sole with cement. Upon hitting the coated soles against an uncoated reference, we qualitatively monitored the amount of cement being removed after each hit. Figure 8b shows time-lapsed images taken from Movies S2 and S3, which can be found in the Supporting Information. The coating infiltrated with silicone oil was free of cement after only three hits while a significant amount of cement remained on the uncoated reference. For the shoe sole infiltrated with sunflower oil a large amount of cement was removed after three hits. After five hits the shoe sole was essentially free of cement, whereas the reference remained covered in large areas. We assume that the differences between silicone oil and sunflower oil infiltrated SLIPSs is due to small defects in the coating where the sunflower oil partly de-wetted. Nevertheless, these results highlight not only the performance of our coating system against complex contaminations but also its versatility since the shoe soles exhibit a complex surface geometry and a rubber material that is difficult to coat with many conventional processes. Furthermore, the efficient removal of cement also indicates stability against high pH value (cement during curing has a pH > 12). The reduced adhesion of the cured cement indicates that the natural oils used as lubricant are not decomposed by the basic environment. The coatings were also able to repel highly acidic liquids (HCl, 1 n, pH = 0) upon short contacts and could Figure 7. Repellency performance upon simulated aging according to test specification PV 1200 by the Volkswagen AG, [85] carried out for SLIPSs based on a tung oil coating with primer layer using silicone oil as a synthetic lubricant and sunflower oil as the renewable alternative.
Conclusion
We developed environmentally friendly water-repellent coatings based on renewable materials, which are fabricated in a simple and scalable spray deposition process. We successfully implemented drying oils, i.e., oils that cure to solid polymeric networks in an oxidation process with ambient oxygen, as a polymeric binder component to replace previously used synthetic poly methacrylates. Surface coatings with hierarchical topo graphy at the micro-and nanoscale with superhydrophobic properties were formed from the dispersions containing the drying oil and fumed silica particles.
The infiltration with silicone oil efficiently converted the SHSs into SLIPSs. We identified sunflower oil as a renewable alternative for the synthetic lubricant. In all cases, SHSs and SLIPSs with comparable water-repellency were obtained with our sustainable approach. Nevertheless, the mechanical robustness and long-term stability were more pronounced for the synthetic materials.
By replacing all involved materials with renewable alternatives, our approach provides an important step in the realization of fully sustainable coatings. Cement adhesion tests underline that sustainable coatings can provide efficient performance in the repellency of adhesive contaminations on complex materials and topographies, opening pathways toward the replacement of synthetic materials with renewable alternatives in the design of self-cleaning surfaces.
Experimental Section
All chemicals were purchased from commercial suppliers and used as received without further purification if not stated differently. Experiments were performed at ambient conditions unless stated otherwise.
Oils and Mixture: Tung oil (100% pure tung oil, Uulki) and linseed oil (virgin linseed oil, cold pressed, Carl Roth) were used as purchased. The mixture of linseed oil and colophony (gum rosin, natural resin, Sigma-Aldrich) was prepared following a publication of Tirat, et al. [72] for a compound with 20wt% colophony.
Coating System: The coating systems with linseed oil, tung oil, and the linseed oil compound with 20wt% colophony serving as polymeric binder, were prepared adapting a protocol by Walter et al. [61] 6 g TBAc (tert-butyl acetate, ≥99%, Sigma-Aldrich) was added to the respective amount of oil (0.30 g, 0.24 g, or 0.20 g, depending on the oil:silica ratio) and the mixture was gently stirred (150 rpm) for a few minutes. Subsequently, the desired amount of silica particles (0.20 g, 0.36 g, or 0.40 g, for the 1:1, 1:1.5, and 1:2 oil:silica mixture, respectively) was added, and the mixture forming the hydrophobic phase was stirred again at slow speed (75 rpm) until the particles were dispersed homogeneously. Next, 24 g of a 3.2 g L −1 sodium dodecyl sulfate (SDS, ≥99%, Carl Roth) solution was added to the as-prepared hydrophobic phase. The mixture was ultrasonicated six times for 30 s with a pause of 10 s at an amplitude of 90% under ice-cooling to form the aqueous dispersion used for the coating process.
Superhydrophobic Surfaces: To produce SHSs, the coating systems were deposited as prepared onto soda-lime glass substrates (microscope slides, Menzel-Gläser, Thermo Scientific) via spray coating with an airbrush (Evolution Airbrush, 0.4 mm nozzle, Harder & Steenbeck) in a custom-built automated setup. The glass substrates were cleaned prior to coating by immersing them consecutively in acetone, ethanol, and ultrapure water under sonication for 5 min each and activated by O 2 plasma treatment (Femto, Diener electric, 0.2 mbar, 4 sccm O 2 , 100 W) for 5 min. The number of spray cycles was varied with 30 s pause in-between for repeatedly sprayed samples. For any spray coating step the instrument was placed 15 cm from the sample. For samples with a tung oil primer layer one layer of pure tung oil was applied with the automated spray coating setup. The oil layer was cured under UV light (λ = 365 nm) for 1 h. Afterward, the coating system containing tung oil was applied in three spray cycles. Following the last spray cycle, all samples were put in a horizontal position to let the water and TBAc evaporate. Samples were cured under UV light directly after spray coating for at least the duration determined for pure oil films in preliminary studies (see Table S1, Supporting Information). Afterward, surfactants were removed by immersing the samples in ultrapure water in up to twelve washing steps.
Slippery Liquid-Infused Porous Surfaces: SHSs were prepared as stated above and infiltrated with silicone oil (silicone oil M10 low viscous, www.advmatinterfaces.de 10 cSt, Carl Roth), sunflower oil (100% sunflower oil, EDEKA), peanut oil (cooking oil, Mazola), olive oil (virgin olive oil, REWE), or a 1:1 mixture of olive oil and coconut oil (virgin coconut oil, REWE Bio) using the automated spray coating setup. Coatings derived from spraying the coating systems three times, as well as samples with the tung oil primer layer, were chosen for preparing SLIPSs. The nondrying oils were applied in three spray cycles.
Characterization: The static CA of a 5 µL water droplet was measured by sessile drop method (DSA100, Krüss) and analyzed using tangent 1 fitting with Drop Shape Analysis [DSA4] 2.0(a). For measuring the CAH, an initial water drop volume of 3 µL was increased by 5 µL at a constant dosing rate of 50 µL min −1 and then decreased at the same rate until the droplet was completely withdrawn from the surface by the syringe. Images were recorded at a rate of 5 fps. All plotted values are the mean of at least five independent measurements in different locations on a sample's surface.
The morphology of the fabricated samples was characterized by optical microscopy (Ergolux, Leitz equipped with a DCC3260C microscope camera from Thorlabs), AFM (NanoWizard 3, JPK), and SEM (GeminiSEM 500, ZEISS). AFM was operated in tapping mode with a scan rate of 0.2 Hz. The used soft cantilever (HQ:NSC18/AI BS, Mikromasch) had a resonance frequency of 75 kHz and a force constant of 2.8 N m −1 . The SEM images were taken using the SE2 detector, an aperture of 15 µm, and an acceleration voltage of 0.5 keV.
Layer thicknesses was determined by cross section SEM images at an angle of 90° using the same settings mentioned above and confirmed by means of profilometry (DektakXT controlled with Vision64, Bruker). The profilometer was operated in standard scan mode with a range of 6.5 µm, the stylus force was set to 3 mg, and a scan length of 4000 µm was chosen. The layer thickness was averaged over a representative area.
Stability Assessment: For the tape test Scotch tape (Scotch Magic Tape 810, 3m, synthetic acrylic adhesive, adhesion strength to steel 2.5 N cm −1 ) was applied to the SHS samples, carefully pressed onto the substrate to ensure no air is trapped and the tape is in contact with the coating, and then peeled off at a shallow angle. A total of three tape tests were performed on each sample.
In the linear abrasion test, samples were placed on a table and a lintfree tissue (precision wipes 7552, KIMTECH Science) with a constant weight of 30 g on top was pulled across the surface in one direction. Pulling the tissue over the surface once is considered one abrasion cycle.
The test of the long-term stability was carried out in a climate test chamber (VCL 7006, Haereus Vötsch) according to test specification PV 1200 from the Volkswagen AG. [80] A total of 6 test cycles were carried out. Each cycle simulates summer (+80 °C) and winter conditions (−40 °C). Three samples of the tested species were prepared and taken out after 2, 4, or 6 aging cycles, respectively.
Cement Adhesion: The cement (Universalzement, SAKRET) was mixed with water in a ratio of 3:1. The wet cement was filled in cylindrical molds with an inner diameter and height of 2 cm on the sample surface to keep the contact area constant. The cement was cured for 24 h. Subsequently the adhesion strength was measured by recording the force needed to laterally push the cement from the surface at a constant speed of 5 mm s −1 with a force gauge (digital, FK 500, Kern and Sohn GmbH). The adhesion force was averaged from testing three samples for each substrate. The SLIPSs and SHSs tested were based on the tung oil 1:2 coating system comprising a primer layer. Note that for the uncoated glass the experiment had to be repeated several times until the cement could be removed and the shown values are those measured in the first attempt. Mean adhesion strengths were recorded in multiple test repetition.
Furthermore, shoe soles kindly provided by the Uvex Safety Group were coated with the tung oil 1:2 coating system and were infiltrated with either silicone or sunflower oil. Beforehand the soles were cleaned with ethanol and ultrapure water respectively to remove any dust residues. The cement was directly applied to the coated soles and the soles were covered completely. The cement was left to cure for 24 h. The shoe soles were hit together with an uncoated reference and images were taken after each hit.
Acid Resistance: The repellence of SLIPS coatings against hydrochloric acid (HCl, 1 N, Carl Roth) was evaluated by dropping 30 µL droplets on the samples and checking the slipperiness. Additionally, the coatings were submersed into 1 n HCl for a prolonged time and evaluated after 48 and 96 h.
Statistical Analysis: All shown data were used without any preprocessing. The results of the contact angle and contact angle hysteresis measurements as well as the layer thickness and adhesion strength evaluation are presented as mean values ± standard deviation. For each sample type 5 independent CA and CAH measurements were evaluated, and, in case of the abrasion test 7 individual measurements. For the tape peel test, three individual measurements were used. The results of the aging study (exposure to extreme climate) were evaluated with 10 individual measurements. Layer thickness was evaluated using five SEM images at different locations and measuring the thickness at five spots in each picture using ImageJ. Adhesion strength measurements were performed with three samples of each type. The data analysis was done using the software OriginPro and ImageJ.
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author. | 7,800.8 | 2022-12-16T00:00:00.000 | [
"Materials Science"
] |
Performance Improvement of a Fiber-Reinforced Polymer Bar for a Reinforced Sea Sand and Seawater Concrete Beam in the Serviceability Limit State
Fiber-reinforced polymer (FRP) has supreme resistance to corrosion and can be designed with optic fibers. FRP can be an alternative to steel reinforcement for concrete structures, and can serve as a sensor for smart concrete structures. Due to poor cracking control and bond performance, the limit of flexural capacity in the serviceability limit state has not been determined, which has obstructed the wider application of FRP bars in smart structures. In this study, in order to overcome these shortcomings, a new engineering cementitious composite (ECC) with superior tensile strain capacity was used to replace the cover around the FRP bars in the tensile zone. To investigate the anti-cracking performance of the new composite beam, seven simply supported beams were designed. In the preliminary investigation, the longitudinal FRP bars in these beams were designed without optic fibers to focus on the mechanical behavior. The beams were tested under four-point load and measured using the digital sensor technique, digital image correlation (DIC). The test results showed that introducing a new ECC layer on the tensile side improves the cracking control and flexural behavior (load capacity and deformability) of a FRP-reinforced sea sand and seawater concrete (SSC) beam, especially in the serviceability limit state. We demonstrate the new composite beam can steadily and fully improve the tensile capacity of FRP bars, which is the basis of using FRP bars as sensors.
Introduction
Fiber-reinforced polymers (FRPs) possess good mechanical properties, long-term durability, and excellent resistance to corrosion [1][2][3]. Due to the shortage of commercial concrete material supply, sea sand and seawater concrete (SSC) has come to be an alternative to normal concrete (NC). An FRP bar can meet the reinforcement demand for SSC to replace the steel reinforcement. An FRP bar can include optic fibers, which are considered one of the most popular sensing materials. The technique of embedding optic fibers into an FRP bar means the FRP bar itself can serve as a sensor for structural health monitoring as well as the structural reinforcement [4][5][6][7]. It enables intelligent control for construction and monitoring for maintenance, which is especially important for marine structures on islands.
Up to now, limited research on FRP-reinforced SSC structural members have concluded that the behavior of FRP bar-reinforced SSC structural members are similar to those of FRP bar-reinforced NC, as NC and SSC have no significant differences in their macro mechanical behavior. One of major differences between FRP bar-reinforced and steel-reinforced concrete structural members is the service load-to-ultimate load ratio for beams. This ratio in FRP-reinforced concrete beams is much lower than that in steel-reinforced concrete beams [8][9][10][11][12]. Consequently, for FRP bar-reinforced concrete beams, the design is primarily governed by serviceability limit state (SLS) [8,[13][14][15][16]. Supposing the applied service load is the same for both steel and FRP-reinforced beams, the beam reinforced by FRP bars needs to be designed with larger dimensions and a larger reinforcement ratio. Researchers found the service load is mostly governed by the maximum crack width [17], compared to the maximum deflection. It reflects the cracking features in FRP bar-reinforced concrete beams is faster increases in crack width. It causes a problem with moisture permeability. As the epoxy in FRP reinforcement is sensitive to moisture, when FRP bar-reinforced structures are exposed to a marine environment, it still brings durability issues for FRP bars. This cracking pattern is determined by the bond performance between FRP bars and concrete, which is comparatively poorer than that between steel bars and concrete [18][19][20]. The resulting larger crack widths also induces high strain concentration in the FRP bar near the crack mouth. Therefore, FRP reinforcement fails at a much lower strain than the rupture strain, which is determined using a direct tensile test. This phenomenon impairs the efficiency of reinforcement and the service behavior of embedded optical fibers under service load. To summarize, the poor bonding performance and control of cracking poses barriers for the use of FRP bars as reinforcements and sensors for smart structures.
In the past decade, efforts have been taken to improve cracking control in the reinforced beams. Except the prestressing technique, introducing high performance cementitious materials was proposed and investigated in some studies. In particular, engineered cementitious composites (ECC) were investigated for controlling the flexural crack width in steel reinforced concrete beams [21][22][23][24][25][26][27][28][29][30]. Moreno et al. [29] found the interaction between the rebar and ECC reduces the fracture strain of the ECC and bar. Therefore, a larger fracture strain is required for ECC in bar-reinforced ECC than ECC itself under direction tension. Yuan et al. [30] reported a pioneering investigation on the effect of ECC reinforcement on FRP bar-reinforced concrete beams, but focused on the ultimate performance. In 2016, a new category of ECC, named as ultra-high ductile cementitious composites (UHDCC), was developed by Yu et al. [31]. The tensile strain capacity of UHDCC ranges from 8% to 12%, and the uniaxial tensile strength ranges from 4 MPa to 20 MPa [31][32][33][34]. The pioneering study by Yu et al. showed the flexural performance of a UHDCC beam can behave like an ordinary reinforced concrete beam, with a reinforcement ratio of 1.51% and small cracking width [35,36]. The experimental studies above demonstrated the feasibility of using ECC for crack width control.
Considering the points above, a novel composite system, an FRP bar-reinforced SSC beam, is proposed in this study. A UHDCC layer was adopted to replace the concrete in the tensile zone and act as the matrix for the FRP bar. As the optic fibers do not affect the tensile behavior of FRP bars, they were not designed for the FRP bars in the mechanical investigation for simplicity. The crack control capacity of UHDCC was expected to improve the performance of the composite beam, especially in the serviceability limit state. To confirm the properties of the new composite beam, four-point bending tests were conducted on simply supported beams. Beams with and without UHDCC were designed to investigate the effect of the UHDCC layer on the flexural behavior in terms of characteristic load capacity, crack width, deformability, etc.
Concrete
In this study, an SSC was designed for the specimens. The mixture included cement, coarse aggregate, sea sand, and seawater. The coarse aggregate was well-graded crushed granite, as used in NC. The sea sand was brought from Ningbo costal area in the east of China, and the seawater was artificial water, which was a mixture of dissolved sea salts to meet the target salinity of 3%. The mix design was created referring to the "Specification for mix proportion design of ordinary concrete (JGJ55-2011)" [37] for a target strength of 30 MPa (150 mm cube strength). Experiments were conducted in a standard laboratory. The laboratory mix proportion was adjusted with a high-range water-reducer (HRWR, also called a super plasticizer) to meet the slump requirement for casting on-site (Table 1). SSC cubes (150 mm) were cast at the same time as the SSC beams, and all were cured outdoors together. The compressive strength was tested on the same day as beam loading, which was 70 days after the casting. The average strength was 30.49 MPa with a standard deviation of 2.59 MPa. The macro mechanical behavior showed minor differences compared to NC. The micro level difference between SSC and NC and its effect on the long-term behavior of FRP-reinforced beams are not within the scope of the study.
Ultra-High Ductile Cementitious Composites
The mixture of UHDCC included PO 52.5 cement, class II fly ash, fine sand, water, polyethylene (PE) fibers, and HRWR. In order to maintain the chlorine content in UHDCC close to that in SSC, the sea sand and seawater from SSC were used to replace the original ingredients. Sea sand was screened to the required fineness for UHDCC. The sands had the largest size smaller than 0.21 mm. The mix proportion of UHDCC is presented in Table 2. PE fibers with high strength and high Young' modulus were used to reinforce cementitious matrix ( Table 3). The volume fraction of the PE fiber was 2%. Five dog-bone shaped specimens were cast according to recommendation from the Japan Society of Civil Engineers (JSCE) [38]. All the specimens were cured outdoors for 70 days and loaded to failure under increasing tension ( Figure 1). Dog bone specimens exhibit strain-hardening characteristics (typical stress-strain curves in Figure 2) with multiple cracks. Since the UHDCC was mixed manually, the uniformity of the mixture could not be guaranteed, which caused the cracking behavior of specimen 1 to occur earlier than in specimens 2 and 3. The test results indicated that the cracking strength of UHDCC was 3.03 MPa with a corresponding strain of 0.20%. The cracking strength was close to that of SSC concrete. The peak tensile strength reached 6.25 MPa. The tensile strength was three to five times that of the conventional concrete. More importantly, the fracture strain was 7.79%, which far exceeds the elongation of the FRP bar. It is noted that the average crack width at the peak strength was less than 0.2 mm, creating a highly durable protection for longitudinal reinforcement in a wide variety of environmental exposure conditions.
Basalt Fiber-Reinforecd Polymer (BFRP) Bars
Ribbed BFRP bars with three different diameters were adopted in this study, as shown in Table 4. They were manufactured without optic fibers by Jiangsu Green Materials Vally New Material T&D Co., Ltd., China. The BFRP bars were basalt fibers bonded with vinyl resins. To determine the tensile behavior of the BFRP bar, steel sleeve grouting was used to reinforce both ends of the BFRP bar for clamping ( Figure 3). The anchorage length of the steel sleeve followed the "Glass fiber reinforced plastics rebar for civil engineering (JG/T 406-2013)" guideline [39] to avoid pull-out failure. To enhance the friction between the BFRP bar and steel sleeves, the surface of the sleeve was cut thread. For each type (diameter) of BFRP bar, five identical specimens were prepared and tested using the tensile testing machine. Table 4 lists the ultimate tensile strength, ultimate strain, and modulus of elasticity. The elongation of BFRP bars was lower than that of UHDCC.
Design and Preparation of Beam Specimens
Seven specimens were designed for the four-point bending test to investigate the flexural performance of beams with different configurations. The dimensions of beams were uniform (i.e., b × h × L = 150 × 250 × 2100) ( Figure 4). All the longitudinal bars and stirrups were BFRP bars. Three SSC specimens were reinforced using BFRP bars with the diameters of 6, 8, and 12 mm, respectively. The other four SSC-UHDCC specimens were designed to have 60-mm-thick UHDCC layers at the bottom side instead of the original SSC part. Among them, the SSC-UHDCC-plain beam had no longitudinal bar inside the UHDCC layer (tensile region). It was designed to check the flexural reinforcement contribution from UHDCC itself. All the beams had the BFRP stirrup, 6 mm in diameter and 100 mm in spacing, in the shear-moment zone. It was designed according to the Chinese code "Technical code for infrastructure application of FRP composites (GB 50608-2010)" [40] to avoid shear failure. The details of specimens are presented in Table 5. Beam specimens were cast in the laboratory of Tongji University, (Shanghai, China). The reinforcement cage was fixed in the formwork with the tensile bars facing upward. Then, the SSC was cast for all seven beams. For SSC-UHDCC beams, the SSC was cast initially and finished at the height of the interface between the SSC and UHDCC. In order to produce a good interface bond between SSC and UHDCC, the SSC surface was artificially roughened to partially expose coarse aggregates ( Figure 5). The specimens with SSC sections were first cured for one week, and then UHDCC layers were cast for four SSC-UHDCC specimens. After the casting, all the specimens were cured outdoors.
Test Program
Four-point loading tests were conducted on all the beams after a 70-days curing period. A hydraulic jack was used to apply a concentrated load that was divided into two by a rigid girder with a span of 350 mm. The span between two supports was 1800 mm. The setup of instrumentation is illustrated in Figure 6. A load cell was attached to the hydraulic jack to record the load values during testing. The deflection of beam was recorded by five linear variable differential transformers (LVDTs) installed in the pure bending zone or at supports. For each beam, five strain gauges (50 mm gauge length) were pasted onto the lateral surface (side A in Figure 6a) of cross section at mid-span, and two strain gauges were attached to the surface of each BFRP tensile bar. The digital image correlation (DIC) method was used as the sensor to record the full field deformation and the crack development process during testing. DIC was shown to be a reliable method for reinforced concrete specimens in recent research [41][42][43][44][45]. Before DIC sensing, the pure bending moment (PBM) zone on the other side, (i.e., side B in Figure 6b), was set as the area of interest, which was sprayed with white paint and random black speckles. A digital single lens reflex camera recorded raw images every five seconds until the end of testing.
Loading Process
According to "Standard for test method of concrete structures (GB/T 50152-2012)" [46], the displacement-control loading program was used in this test (Figure 6c). The load interval was set to 1.8 kN for the experiments on the seven beams. During each step of loading, the cracks locations and development were marked on side A.
Test Observation
Cracking development was described based on the observation on the side A surface. Typical values, such as the cracking and peak load, were recorded from the load cell. The failure mode was determined by the observation of failure process.
SSC Beams
SSC-6 was designed to be an under-reinforced beam with a reinforcement ratio of 0.17%. The theoretical upper limit for under-reinforcement is 0.207%. One flexural crack appeared in the middle of beam and developed quickly to around half of the height of the beam section. Shortly after, another two cracks appeared near the PBM zone. When the middle crack stretched near the top of the beam section, one of the tensile BFRPs ruptured with a loud sound and the test was terminated. The peak load was 11.3 kN. The largest crack width originated from the middle flexural crack (red circle in Figure 7a). The concrete in the compressive zone remained intact. The average rupture strain was at 1.28%, which is only 50.79% of the strain of BFRP bars under direct tensile testing. The distribution of cracks is presented in Figure 7a.
The reinforcement ratio of SSC-8 was 0.3%, which was higher than the critical reinforcement ratio of under-reinforcement 0.223%, but lower than 1.5 times of the ratio (that is 1.5 × 0.233% = 0.334%). The failure of beam may be caused by either the rupture of tensile bars or the crush of the concrete [47]. The first flexural crack appeared in the PBM zone at a load of 3.53 kN. When the load increased to around 4.8 kN and 9 kN, another two flexural cracks appeared in the PBM zone, and the first crack stretched to over half the height of beam. One diagonal crack developed from the middle height of the beam. When the load reached 12.6 kN, the diagonal crack suddenly changed its direction horizontally toward the loading point. Simultaneously, another flexural crack was generated in another shear moment (SM) zone and changed its direction as the shear crack when the load was around 18 kN. The height of the compressive zone decreased dramatically and one of the tensile BFRP bars ruptured with a loud sound, and the beam failed. The average rupture strain was at 1.51%, which is 58.30% of the strain of BFRP bars under direct tensile testing. The crack with the largest width was the flexural crack in the middle (red circle in Figure 7b). The peak load was 27.58 kN. Accordingly, the failure mode is the flexural failure of under-reinforced beam. The distribution of cracks is presented in Figure 7b and the image at failure is presented in Figure 8. SSC-12 was designed to be an over-reinforced beam with a reinforcement ratio of 0.69%. The first crack was a flexural crack at the edge of PBM zone. The cracking load was 3.83 kN, and the second and third cracks appeared in the PBM zone at loads of around 10.8 kN and 14.4 kN, respectively. Following that, two diagonal cracks appeared in the SM zone. These cracks initiated in the middle height of beam section. Finally, one diagonal crack with the largest crack width developed at the loading point. Then local crush at the loading point occurred. The peak load was 47.05 kN. The failure of the beam was caused by insufficient shear capacity, which was not expected. This could have been caused by overestimation of bent stirrups capacity. The reason for this phenomenon was outlined by ISIS Canada (2001) [11]. The tensile strength at the bend was dramatically weaker due to stress concentration. However, the reduction effect was not well predicted by the code. The crack with the largest width is denoted by a red circle in Figure 7c. The distribution of cracks and image at failure are presented in Figure 7c.
UHDCC-SSC Composite Beams
Comparatively, all of the SSC-UHDCC beams reinforced with BFRP bars initially displayed micro cracks densely distributed in the middle of UHDCC layer. Then, several micro cracks merged into one macro crack at the interface, which further stretched toward the compressive zone of the beam (Figure 9a-c). The cracking load was recoded when the first macro crack initiated at the interface. The typical crack distribution in SSC and UHDCC are presented in Figure 10a,b, respectively. During the test of SSC-UHDCC-6, flexural cracks initiated early as the load reached 9.0 kN, which was considered the cracking load in this study. Afterward, several flexural-shear cracks were generated in the SM zone at a load of 16.2 kN. It is of interest to note that the macro cracks that occurred in the SSC part were smeared in numerous micro cracks in the UHDCC layer within both the PBM and SM zones. Several groups of micro cracks progressively merged into 10 macro cracks from the interface (Figure 9a). Detachment at the interface was observed in the original tip of one shear-flexural crack but without obvious horizontal slip (black circle in Figure 9a). Then, the flexural cracks progressively developed to 4/5 of height of beam and ceased to develop until the load of 21.6 kN. Afterward, shear-flexural cracks continued to extend toward the adjacent loading point until failure. Finally, the concrete around one loading point crushed. The peak load was 32.43 kN. The final crack distribution shows the widest crack was one flexural-shear crack (red circle in Figure 9a). In comparison with SSC-6, SSC-UHDCC-6 had a larger cracking load, more cracks, but much smaller crack spacing. Above all, the change in failure mode from flexural failure to shear failure is due to the enhanced flexural capacity higher than the shear capacity.
In SSC-UHDCC-8, groups of micro cracks progressively merged into eight macro cracks at the edge of interface (Figure 9b). Both flexural and shear-flexural cracks were observed early when the load reached the cracking load of 9.0 kN. Comparatively, more flexural cracks developed afterward. These cracks extended to 2/3 of height of beam and ceased to develop when the load was around 16.2 kN. Afterward, the extension of two flexural-shear cracks dominated the cracking development. The beam failed when the two cracks reached the adjacent loading points, accompanied by local concrete crushing at the loading point. The peak load was 39.55 kN. Detachment at the interface was also observed at the tip of two shear-flexural cracks with the largest opening width of 2.07 mm (black circle in Figure 9b). The widest crack is indicated by the red circle in Figure 9b. Similar to SSC-UHDCC-6, the failure mode of SSC-UHDCC-8 was shear failure, and fewer macro cracks occurred than in SSC-UHDCC-6. In comparison with SSC-8, the composite beam had a larger cracking load and more cracks, but smaller crack spacing.
In SSC-UHDCC-12, a similar micro cracking pattern was observed during the early loading process. The micro cracking region in the UHDCC layer was longer than in the other two composite beams. Groups of micro cracks progressively merged into 9 macro cracks from the interface (Figure 9c). Two flexural cracks appeared first at the load of 12.6 kN. Later, three flexural-shear cracks generated at a load of around 14.4 kN. At that time, detachment at the interface was also observed. The largest opening was around 2.92 mm (black circle in Figure 9c). After the major flexural crack stopped at 4/5 the height of the beam at the load of 21.6 kN, the flexural-shear cracks began to decline toward the adjacent loading points. Finally, the widest crack was one flexural-shear crack (red circle in Figure 9c). The beam failed with the observation of local concrete crushing at the loading point. In this beam, the peak load was 49.67 kN. Similar to SSC-12 and other FRP-reinforced SSC-UHDCC beams, the failure mode was shear failure. In comparison to SSC-12, SSC-UHDCC-12 had a larger cracking load and more cracks but smaller crack spacing.
In SSC-UHDCC-plain, dense micro cracks first appeared in the UHDCC layer. Then, only two flexural cracks generated in the PBM zone. The cracking load was 9.0 kN. Following that, two cracks quickly extended from the interface to the compressive zone of the beam, which caused concrete in tensile section to immediately lose tensile bearing capacity. Finally, the strength capacity of the UHDCC cover was not high enough to balance the tensile demand transferred from the cracked concrete, and the beam immediately lost load bearing capacity as an under-reinforced beam. The peak load was 12.49 kN, which was close to the load capacity of SSC-6. The widest crack is indicated by a red circle in Figure 9d. However, the beam did not split into two segments, which illustrates that the UHDCC layer with a higher fracture strain bonded well with the SSC.
From the tests, the development process of the cracks and critical load values were observed and recorded. From the cracking features of BFRP bar-reinforced composite beams, we found that in SSC-UHDCC beams, the number of cracks increased and the average crack spacing decreased. More energy was consumed during the loading of the SSC-UHDCC beam compared with the counterpart SSC beam ( Figure 10). Hence, the load and deformation capacity were enhanced due to the UHDCC layer. The load capacity of SSC-UHDCC-plain was close to that of SSC-6, demonstrating that the UHDCC layer can function as tensile reinforcement. The under-reinforcement failure mode of SSC-6 and SSC-8 was expected. The failure mode of SSC-12 and all the SSC-UHDCC beams were shear failure, as the shear capacity was overestimated and lower than the improved flexural capacity. In general, the UHDCC layer can enhance the flexural capacity of a beam reinforced with BFRP bars.
Discussion
LVDTs and DIC technology (see Figure 6) were simultaneously used to acquire the deformation responses of beams under loading. As shown in Figure 11, the displacement data obtained from the DIC agree well with those from LVDTs, demonstrating the reliability of DIC in reflecting the full-field deformation. Hence, the deflection is based on the DIC results. Figure 11. Comparison of deflection from digital image correlation (DIC) method and linear variable differential transformers (LVDTs) (SSC-12). Figure 12 shows the load-deflection curves of all tested beams. Based on the test data, analyses were conducted to explore the effect of UHDCC layer on the mechanical properties of SSC beams.
Cracking Load Capacity
All the cracking loads were based on the cracking observation on the side A surface and inferred from the recorded load step. The cracking loads of SSC-6, SSC-8, and SSC-12 were 2.5 kN, 3.60 kN, and 4.80 kN, respectively. The cracking loads of UHDCC-SSC-6, UHDCC-SSC-8, UHDCC-SSC-12, and UHDCC-SSC were 9.0 kN, 9.0 kN, 12.6 kN, and 9.0 kN, respectively. For two kinds of beams, the cracking load increased with the increase of reinforcement ratio. The cracking loads of UHDCC-SSC beams were much higher than those of SSC beams ( Figure 13). The enhancement ratio due to the UHDCC layer ranged from 150% to 260%, and slightly decreased with the increase of reinforcement ratio. Hence, the first advantage of introducing the UHDCC layer is the increase in the cracking load.
Service Load Capacity
For FRP-reinforced concrete beams, the SLS usually governs the final design. Many studies focused on how to define service load. To summarize, there are four criteria: (1) mid-span deflection [16], (2) service strain in FRP bars [17], (3) the maximum crack width [15,48], and (4) peak load (P m ) divided by the load factor 1.5 [49]. In the second criterion, the service strain in FRP bars is determined by the maximum cracking width. The value of 0.002 is recommended for FRP bars [17], which corresponds to the maximum crack width of 0.5 mm. In the third criterion, the maximum crack width is controlled to ensure adequate structural performance and sufficient durability of the structure [50]. The maximum crack width for beams with FRP bars is larger than for steel reinforcement, as FRP material has good anti-corrosion performance. ACI 440.1R-06 regulates the maximum crack width as 0.5 mm for interior exposures and 0.7 mm for exterior exposures [15], whereas JSCE recommends a maximum crack width of 0.50 mm for both interior and exterior exposure [48]. In this study, the threshold value for the maximum width was taken as 0.5 mm. Hence, the second and third criteria are fundamentally the same. All typical values from four criteria are summarized in Table 6. Firstly, the service load showed differences between the service strain and crack width criteria, demonstrating that a fixed service strain is not always consistent with the same crack width control. The service strain is influenced by the bond characteristics of bars, bar spacing, and bar size [17]. This phenomenon was also observed by El-Nemr et al. [51]. Accordingly, the value calculated by the crack width criterion was taken as the characteristic value for the two criteria for all specimens. It can be seen that the most conservative value for SLS was obtained from either the deflection or crack width criterion. For the demand at floor level, the service load for shear failure dominated beams (SSC-12, UHDCC-SSC-8, and UHDCC-SSC-12) is governed by the lower bound of deflection, whereas the others are governed by the crack width. For the beam at the roof level, the deflection limit is L/180 (L is the span) [16,52]. Then, all the beams' SLS capacities were governed by the maximum crack width, except for UHDCC-SSC-12. Note: ∆ is the mid-span deflection; ε s is the service strain in BFRP bar, set to 0.002; w m is the maximum crack width, set to 0.5 mm in SSC; and P m and P s are the maximum load and service load, respectively.
In general, the service load increases with the growth in the reinforcement ratio for both SSC beams and UHDC-SSC beams. At a fixed reinforcement ratio, the UHDCC layer efficiently contributes to the service load, as expected. The enhancement ratios of service load in UHDCC-SSC-6, UHDCC-SSC-8, and UHDCC-SSC-12 were 281% for roof level, 292% (332% for roof level), and 111% (36.7% for roof level) compared to the counterpart SSC beams, respectively. The percentage of service load to peak load also increased. Although the ratio is still below the ratio in reinforced concrete (RC) structures, the service load is enhanced without changing the beam dimensions or the FRP reinforcement ratio. This exhibits another main benefit of the UHDCC layer. Figure 14a illustrates the influence of UHDCC and the reinforcement ratio on the ultimate load bearing capacity of beams. In Figure 14a, the SSC-UHDCC-plain beam provides as much load capacity as SSC-6. In other words, due to the high tensile strain capacity and tensile stress of UHDCC, the 60 mm plain, UHDCC layer contributes a longitudinal BFRP bar of the reinforcement ratio as 0.17%. Therefore, compared with SSC-6 and SSC-8 beams, SSC-UHDCC-6 and SSC-UHDCC-8 increased their ultimate load capacity by 192% kN and 34% kN, respectively, as shown in Table 7. Unfortunately, both SSC-12 and SSC-UHDCC-12 suffered shear failure instead of flexural failure. The premature shear failure led to insufficient use of the UHDCC and BFRP bar. Consequently, a minor difference was found in the shape of the load-deflection curve. The ultimate load capacity only decreased by 8%. All the SSC-UHDCC beams experienced larger deflection than SSC beams at ultimate load (see Figure 14b). The UHDCC layer had a much more significant effect on the deformability of beam than on the ultimate load capacity. The enhancement ratios of the deflection at ultimate load for SSC-UHDCC-6, 8, and 12 beams were 437%, 45%, and 36%, respectively. Similar to the situation in load capacity, tremendous improvement in deformability was achieved for the less-reinforced beam (SSC-6 vs. SSC-UHDCC-6).
Ultimate Load Capacity and Corresponding Deflection in Under-Reinforcement Failure Mode
In summary, three characteristic loads and deformation are summarized in Table 7 for evaluating the effect of UHDCC layer. In general, the UHDCC layer not only serves as additional FRP bars in force strengthening, but also further improves the deformability of the SSC beams with the same equivalent reinforcement ratio.
Beams incorporated with UHDCC layers exhibited much better deformability. This brings up a question: How can the UHDCC layer enhance the deformability of a composite beam without changing the rupture elongation of the BFRP bar? To answer this, further analysis is conducted on the crack pattern of SCC and UHDCC matrix and the strain distribution in BFRP and UHDCC. Note: The number in parentheses refers to the enhancement ratio of SCC-UHDCC-* specimen to its counterpart SCC-* specimen.
Crack Pattern
A more detailed picture of cracking development in the SSC part of all the beams except SSC-6 was captured using the DIC method. The crack width was calculated by the method proposed by Hu and Wu [53]. The crack in SSC-6 emerged once the load was added and the crack width developed so quickly that the camera failed to capture the precise strain field using the DIC method, so only the biggest crack width of SSC-6 was obtained. Figure 15 illustrates the development of crack width in the SSC part during loading. The specific crack width and crack number in the SSC part are presented in Table 8. The number of cracks in the SCC part of the SSC-UHDCC beams significantly increased compared to the SSC beams, thus leading to fine cracks. The crack numbers in the UHDCC layer were at least one order of magnitude more than those in the adjacent SSC parts. As shown in Figure 10, one crack in SSC dispersed into more than 10 fine cracks in UHDCC. This indicates that the crack width in UHDCC is far smaller than in the adjacent SSC part, which was already smaller than that in SSC beams. In both SSC beams and SSC-UHDCC beams, the flexural crack width decreased with the increase in reinforcement ratio. When the SSC-UHDCC-plain beam was in the ultimate state, although there were 54 cracks in the UHDCC layer, there were only two cracks in the SSC part, close to that of SSC-6. This implies that the UHDCC layer without any longitudinal reinforcement does not trigger multiple cracks in the SSC part.
In general, the combination of UHDCC and FRP has enormous advantages in crack width control compared with beams solely reinforced with FRP bar or UHDCC. Crack number and width are highly related to the magnitude of the combined reinforcement ratio, which agrees with logical crack width control.
Small crack width and crack spacing can alleviate strain (also stress) fluctuation along longitudinal reinforcement. A sketch is presented in Figure 16 to illustrate the difference assuming two BFRP bars (red solid line) are embedded in the SSC and UHDCC layer in the PBM zone. In NC, the strain of the reinforcement near the crack is obviously larger than between adjacent cracks, since all the tension contribution is lost at the crack mouth. In UHDCC, the crack width and the crack spacing are much smaller, and the tension contribution of UHDCC still exists at the crack mouth; therefore, the strain fluctuation is smoothed. This explains why the SSC-UHDCC combination can effectively enhance the deformability of a beam. To quantitatively verify this point, we completed an analysis based on experimentally obtained strain fluctuation in the following section.
Strain in the PBM Zone
The strain of SCC and UHDCC was extracted from the results of DIC measurement. Figure 17 compares the strain distribution at the level of longitudinal bars of various beams at their ultimate limit state. The area of interest (i.e., the PBM zone of the beam) and the strain direction are illustrated in Figure 17a. The experimental analysis showed the strain distributions in the UHDCC layers had more peaks as evenly distributed cracks in the UHDCC. With the interaction with the FRP bars embedded in the UHDCC layer, the fluctuation was further smoothed, especially with a higher reinforcement ratio. The smooth strain distribution in the concrete reflects the extension in FRP bars fully generated in the PBM zone. Figure 17b compares the strain distributions for two beams at the same moment. The average strain was 0.00129 for SSC-6 and 0.0146 for SSC-UHDCC-6. As the cracked UHDCC layer can still contribute tension load, the larger strain value in the UHDCC layer could alleviate the tension in the FRP bars at the same load capacity. Therefore, this could prolong the function of FRP bars and enhance the flexural capacity.
For further demonstration, Figure 18 illustrates the strain in SSC near the interface and in UHDCC at the location of BFRP bar at a same moment. It is of interest that the local maximum values of SSC strain were larger and the local minimal values of SCC strain were smaller than those of UHDCC at the same x coordinate. This demonstrates the estimate in Figure 16. Based on the strain distribution in Figure 18, the average SCC strain near the interface was calculated to be 0.020, whereas the UHDCC strain was 0.024. The magnitude of strain in UHDCC, which is closer to the tensile side of beam, should be larger than in SCC. We found that the strain in the SSC tensile section in SSC-UHDCC-6 was much larger than in the SSC-6 beam, which further proves that introducing the UHDCC layer can effectively improve the deflection. To some extent, the strain distributions of SSC and UHDCC in Figures 17 and 18 reflect the strain in the BFRP bar. Due to the brittleness of NC, the tension in the cracked cross-sections of the SSC beam was totally provided by BFRP bars, whereas the tension between adjacent cracks was provided by both BFRP bars and concrete. Therefore, single and large strain fluctuations were observed in the PBM zone of SSC beams in the ultimate state. In contrast, UHDCC can smear one big crack into multiple fine cracks and maintain its tension contribution during cracking; therefore, strain fluctuations are effectively smoothed.
In summary, for the composite system proposed in this article, the UHDCC layer can act as longitudinal reinforcement. The flexural capacity (both serviceability limits and ultimate state) of the structural system is enhanced due to the tension contribution of UHDCC, and the deformability is improved due to the effective control of crack width by UHDCC, which also prolongs FRP bars' life before rupture.
Conclusions
This paper proposes a novel composite structural member composed of sea water and sea sand concrete, UHDCC, and a BFRP bar. UHDCC, a kind of ECC with large rupture strain, was designed to replace the SSC tensile cover to improve the cracking control and bond performance, and the flexural performance in BFRP bar-reinforced SSC beams. The test results demonstrated that the performance, including characteristic load capacity and deformability, crack pattern, and stressing in BFRP bars, could all be improved by introducing UHDCC to the tensile zone. The following conclusions were drawn from this investigation: (1) With assistance from the DIC sensing method, we discovered that, due to the supremely high rupture strain of UHDCC, the new composite beam can produce multiple and fine cracks in the tensile zone until failure. Its flexural crack width decreases in the concrete section and crack distribution is improved accordingly. Thus, UHDCC limits the extension of cracks and improves the deformability.
(2) The cracking load, service load, ultimate load, and corresponding deflection of UHDCC-SSC composite beams are higher than SSC beams with the same reinforcement ratio. The enhancements in cracking load, service load, and deformability were significant, without changing beam dimension or the reinforcement ratio. This demonstrates the contribution of the UHDCC layer.
(3) The multi-cracking characteristic of UHDCC helps to smooth the strain fluctuation in the FRP bar, thus improving the tension efficiency of FRP bars in the case of the beam experiencing flexural failure mechanism.
To summarize, the new composite beam has resolved the issue of efficient use of FRP bars as tensile reinforcements and the feasible use of FRP bars as sensors. This is largely due to the change in the cracking characteristics and bond performance. Further study is needed on the cracking mechanism and the association with the flexural deformability of FRP bar-reinforced (with optic fibers) concrete beams. | 8,743 | 2019-02-01T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Materials Science"
] |
Omega Index of Line and Total Graphs
A derived graph is a graph obtained from a given graph according to some predetermined rules. Two of the most frequently used derived graphs are the line graph and the total graph. Calculating some properties of a derived graph helps to calculate the same properties of the original graph. For this reason, the relations between a graph and its derived graphs are always welcomed. A recently introduced graph index which also acts as a graph invariant called omega is used to obtain such relations for line and total graphs. As an illustrative exercise, omega values and the number of faces of the line and total graphs of some frequently used graph classes are calculated.
Introduction
Let G be a simple graph with V(G) � v 1 , v 2 , . . . , v n and E(G) � e 1 , e 2 , . . . , e m as the vertex and edge sets. n and m are called the order and size of G, respectively, and are the most important graph parameters. If e � uv ∈ E(G), we say that u and v are adjacent and e is incident to u and v. e number of edges incident to a vertex v is called the degree of v and denoted by d G v, or by dv if there is no confusion. A vertex of degree 1 is named as the pendant vertex. e set of degrees, of all vertices where Δ is the biggest vertex degree in G, is called the degree sequence of the graph. A graph which is connected and has no cycles is called a tree. A graph is called acyclic, unicyclic, bicyclic, tricyclic, etc. according to the number of cycles it has as 0, 1, 2, 3, etc. As usual, the path, cycle, star, complete, complete bipartite, and tadpole graphs are denoted by P n , C n , S n , K n , K r,s , and T r,s , respectively. For other graph theoretical notions used in this paper, see, e.g., [1][2][3].
Given a graph G, the line graph L(G) of G is the graph whose vertex set is E(G) with two vertices of L(G) being adjacent iff corresponding edges in G are adjacent. For some applications of the line graph, see, e.g., [4,5]. Similarly, the total graph T(G) of G is the graph whose vertex set is V(G) ∪ E(G) with two vertices of T(G) being adjacent iff the corresponding elements of G are either adjacent or incident. Line graphs and total graphs are two examples of derived graphs. Minimal doubly resolving sets and strong metric dimension of the layer sun graph and the line graph of this graph are calculated in [4]. e classical meanness property of some graphs based on line graphs was considered in [5]. For some recent applications of the total graphs, see, e.g., [6][7][8].
According to definitions, the degree sequences of the line and total graphs are (2)
Omega Index and Fundamentals
In this paper, we study the line and total graphs in relation with omega index and the number of faces known as the cyclomatic number. Omega index is an additive quantity defined for a given degree sequence (1) or for a graph with It is shown that Ω(G) � 2(m − n) and therefore it is always an even number. It is shown that the omega characteristic gives us very powerful information about cyclicness and connectedness of all the realizations of a given degree sequence (see, e.g., [9]). In brief, it is shown that all realizations of a degree sequence D with Ω(D) ≤ − 4 must be disconnected; each connected realization of a degree sequence D with Ω(D) � −2 must be a tree; each connected realization of a degree sequence D with Ω(D) � 0 must be a unicyclic graph; each connected realization of a degree sequence D with Ω(D) � 2 must be a bicyclic graph, etc. Also, the number of faces of a graph or all the realizations of a given degree sequence is formulated as where c(G) is the number of components of G. For more properties of the omega index, see [10,11]. e effect of edge and vertex deletion on the omega index is studied in [9]. Next, we obtain the number of pendant vertices of a caterpillar tree which consists of a main path so that all vertices are having maximum distance 1 from the path.
where d G i is the degree of v i .
Proof. By counting, v 1 has only one nonpendant neighbor and hence d G 1 − 1 pendant neighbors, v k has only one nonpendant neighbor and hence d G k − 1 pendant neighbors, and similarly, for i � 2, 3, . . . , k − 1, the vertex v i has d G i − 2 pendant neighbors. erefore, thus giving the result.
Corollary 1. For any tree T, we have
For a caterpillar tree, the degree sequence of its line graph can be stated more deterministically.
Theorem 2. Let T be a caterpillar tree. Let the degrees of the nonpendant vertices of
Proof. By eorem 5 in [11], there exists a complete graph Proof. By definition, we have □ In the following result, we calculate the omega index of the line graph of G when G is k-cyclic by means of triangular numbers T n � n(n + 1)/2.
e following results are about the omega index of the line graph.
□ Now, we obtain some results on the omega index of the total graphs.
Proof. It follows by the definition of omega index.
In [11], the number of faces of the line graph of a tree T was given by where T n � n(n + 1)/2 is the n − th triangular number. In [?], this number for a tricyclic graph G was given by ese suggest the following generalization.
Theorem 4. Let G be a simple, connected, and k − cyclic graph with degree sequence (1). e number of faces of the line graph L(G) is
Proof. For acyclic part of G, the formula is given in equation (20). For each t − cycle C in G, L(G) has another t-cycle C ′ formed by joining the midpoints of the edges of C. Hence, the number of cycles in G must be added to Δ i�3 a i T i−2 giving the result. □ Inverse problems in mathematics are quite important due to their applications. In graph theory, the inverse problem is the one which deals with finding the values of a given topological graph index. Here, we solve a similar problem for the number of faces of the line graph.
Theorem 5. Let G be a connected graph. en, r(L(G)) can take any positive integer value.
Proof. By eorem 4, we have equation (22) for a simple, connected, k-cyclic graph G.
Corollary 8. Let T be a tree with no vertices of degree 3. en, r(L(T)) can take all positive integers except
(24) As a i ≥ 0 are integers, the result follows.
Corollary 10. We have
(26) e following variation of this result has useful applications related to cyclicness.
Corollary 11. We have
By Corollary 11, we have the following cases:
Omega and r of the Line Graphs of Some Special Graphs
Now, we consider the omega and r values of some frequently used graph classes. First, we give a new proof of the fact that the line graph of P n is P n−1 .
Lemma 2.
We have L P n � P n−1 .
Omega and r of the Total Graphs of Some Special Graphs
In this section, we calculate the omega and r values of some frequently used graphs. We first give an important property. Proof. Let v ∈ V(G). Let e be an edge incident to v. As G is connected, there is at least one adjacent vertex say u, to v. In the total graph T(G), the vertex v will be adjacent to u and e implying the result.
□ e proof alternatively follows from the fact that DS(T(G)) consists of integers in the form of either 2d i or e j + 2, where d i , e j ≥ 1.
Next, we give the relation between omega of G and omega of T(G).
Theorem 6. For a connected graph G, we have
Proof. Recall that DS(T(G)) consists of a i times 2d i for every v i ∈ V(G) and de j + 2's for every e j � u j v j ∈ E(G). As de j � du j + dv j − 2, we can deduce that DS(T(G)) consists of a i times 2d i for every v i ∈ V(G) and du j + dv j 's for every e j � u j v j ∈ E(G). Hence, � Ω(G) + M 1 (G).
Finally, we give the following result for the omega indices of the total graphs of some frequently used graph classes: e omega index of the total graphs of some well-known graph classes is as follows: Ω T P n � 4n − 8, Ω T C n � 4n, Ω T S n � (n + 1)(n − 2), Ω T K n � n(n + 1)(n − 2), Ω T K r,s � (r + s)(rs − 2) + 2rs, Ω T T r,s � 4(r + s) + 2. (37)
Conclusion
Derived graphs are graphs obtained from a given graph according to some rules. In this paper, two of the most frequently used derived graphs, the line and total graphs, are studied. Calculating some properties of a derived graph helps to calculate the same properties of the original graph.
Here, by means of omega index and several results in recent papers, new relations for line and total graphs are obtained. Also, omega values and the number of faces of the line and total graphs of some frequently used graph classes are calculated. In the future works, similar ideas can be applied to establish several relations for other derived graphs.
Data Availability
No data were used to support this study.
Conflicts of Interest
e authors declare that they have no conflicts of interest. | 2,410.8 | 2021-09-27T00:00:00.000 | [
"Mathematics"
] |
Building clone-consistent ecosystem models
Many ecological studies employ general models that can feature an arbitrary number of populations. A critical requirement imposed on such models is clone consistency: If the individuals from two populations are indistinguishable, joining these populations into one shall not affect the outcome of the model. Otherwise a model produces different outcomes for the same scenario. Using functional analysis, we comprehensively characterize all clone-consistent models: We prove that they are necessarily composed from basic building blocks, namely linear combinations of parameters and abundances. These strong constraints enable a straightforward validation of model consistency. Although clone consistency can always be achieved with sufficient assumptions, we argue that it is important to explicitly name and consider the assumptions made: They may not be justified or limit the applicability of models and the generality of the results obtained with them. Moreover, our insights facilitate building new clone-consistent models, which we illustrate for a data-driven model of microbial communities. Finally, our insights point to new relevant forms of general models for theoretical ecology. Our framework thus provides a systematic way of comprehending ecological models, which can guide a wide range of studies.
Response to Reviewer 2
A basic requirement of any mathematical model is that it should provide consistent results if applied to the same situation. While this would have been of intuitive to the founders of theroretical ecology, it seems to be increasingly forgotten in more recent models that can accomodate a flexible number of species. In those models one could arbitrarily split a species into two model species and would expect the same result as if they were represented by one model species. But not all current models meet this essential requirement. The authors present a novel framework by which this requirement can be met in practice.
Many readers will feel that they know and use these concepts already, but a large enough fraction of modellers seem to be unaware of it tha this publication is valuable.
The manuscript leaves very little to criticise in terms of content or presentation and would be publishable in the present form.
We thank the reviewer for his positive and encouraging assessment. Of course this model violates the clone consistency conditions. The authors would argue that I have made a mistake and should have used loss terms of the form (∑ ) instead of 2 . This is in principle a valid point and many readers will profit from seeing it. However, for the sake of argument I want to defend the 2 terms. I stated above that I want to model stably coexisting populations. We know that asymptotically stable coexistence is only possible if the different species occupy distinct niches (modeled in this case by the square terms). In this case the clone consistenty argument formally does not apply. I cannot argue that I can arbitralily split a population in two and represent it by two model variables, because the resulting populations would occupy the same niche and are hence outside the scope of my model. The takeaway from is that one cannot decide whether a model is good or bad by looking at the equations alone, because whether clone consistency is even a criterion depends on the detailed premises of the model. The authors are right when they point out it would be bad to violate clone consistency accidentally without being aware of the additional assumptions that this implies. But, on the other hand, I would be equally bad to enforce clone consistency when it is at odds with the explicitely states assumptions. Doing so would prevent us from exploring some very valuable mathematical models.
The wording in some places suggests that the authors are aware of this point, but I fear that it would be lost on the casual or less informed reader will probably miss this. It would be good to mention it prominenently (e.g. when the authors present their introductory example) and perhaps hint in the abstract and/or author summary that sometimes clone inconsistecy is intentional and very much desired.
We agree with the reviewer that it is important prevent an overzealous application of our rules. We would like to emphasize that we already devote an entire subsection of Implications (ll. 362 f) to discuss the limitations of our approach and how it can point to assumptions such as the distinct niches. Indeed, the reviewer's first example is equivalent to the one discussed in Eq. 27 (formerly Eq. 26). As for the second example (May's random matrix model), we concur with the reviewer that applying our work in the suggested way would be profoundly wrong and pose a severe misinterpretation of our work. However, we think that addressing such an example in the manuscript would require extensive elaboration and could easily become more confusing than helpful, in particular since the model does conform with our criteria and the presumed clone inconsistency arises in a different way.
Following this reviewer suggestion, we now reference our discussion of this issue more clearly early in the manuscript, specifically we now write in the abstract: "Although clone consistency can always be achieved with sufficient assumptions, we argue that it is important to explicitly name and consider the assumptions made: They may not be justified or limit the applicability of models and the generality of the results obtained with them. " And in the author summary: "We further discuss that clone inconsistency, which occurs in several prominent models, reflects strong, often implicit, assumptions and it is important to check whether these are justified. "
Response to Reviewer 3
I consider the constructive framework developed by the authors and its translation into instructions on how to build consistent models, a valuable contribution to the field of ecological population models. However, I have some major concerns regarding the presentation of the work, its lack of context with respect to previous results in this area and the clarity and completeness of the interpretation of the results in the context of nonlinear dynamical systems. These major issues would need to be addressed for the article to proceed. They are detailed in the following together with several minor points. Numbers in parenthesis refer to the line number in the manuscript.
We thank the reviewer for their assessment and the detailed constructive criticism. We have addressed all concerns, as described in detail below.
Issue 1:
While the constructive framework appears to be plausible, the soundness of its justification suffers from unclear language, making it difficult to follow. I advise the authors to get feedback from colleagues with little familiarity with the work or even to work with a writing coach to improve the flow and readability of the text. The language should also be revised in terms of scientific writing standards. For example, I do not believe 'inevitable' to be the proper vocabulary to describe properties of mathematical objects. Furthermore, the authors should decide on a target audience and reflect this in their presentation. The presentation is uneven in terms of the required background knowledge in order to comprehend the statements.
We agree that this work needs to be easily accessible. The manuscript was already reviewed by several colleagues and others. Following the reviewer's suggestion, we obtained another round of feedback from an uninitiated colleague and made numerous changes to make the text easier to follow.
It is intentional that Methods is more mathematically challenging than the main manuscript. Our intent is that everybody who may want to apply our results can understand the main manuscript (without Methods). In contrast, Methods contains the mathematical details required to arrive at these results, which require more mathematical expertise. At the end of the introduction (ll. 112 f), we now explicitly state that Methods is intended for the mathematically inclined reader.
We removed two instances of the word inevitable (ll. 86 and 432) since they did not add much. However, we kept it in: "Thus, the model violates our consistency criteria and the observed clone inconsistency ( Fig. 1) is inevitable. " (ll. 228 f). Here, inevitable does not describe a mathematical property, but concisely stresses the strong connection between a modelling "mistake" and its consequences: A model not complying with our criteria will certainly exhibit inconsistencies like in Fig. 1. We could replace inevitable with synonyms like inescapable or unavoidable, but we do not consider these any better.
Issue 2:
While the authors provide accessible guidelines on how to construct a consistent model and how to check existent models for consistency, the manuscript fails to state and clarify the mathematical properties that allow models of the proposed form to be consistent. In the authors summary the authors claim to have 'investigated the mathematical properties of clone-consistent models', but I do not see where in the manuscript these mathematical properties are described. The authors do provide criteria that impact functions should fulfil, but never explain why they do so or actually show that they do.
As we understand the reviewer's concerns, we devoted entire sections to them, which the reviewer is clearly aware of, going by their accurate summary of our results. Here is our direct answer to the reviewer's concerns as we understand them: The mathematical properties of clone-consistent models are: • Impact functions (i.e., elementary ingredients of clone-consistent models) form a functional algebra ( ) generated by linear combinations and constant functions. If and only if a function is in that algebra it fulfils our criteria.
• Models of population growth additionally must be of the form described by Eqs. 10 and 11.
Consequences and more accessible formulations of these properties are described throughout the section: What models are clone-consistent? Being built from linear combinations and constant functions (via addition, multiplication, and function application or taking a limit, respectively) is what allows impact functions (or the models using them) to be consistent. We made changes throughout the section The functional algebra of impact functions to make these connections clearer.
In addition, how the properties of the impact functions translate to the properties of the whole model and especially the solutions of the associated differential equations is left unclear. Furthermore, the authors state not to be able to make statements about the cause of inconsistency (line 392). I am wondering how the cause of inconsistency is connected to the fact that a nonlinear differential equations is unlikely to show additivity in the systems state variables (in the sense that ( ) + ( ) is not equal to ( + )) unless it is built in a way that is has this property? Just as the actual cause of the inconsistency in the motivating example on lions is not that logarithms are not allowed in impact functions but that these system equations are not additive in We do not expect that clone consistency or the use of impact functions allows for drawing general conclusions about the respective models, including about their dynamics. For example, conclusions such as "Cloneconsistent models are less likely to exhibit oscillations" are unlikely to follow from properties of impact functions alone. Even if such conclusions were possible, they would not be straightforward to draw and go beyond the goals of our work. We now elaborate on this point in more detail in the first paragraph of General implications (ll. 430 f). In particular, we explicitly discuss that clone-consistent models can still be nonlinear: "Not only can a non-linear function still be applied to the linear combination (e.g., in Eq. 6), but any existing non-linear model can be made clone-consistent by sufficiently strong assumptions. The central question is whether these assumptions are biologically justified. "
Issue 3:
The manuscript fails to address how the findings relate to previous research in the area of nonlinear differential equation model reduction based on the aggregation of several state variables into a single one. If the goal is to preserve the system dynamics exactly this is for example termed 'perfect aggregation' in the field of ecological modelling, or 'exact lumping' in the field of biochemical reaction network modelling (just to mention two out of the many fields). The aggregation of variables via sums, as also considered by the authors, is a special case of what is called aggregation by a linear map or linear lumping in the context of the aforementioned fields, respectively. The following references provide an overview on this topic but are by no means exhaustive: Tóth, J., Li, G., Rabitz, H., & Tomlin, A. S. (1997). The effect of lumping and expanding on kinetic differential equations. SIAM Journal on Applied Mathematics, 57 (6), 1531-1556 and references therein, especially Iwasa, Y., Andreasen, V., & Levin, S. (1987). Aggregation in model ecosystems. I. Perfect aggregation. Ecological Modelling,, 287-302. [URL] and references therein.
I am aware that the framework proposed by the authors goes beyond the aggregation of identical species to also include consistency regarding the aggregation of identical modes of interaction only, but I still believe that the manuscript would profit from a discussion of the connection to the above-mentioned work.
We thank the reviewer for pointing out this prior work. These works consider the conditions under which models can be simplified when some dynamical variables are redundant (or near-redundant). A typical case for this is indeed that a model is clone-consistent and there are actual clones in the model. In contrast, our work considers the properties a model must have for this situation to arise in the first place. Therefore, these works and ours are related but not special cases of each other. An illustrative difference is that parameters that describe how populations affect each other play a crucial role in our work but not in the references mentioned.
Nevertheless, we agree that it is helpful to mention this previous work and explain the differences to our approach. In the revised manuscript, we thus describe the relation between our and the suggested works in the introduction, writing: "Finally, if a model is clone-consistent and clones actually exist, it can be simplified; this is called aggregating or lumping [citation of Tóth et al. and Iwasa et al.]. "
Examples Lack of reader guidance:
In general, the manuscript in several places lacks paragraphs to guide the reader. With that, I refer to paragraphs especially in the beginning of a section that prepare the reader for what results to expect in this section and which plot out the strategy on how the authors will get there. A general 'the paper is structured as follows …' paragraph mentioning what to find in which section would also be help-
ful. For example, the section 'What models are clone consistent' right away starts out with impact functions without any information on how these functions relate to the models that the authors set out to find. Other examples are 'The Functional Algebra of Impact Functions' (490) and 'Deriving a New Model for UTI strains -the Legwork' (598).
We agree that more guidance for the reader is helpful. Following the reviewer's suggestion, we rewrote the last paragraph of the introduction (ll. 103 f) to guide the reader along the manuscript's structure and added short introductions to many sections (ll. 116 ff, 273 ff, 512 ff, 635 f, 652 f) including those mentioned by the reviewer.
Figure 3 cannot be understood since needed information is only given later in the manuscript. Upper part of the figure: The reader cannot understand why question 5 is answered the way it is, because the checked model is only introduced and discussed later in the manuscript. This is especially a problem for question 5, since the answers to this question cannot be interpreted without the information given later in the manuscript. The answers to question 5 are confusing as well since it is not clear whether the answer is yes (no) or depending on the application.
We thank the reviewer for pointing this out. The purpose of Figure 3 is to explain the general procedure for validating clone consistency. To this end, we show a specific example with arbitrarily chosen answers to the binary questions (3, 5, and 6). We did not intend that the reader understands the specific example answers we used for Question 5 at this point and do not consider this necessary to understand the general scheme. We now label the answers to Question 5 as "arbitrary example" to make this clearer.
Finally, when the question of ecological plausibility is dealt with in the manuscript, the connection to question 5 (or Figure 3) is not indicated (line 294 ff).
Thanks for pointing this out. We now refer back to Figure 3 at this point (l. 325 f).
The authors also might want to reconsider the relevance of question 4, since later in the manuscript (line 340) it is stated that rewriting an equation in this way is always possible.
Good point. We changed Points 4 and 5 of the recipe to: "If not, cast them into this form by choosing parameters to be zero. " and "Are these choices biologically justified? (Depends on application.)" Lower part of Figure 3: The meaning of the footnote within the figure is unclear.
We tried to clarify this footnote, which now reads: "This part only applies to models of population growth as in Eq. 10".
In question 2 omega which was previously defined as a basic impact function is used as symbol for a function of a basic impact function.
We fixed this and now use in analogy to Eqs. 6 and 13.
The section 'Implications contains' contemplation on various aspects of which most content wise fit in neither of the subsections titled 'Checking Clone Consistency to Reveal Implicit Assumptions' and 'General Implications for Model Design'.
We restructured this section and improved the subsection titles.
Ambiguities making it hard for the reader to grasp the essential concept:
The models described by equations (2)
and (3) cannot be the 'same model' (line 68 and caption of Figure 1) since one of them has two state variables while the other on has three.
We now use general model instead of model to make clear that this refers to Eq. 1 (in which the number of populations is a parameter) and not a specific realisation of this general model (as described by Eqs. 2 and 3). We refrained from removing this formulation altogether since the fact that the simulations are based on the same general model is crucial here and this wording clearly communicates this.
Incomprehensive paragraphs:
The paragraph starting from line 217 on building models with aggregated phenomenological observables: We rewrote and restructured this entire paragraph, addressing all the criticisms of the reviewer, in detail:
• 223: 'each of the experimentally determined interaction parameter'. Not clear what is meant with interaction parameters
here. Each single measurement? Interaction parameter seems to have a different meaning here than in line 42.
We now clarify that is "the number of experimental interaction observables, i.e., the number of measurements per (ordered) pair of populations" (ll. 243 f).
• What is the reasoning behind the general Ansatz in equation (13)?
We now start off with an even more general ansatz, and the former Eq. 13 (now Eq. 14) becomes a specific case. We elaborate that this ansatz arises from "[c]ombining Eqs. 10, 11, 6, and 5" (l. 240).
The section on how to determine parameters and functions (224ff) is unclear.
We added examples to better explain this process (ll. 254 f) and also expect that the way we restructured the entire paragraph will help the reader to better connect this part to the context.
• 231: What are building blocks in this context?
We replaced building blocks with basic impact functions (ll. 243 f) to avoid this confusion. We also expect that the restructuring of the paragraph helps to clarify this.
• 234: 'Finally, for some applications, a sum or more complex way to combine the basic impact functions may be appropriate (as opposed to the product used in Eq. 13). In New Model, we provide an example for this approach. ' What approach? The product version or the sum?
This refers to the entire approach outlined in the paragraph (though a product is used in this particular case). We now explicitly refer to the "approach" (l. 260). Further, the restructuring of the paragraph avoids that this reference is mistaken.
Non-causal relations between sentences that imply such relations:
Line 47: 'These new experimental scenarios often call for new ecological models that can incorporate the respective data. One reason for this is that there is no single answer as to how multi-parameter o higher-order interactions should be measured [3,6,16,20,23]. ' We now clarified this sentence, writing in lines 49 ff: "These new experimental scenarios call for new ecological models that can incorporate the respective data. Existing models are often not suitable here since there is no uniform answer as to how multi-parameter or higher-order interactions should be measured [citations]. " Overall meaningless sentences like: 'Moreover, clone-inconsistent models are diverse for the same reason that non-linear functions are. ' (394) We expanded the respective paragraph and instead of this sentence, we now write: "For illustration, the diversity of clone-inconsistent models may be compared to that of all numbers not divisible by seven. " We do not consider this sentence meaningless, as it can help readers to appreciate the diversity of clone-inconsistent models and why it is difficult (if not impossible) to make general statements about them.
It is not clear to me why the authors come up with a new label
(clone consistency) for something that has been described before. The authors state mere similarity to earlier descriptions of the problem (line 59). If there is indeed a difference between earlier descriptions of the problem and 'clone consistency', I would like to request a discussion of those differences by the authors to justify new wording.
From the existing labels we know (all cited in the introduction), we consider invariance under relabeling to be the best candidate. However, this label is not established in the field (it is only used by Murrel et al (2004) as far as we know) and we considered it too wordy and grammatically inflexible for the heavy usage required in our manuscript. For example, instead of clone inconsistency we would have to write something like lack of invariance under relabeling or lack of relabeling invariance.
Regarding the other alternatives: • Kuang (2002) and Arditi and Michalski (1996) use invariant under identification of identical species, which we consider too cumbersome. This is corroborated by both papers referring to it as Criterion 1 instead.
• Drossel et al (2001) uses invariance under aggregation of identical species. Like the above, we consider this too cumbersome.
• Rossberg (2013), van Leeuwen et al. (2013), and Vallina (2014) use "common-sense" criterion, which is cumbersome, not descriptive, and bears the risk of coming off as arrogant in the context of our manuscript.
Hence, we prefer to keep the label clone consistency.
2. It is not clear if the proposed framework can also be used for models that model the mechanism of interaction, e.g. for consumer resource models. If so, how does the proposed framework integrate with commonly used impact functions?
While most of our examples focus on models where all the dynamical variables are populations, we never restrict ourselves to this case. In Implications, we discuss the application of our framework to consumerresource models and similar (ll. 412 f) and how impact functions relate to mechanisms (ll. 384 ff, 424 ff, and 445 ff.). The extension of our framework in this direction is straightforward: one simply uses impact functions to describe the impact of an ecosystem on a resource instead of a population. We now included references to consumer-resource models and similar at appropriate points in the manuscript (ll. 128 f and 210). Also see the next point.
The authors should unambiguously state in the introduction the latest for which types of models this framework can be used, so readers can decide early on whether the paper is relevant for their research.
To clarify this point, we now write in the introduction (ll. 112 f): "we discuss how our framework applies to all models involving multiple populations". We understand that this is rather vague. However, the more precise answer "every model that contains impact functions" is not understandable to the reader at this point. Also, while there are models to which our framework applies in general, but for which there is no danger of clone inconsistency (e.g., mechanistic models featuring interaction mechanisms that are exclusive to a pair of species/resources), we again cannot reasonably discern these cases without explaining central parts of our framework. We agree with the reviewer and that was not the intended meaning. To avoid this misunderstanding, we now write: "One sanity check for such models is to virtually split a population […]" (ll. 21 f).
The authors should clarify how their results for the system translate to properties of fixed points
We presume that this refers to the case study. As we elaborate at the end of the Supplement S2, the fixed points of the two models are equal, if we assume that the growth term does not become zero and < 1, which is the predominant case. If > 1, things become complicated due to case distinctions in the existing model (Eq. 21) though a certain similarity of fixed points can be expected. We now briefly reference this in the main manuscript (ll. 358 f).
We do not expect a more detailed analysis of this point to be worthwhile, in particular since it is not central to our manuscript. Also see our response on the general possibility of deducing dynamical properties from clone (in)consistency above.
As indicated to the comment on reader guidance, the methods
part would also benefit from more supportive text to help the reader understand what to expect from the section.
We now explicitly refer to the subsections of Methods in the main manuscript to point the reader directly to the right subsection and briefly describe the contents of the Methods section at the end of the introduction (ll. 113 f). With the exception of The functional algebra of impact functions and Proof: generates (which are clearly linked), the subsections of Methods have no overarching narrative. Thus, an introductory paragraph would mostly contain subsection titles. Instead, we reworked the first paragraph of each subsection in Methods to better guide the reader.
The subsection about notation indicates a comprehensive symbolic language that is used rigorously. This is not the case e.g.: 1. the use of mathfrak symbol sequences in Definition 1.
We now clarify that the notation overview does not fully extend to Proof: generates (ll. 509 f), as it requires a considerable amount of special notation that is only relevant to that subsection and would only confuse readers who do not engage with it. | 6,239.8 | 2019-08-05T00:00:00.000 | [
"Economics"
] |
Science and Religion from the Perspective of Post-Modernism---Knowledge System and Religious System
All knowledge belongs to narrative knowledge, and also local knowledge. Post-modernism can be defined as “elimination of grand narrative”, so in the world of post-modernism, validity of knowledge has become an issue. Local knowledge emphasizes that generation and justification of knowledge has its local features, so it has no way to surpass the dilemma of cultural relativism. Theoretically, in this article, the author believes that the standpoint of “moderate relativism” is advisable, which, not only sufficiently respects cultural diversification and particularity of each nation, but emphasizes possibility and necessity of culturally mutual communication of all nations, which attains mutual understanding and recognition during participation and share.
Lyotard: knowledge status of Post-modern society
The French philosopher Lyotard, who is well-known as "Father of Post-modernism" gave the sub-title to his masterpiece "Post-Modern State" "Report on Knowledge".In the introduction, he declared at the very beginning: The target of this book is the knowledge status in the most developed society, and I decide to apply "Post-modern" to nominate this status.This word is currently extremely popular among sociologists and critics in the American continent.Human being apply it to denote the cultural context for the time being: multiple revolutions since the end of the 19 th century, and game rules from science, literature to art have all be replaced.This book tries to place the above revolutions into the scope of narrative crisis for review.
From this quotation, we can at least obtain three pieces of information: (1) "Post-modern" is both a cultural context and a general description of knowledge status in the most developed society; (2) Science, literature and art are different language games, which respectively abide by their own game rules, and, furthermore, these game rules are in frequent change; (3) Due to changes of these rules, the "narrative crisis" happens in the Post-modern society.It is obvious that, thinking source of Post-modernism is closely related to the knowledge status of a developed society.Accurately speaking, what Lyotard cares about is the validity issue of our knowledge in the changing Post-modern society.
Validity of knowledge has become an issue, which highlights obvious differences of Post-modernism and Modernism in terms of knowledge characteristics.In ancient traditional society and the modern society, validity of knowledge was either regarded as natural or shielded.However, it is only in the post-modern society that validity of knowledge is highlighted as a keen issue.Max Weber and Habermas both mentioned the "legitimation crisis" of Late Capitalism, while Lyotard revealed cultural predicaments of Post-modern society from the particular perspective of validity of knowledge.In order to have a better understanding of Lyotard's description of knowledge condition of Post-modern society, we should at first straighten out the concept and classification of knowledge.
Validity of two varieties of knowledge
Knowledge is constructed by narration, but not any narration is knowledge.Only accurate or appropriate knowledge can be named "knowledge".Narration can be divided into several varieties, including indicative and descriptive narration, and also order, stipulative, imperative, and exclamatory narration, etc.Therefore, knowledge can be correspondingly classified into two varieties: scientific knowledge and narrative knowledge.Scientific knowledge is composed of by indicative and descriptive narration, and it has true and false differentiation in indicating and describing its targets.However, narrative knowledge is connected with tradition and culture, which altogether constitute a set of fixed pragmatic rules on social association.Lyotard borrowed the concept of language game by the later Wittgenstein, which believes that different narration or knowledge constitute language games of different varieties.Game is like a contest, and each speech is a different "trick", which reflects and stipulates different social and cultural contexts of speakers.Here, whether "game" or "narration", they are literally understood and applied, but not in the meaning of metaphor.
Whether scientific knowledge or narrative knowledge, they should both seek for a way that validates themselves, but the approach to seeking for this way is totally different.Validity standard of scientific knowledge roots in proof, falsification consensus, or appointment.Specifically speaking, there are two items in validity standard of scientific knowledge, namely, internal logic consistency and external experiential validity, which was the knowledge standard of logic positivism popular among western academic field in the early 1900s.Whether the Vienna school or Karl Raimund Popper school, they were both convinced about these two items.However, narrative knowledge seeks for its validity standard in its culture and tradition.Here, Lyotard's definition of narrative knowledge originated from description of original society by anthropology, and he took Karshnarvas who like to tell stories as an example: What we would like to talk about lies in folk narrative pragmatics of narration.For example, a Karshnarva who likes to tell a story always starts his narration in the same way, "The following is a story of …, which is the same as what I have always heard.Now I would like to take turns to tell as story to you, and please listen carefully."And his way of ending the story is also unalterable, "… the story ends by now.The one who tells the story to you is … (name of a Karshnarva), and the buckra who listens to the story is … (name of a Spanish or Portuguese)".This is a typical occasion of narration, and Lyotard analyzed it with great interest.Of course, we may make a little expansion on the analysis.An occasion is composed of by three factors: speaker, receiver and protagonists in a story ("denotation" in a language).The reason why the narrator can occupy the position of a speaker is that this story has been "heard" by him.And the narrator used to be in the position of a receiver, namely, validity of narrative knowledge originates from particular cultural heritage.This also implies that, by listening to this story, the receiver is also able to acquire similar authority, that is, validity of narrative knowledge is transferrable.Furthermore, there is one more important factor, namely, the speaker might at the same time be the protagonist of the story, who is denoted in the story, just because he has a name of Karshnarva, (which is a sign of clan consanguinity heritage), while the actual receiver (the buckra) at this time doesn't have this authority.Therefore, the key whether the actual receiver can obtain the authority of retelling (speaker) by listening to the story lies in whether the receiver recognizes validity of narrative knowledge.If the buckra receivers judge the story by the validity standard of scientific knowledge, and negate validity of narration, then they will actually lose the authority of acting as the speaker (retelling) in the tradition.If we regard language as a game which contains social relationships, then "A set of pragmatic rules that constitute social relationships will get transferred together with narration".
Here we are confronted with conflict of validity issues in two kinds of knowledge.From the angle of scientific knowledge, narrative knowledge is by no means a kind of knowledge, because these narrations have never been demonstrated.Through transferrable pragmatics, narrative knowledge gets itself trusted without debate and proposing of evidence.Scientific knowledge classifies it into a thought condition which is constituted by public opinions, convention, authority, prejudice, ignorance, and escapism, etc: barbarism, originality, lag, underdevelopment and dissimilation …, while from the angle of narrative knowledge, scientific knowledge is not only incapable of acting as validity standard of other knowledge, like overrated logical positivists, but has never discovered its validity standard.Whether the internal logic consistency or external experiential validity, they are continually animadverted and doubted even inside scientific philosophy.
Lyotard opposed to regarding scientific knowledge (primarily natural scientific knowledge) as the unique knowledge, and to the tendency of excluding narrative knowledge.He believed that these two kinds of knowledge were different, and they differed from each other in the respects of functions, expressing way, and validation methods, so we couldn't negate the status and effect of narrative knowledge just due to differences between these two.He also believed that scientific knowledge needs to validate itself in virtue of narrative knowledge, which enables scientific knowledge to obtain its deserved meaning and value.Scientific knowledge still needs to resort to narrative knowledge, tradition, culture and authority to confirm its validity.Both Kuhn and Feyerabend have revealed close association between science and culture of its era.In this respect, all knowledge belongs to narrative knowledge, and scientific knowledge is merely a special variety of narrative knowledge.As one kind of language games, development of new "trick" of scientific knowledge is just like correction of Kepler on the planet orbit theory of Copernicus; or it just alters the entire game by inventing new rules, just like relation between Einstein's theory of relativity and the classical physics of Newton, which is the scientific revolution defined by Kuhn.
Post-modernism: elimination of "grand narrative"
It has been mentioned previously that, the purpose of Lyotard is to study knowledge status of a developed society, on the basis of which he developed his Post-modernism theory.He believed that, in order to validate itself, scientific knowledge of the modern society is accustomed to apply the metadiscourse with an integrative function.Once this metadiscourse is discovered, all sorts of knowledge forms can be integrated into a comprehensive narrative framework, which is "the dream of modernism": I use the word "modern" to refer to the science that has to resort to a certain type of metadiscourse with a large narrative framework to validate itself, such as Spiritual Dialectics, Meaning Hermeneutics, liberation of rational subject or labor subject, or creation of fortune.
Post-modernism can thereby be defined as "doubt about meta-narrative framework".In the post-modern society, with deconstruction of "grand narrative", validity of knowledge was in crisis.The way to resolve the issue lies in the "deconstruction" per se.In a post-modern society, we had to face diversity of discourse and different language games --all decided by ourselves, not obtaining validity from the outside world, and furthermore, various narrative frameworks are not mutually reductive.
Narrative function is losing its subject, its great hero, its great adventure and target.It disappears under cover of narrative language factors ---narrative, referential, clairvoyant, and descriptive, etc, …Whatsoever, we needn't establish stable language combination, and the features of language combination we establish are not necessarily communicative.
It goes without saying that, this is doubt and clearing up about rational spirit and subject spirit of human beings since the Enlightenment Era.In the field of scientific philosophy, Lyotard refused to explain science as expression of all actual knowledge.He advocated "narrative understanding" in science, namely, describing it as a perfect "small narrative" in the particular context appropriate to it.This doesn't necessarily mean that it is impossible to acquire knowledge.On the contrary, "it refines our sensitivity towards differences, and strengthens our ability to put up with incommensurability".Feature of Post-modernism is not only diverse accretion of different words, but harmonious coexistence of distinguished cultures from a positive perspective.
Feyerabend: a scientific philosophy of Post-modernism
It has been mentioned previously that, the two validity standards of logic positivism about scientific knowledge are confronted with difficulties, which originate from self-development of scientific philosophy: (1) Falsificationism of Poppe's counter-induction and an experientially-observed thinking that negates the existence of "theory-free"; (2) Kuhn's thinking that metaphysics hypothesis and "incommensurability" were contained in the scientific normal formulas; (3) proposing of "incompleteness theorem" by Kurt Gödel.The above (1) and (2) collapse validity of external experiences in the scientific theory, while (3) suspects the consistency of internal logic in the scientific theory.With development of these thinking trends, demarcation between science and non-science has become obscure, and scientific philosophy has gradually started to lose its traditional theoretical field.By the time of Feyerabend, an extreme scientific relativism or scientific anarchism had come into existence, which claimed "opposition of methods", and "valediction of sense", and which integrated scientific philosophy and Post-modernism.
The previous validity issue about scientific knowledge and delimitation issue of science and nonscience both concern similarities and differences between religions and witchcrafts in the society of science (Occidentally) and disindustrialization.According to Lyotard, the latter two beong to "narrative knowledge".Feyerabend believes that, since theorists always control an experiment based on their own favor, myths and witchcrafts can also endure confirmation and falsification.Those decisive adoptionists can always provide experiential proof for their theories, so that myths can also be based firmly on experiences just as the highly praised scientific theory.Myth is not merely outcome of imagination.
Myth is far from the imaginative thing that contradicts with the reality, but a thinking system maintained by several direct and persuasive experiences.Furthermore, through experiences, myths are more convincing than the complicated experimental results of the world prospect of the science nowadays on which they are based.
Feyerabend pointed out, similar to myths, once science becomes an ideology, it will be degenerated into a dictatorial religion.Nowadays, ideology of scientism has turned science into a sacred church.Myth is a bygone science and science is a myth today.This conclusion is not applied in its metaphorical meaning.If we look back to origin of science, we should discover that, science was not inborn "nobly": as is well known, chemistry originates from alchemy, and astronomy from the ancient astrology.
Astronomy benefits from pythagoras and preference of Plato towards the circumference, and medical science from Herbalism, psychology, metaphysics, physiology of a witch, howdies, and guileful people and mountebanks… Science is enriched by methods and achievements of nonscience everywhere, but procedures which are often seen as the essential part of science are abandoned or replaced without any notice.
Here another example: Evans-Prichard, the British social anthropologist well known for his inspection in African culture, mentioned that, Azande witchcraft has a cosmology, which can handle easily what might be "uncertain" for a layman.If one person tries to kill or hurt another man with a witch craft, but without any effect, then Azande witchcraft could easily explain the reason underlying.Or if one asks for instructions of gods, something unknown "happens mistakenly" at this particular period; or a curse of a ceremony is not correctly fulfilled; or this person has a witchcraft stronger than that of the one who tries to hurt others.Then, Pritchard questions: if there exists such a case, then in what meaning can the Occidental science declares that they understand the world based on more truths than Azande?
Here we come back again to conclusion of Lyotard: all knowledge is narrative, while validity of narrative knowledge should resort to its own cultural heritage.Therefore, each kind of knowledge can only be "truth" within its own cultural tradition.Here contains mystery of witchcrafts and many original mysterious phenomena, which we will go into detailed analysis in the case study in Part 3.
Locality and validity of knowledge
Local knowledge was proposed by the American Interpretive Anthropology master, Clifford Geertz and the philosopher of science, Joseph Rouse.The background of Interpretive Anthropology was that profound theoretical crisis which happened in the field of Anthropology in 1960s.Due to the popularization of cultural relativism, people doubted about the objectiveness and effectiveness of the work by anthropologists: if mutual communication is not possible between different cultural normal formulas, then is it possible for the description of a cultural sign and its research method to explain the culture?Do they still have any guiding meaning upon the methodology?Whether on earth is it possible for communication between different cultures?Some scholars even doubted about the field work by anthropologists fundamentally, and proposed the interrogation of "Is cultural sign really reliable?"Furthermore, some noted cultural anthropologists at that time came to opposite conclusions for the field work done at the same place, which further made the academic reputation of a cultural sign in a shaky crisis.The Interpretative Anthropology of Geertz resorted to the concept of local knowledge and the method of "thick description" to try to save this crisis.The so-called local knowledge means that, generation and justification of knowledge is related to specific context, which here includes value concept of culture and subculture groups formed under a given historical condition, and the standpoint and field of view determined by given interest relations.All knowledge is local, and the so-called generally applied knowledge is no more than an imagination.Spread and utilization of knowledge is also not an "exemplification" of general knowledge, but a result of "laboratory transplant" and "standardization", namely, knowledge under one cultural background transplanted into another cultural background, and a re-production process of knowledge under the same condition.The so-called "thick description" just borrows the thought of the philosopher Gilbert Ryle.Taking the fact of a boy winking as an example, Ryle revealed the complicated features of social behavior and cultural symbols.Pure winking, winking to another person, winking of imitating others to mock them, practice winking in the presence of a mirror, and intentionally winking so as to let the third party mistakenly believe that there exists a special relationship between the other two that doesn't actually exist….All these behaviors with totally different implications are merely one action in the thin description like a camera ---nothing more than winking, and their respective rich social and cultural information can only be fully revealed in thick description.Even a simple winking is so complicated, let alone description of human behaviors with the characteristic of a highly complicated cultural sign.Geertz used the two concepts of local knowledge and thick description for two reasons.On the one hand, he followed the tradition of cultural relativism, admitted the local feature of knowledge, and placed emphasis on observation of a different culture "from the native's point of view", especially non-western culture.On the other hand, he used the method of description to reveal a certain objectiveness of the culture, so as to pioneer a way that would surpass extremely cultural relativism.Particularly, Geertz emphasized "to observe oneself from the angle of another one", referred to the non-western culture as a reference, and reflected upon the cultural reality of anthropologists themselves, so as to eliminate "Occident-centered view" and "Cultural Hegemonism".Geertz made great contributions to development of western cultural anthropology.
Local knowledge is obviously a narrative knowledge, whose validity originates from its cultural heritage.From the previous definition, it is concluded that, validity of local knowledge is not only reflected in the generation process, but in the justification process, while justification of knowledge is closely related to the validity issue of knowledge.If the "locality" of local knowledge is also displayed in the process of justification, then it undoubtedly equals to admitting that validity of local knowledge also has the feature of "locality".In other words, local knowledge is only valid to the particular cultural tradition that produces it.And then, we come back to the original topic again: how one member in a cultural tradition to understand local knowledge in another cultural tradition?In this way, Geertz, who was deeply influenced by western cultural relativism, could finally find no way out of the dilemma of cultural relativism.However, Geertz knew for sure that, paradox of a theory can sometimes only be resolved in practice.For these things, "the same as riding a bike, action is easier than words."Geertz practiced what he had preached, and observation of the social and cultural meaning of cockfighting in Bali in his book entitled <<Deep Game: Description of Cockfighting in Bali>> has been a typical model of Interpretive Anthropology.In this article, Geertz recorded the transition process from his being excluded to being accepted by residents in Bali just by a chance ---being dispersed by the police because of watching a cockfighting.Being recognized and accepted by this cultural system, he had the authority to retail narrative knowledge within the system, and his analysis of the social and cultural meaning of cockfighting in Bali then had its reliable authority.Here, Geertz tried not only to overcome the cultural relativism with his actual action, but led the way for our field work.
Case analysis
The basic conclusions we have got are as follows: all knowledge belongs to local knowledge, and also narrative knowledge.Validity of knowledge depends on the particular cultural tradition which produces knowledge, and, thereby, it is only valid within its own particular cultural tradition.The effective path to overcoming cultural relativism lies in participation in that cultural tradition through practice, so as to give certain "thick description" to that cultural tradition.In the following, we are going to briefly analyze three typical cases: "god-man communication", "miracle" and "alternative medicine".
"God-man communication"
At first, limbs of the conjurator begin to wobble, with an accelerating frequency.In a flash, he sways from head to foot, and his head whirls continually more and more fast for approximately a couple of minutes.Then, his long hair begins to flutter in the air, and his neck swings with an inconceivable angle… When the sound of the drum attains its summit, he begins to waver… his mouth slobbering, his head shaking.He leaps staggeringly with his tiptoe from one side to another in the house, and he makes a weird sound.By this time, he begins to cut his tongue with a sharp sword… He pierces a spike into his cheeks, beats himself with an acanthosphere, or climbs a knife ladder.This is a record of telepathizing occasion in a temple of Singapore.The weird body language of the conjurator is a cultural sign for communication with the Deity, while his later super performance is a proof that he has succeeded in communication with the deity.For instance, climbing of a knife ladder symbolizes elevation of the soul.However, here the supernormal capacity of the conjurator or the necromancer tends to surprise a large majority of anthropologists: How they manage to do this?In addition to magic or deceit, there is another cultural explanation on this phenomenon.In the analysis of the Azande witchcraft, we have mentioned this point, and believe that here conceals mystery of many mysterious phenomena.willis.W. Hamann has mentioned, people with different cultural background usually experience different realities, which is just like the case in which a deeply mesmerized subject has a totally different realistic world with the ordinary people.This is a sort of "cultural hypnosis".A person who lives in a cultural atmosphere of worship of gods and who deeply believes that he can communicate with gods, can't experience the pain of an ordinary person if he is in a state of extremely craziness, which is quite possible.Another instance is the "fire-walking training" popular in American: people living under a certain cultural background know that, sometimes walking on a burning charcoal fire barefooted might not burn one's skin.Thousands of well educated people, with only training of several hours, can manage to do this, which merely needs to train them to accept the psychological hint that fire wouldn't burn their skin.
"Miracle"
A man with a first name of Jin, is an old follower who believes in God for 25 years, and he now holds the position of Presbyter in a nearby village.According to him, he suffered from nephritis when he was young, and the disease tortured him almost to death.But in 1982, there was someone in the village who passed to him "Gospel" which was said to be able to cure his disease.Then he believed in God just with an attitude of trying.However, what was mysterious, later after he believed in God, his symptom began to mitigate, and afterwards, he gradually recovered.He said, "within these 25 years, I have never suffered again, and this is owing to the blessing endowed by God.Thank God."At present, he often witnesses with his personal experiences and passes "Gospel" to others.
The above case is chosen from a field work material of a Chinese scholar about the relief situation of Christian in Chinese rural areas.The Presbyter in the material is a typical example among Chinese adoptionists.I still prefer to regarding this as another typical case of "cultural hypnosis".Mitigation or recovery of his disease might be merely a coincidence, but as for the presbyter who has already been convinced of Christian, it is natural that he owes this "miracle" to bestow of God.Likewise, There appeared a wave of persecuting witches in Europe in the 16 th century.Robin.Briggs recorded that, an elderly woman named Bobby in a small town in England was unfortunately charged as a witch.After excruciation, she began to make a confession, admitting she had conjured many of her enemies.With development of the case, Bobby began to deeply believe that she was a witch who had a magic, and she had murdered numerous men, women and children with drug powder.She also recalled her confederates whom she saw in the ghost-worship ceremony (an occasion for get-together of witches) at midnight, then she also pointed out three other elderly women, who got arrested and trialed soon.Here, whether persecutors or casualties, they were all mesmerized by the strong witchcraft culture at that time, and their imagination and the reality were confused as a whole.Maybe we think that it's self-contradictive that the primitives believed they were both humans and crows, which is unbelievable.However, just as Anthony Giddes pointed out: What is the logic of thought to believe that bread fragment in the Eucharist is the body of Jesus, and the liquor is his blood, or to believe that time swells with acceleration of speed in the Theory of Relativity?What people believe or not actually depends on what they "would like" to believe or not.As a matter of fact, difference between knowledge system and belief system is not so much as we imagine.
"Alternative medicine"
The following is an excerpt of recollection by the American doctor Lewis Mehl-Madrona on his own experience, which has changed his view about the overall traditional medicine.Wesley in this article is a Native American.Doctors in the Medical Center of Duluth University diagnosed that he had suffered from lymphoma, and disclosed that he could live for at most six months.Wesley turned to an Ojibway witchwoman named Carolyn to hold a ceremony for him.
Carolyn "mesmerized" Wesley with a psychological cure, which was the antecedent procedure in a ceremony.In the following four nights, she took us into a shanty: a bungalow made with sallows, which was covered with carpets, coverlids and paulins outside.Each night, we stayed here and prayed for approximately five hours for Wesley.In the daytime of the four days, Carolyn stayed alone with Wesley ---they two praying, telling stories, and burning herbs.She also met his family.By the fifth day, she walked out with Wesley, and told him that he should stoop for worship several times each day until when a glede appeared, which symbolized that he would recover.
In that afternoon, Wesley discovered a glede, and was in a trance.Carolyn conjured on the spot, and she scattered tobacco and corn flour onto the ground to purify it.She burned sage to repel the evil, she intonated the hymn, she asked for instructions from the Holy Spirit, and she also smoked a Holy pipe.Finally, Carolyn told Wesley that, the White Buffalo goddess ordered to her that he had already recovered.
When Wesley returned to the City, his doctor unexpectedly couldn't find any trace of his lymphoma ---but unable to give any explanation for disappearance of his lymphoma.They just recorded this case in his medical record as a rare and unexplainable spontaneous alleviation of a symptom.… The observation time limit of the doctor having passed, his doctors finally agreed with words of the Goddess.They declared that, Wesley had already totally recovered.
The author of this article Lewis Mehl-Madrona is a student trained in American Medical College, who has a doctor license recognized by the country.His own experience again pushed the demarcation issue of science and non-science to us in a sharp way which has been eliminated by Feyerabend.As one form of alternative medicine, does the traditional medicine have any effect?Is it scientific?At what meaning do we define science?Within the limit of western cultural tradition?As local knowledge, is it that it only has effect within its own internal cultural tradition?… In the frame of reference in western cultural tradition, there is another case of "alternative medicine", which is obviously traditional Chinese medical science.However, its efficiency in curing has been obvious to all.This kind of issue is not of little importance, but desiderates a definite answer, because it is concerned with the subsidization of the government and the society to research of "alternative medicine".Taking America as an example: in addition to formal National Institutes of Health (NIH), it also has established "Organization of Alternative Medicine" (OAM) and "National Center for Complementary and Alternative Medicine" (NCCAM).In 2001, The US Congress appropriated a fund of $ 5 million to OAM; in 2003, it again appropriated a fund of $ 113.4 million to NCCAM.After the foundation of PRC, support of the Chinese government enabled this ancient medical technique to survive in this modern society and play an important role.Validity of local knowledge sometimes needs to resort to support of the national authority, which is another exemplification of the relation between modern social knowledge and authority pointed out by Joseph Rouse.
Epilogue: "moderate relativism"
Culture is an intangible and lasting spiritual power.It exists in mind of each cultural holder, and is reflected in the social organization and living method of a nation.There exists a "cultural map" in the mind of each cultural holder, while the task of anthoropologists is to study this map.How to deal with "cultural others" reflects the social civilization development of mankind fundamentally.The self-important cultural absolutism is a hotbed for chauvinism and cultural hesemonism to attempt to measure cultures of all nations with an unchangeable standard and to regard all other cultural forms as a reflection of primitiveness, barbarousness and backwardness.This cultural absolutism might bring disastrous damage to civilization of human beings.However, due to the fact that the extreme cultural relativism emphasizes cultural particularity of each nation in the wrong perspective, it also objectively negates possibility of mutual communication between cultures.If we have to adopt a theoretical standpoint on this issue, then I believe that, the standpoint of a "moderate relativism" is feasible.This "moderate relativism" on the one hand opposes cultural absolutism, extremely respects diversity and particularity of ethnic culture of each nation and fully respects particular values and living ways of each nation; on the other hand, it differs from the extreme relativism, but emphasizes possibility and necessity of mutual communication between all national cultures.In mutual communication, they achieve mutual understanding and recognition.This sort of cultural recognition neither changes itself totally into a member of another cultural community, nor regards culture of the other nation as a "sample" or "fossil" for appreciation with a predominant attitude.Since "action is easier than words" in this kind of job, and theory goes in advance of practice, then we should start from the most ordinary and tough field work with a modest and equal attitude, listen to their heart, touch their emotional pulse, experience and blend into their culture.During this process, we participate in and share their living, and feel that they are just the same as us, who are all members of this global cultural family.This is a tough but valuable task: Culture of a nation is collection of a text and itself, while anthropologists try to translate them passing the shoulder of those people who they were supposed to belong to.A difficulty in such a situation is tremendous, …However, on whatever level, and however complicated, the guidance principle is the same: society, exactly like life, contains its own explanation.One can only learn how to get approached to them. | 7,198.8 | 2009-03-23T00:00:00.000 | [
"Philosophy"
] |
Three-Axis Vector Magnetometer with a Three-Dimensional Flux Concentrator
This research proposes a magnetic field sensor with spatial orientation ability. Through the assistance of a magnetic flux concentrator, out-of-plane magnetic flux can be concentrated and guided into the planar magnetic cores of a fluxgate sensor. A printed circuit board is used to construct the basic planar structure, on which the proposed three-dimensional magnetic flux concentrator and magnetic cores are assembled. This reduces the alignment error of the coils and improves the reliability of the sensor. Three-axis sensing is achieved by using the second harmonic signals from selected sensing coil pairs. The magnetometer exhibits a linear range to 130 μT. At an excitation frequency of 50 kHz, the measured sensitivities are 257.1, 468.8, and 258.8 V/T for the X-, Y-, and Z-axis sensing modes, respectively. This sensor utilizes only one sensing mechanism for the vector field, making it suitable for IoT applications, especially for assessing mechanical posture or position.
Introduction
With the rapid development of technology, intelligent IOT sensors that integrate the production process with virtual and real sceneries are widely studied.Smart sensing generally includes parameter measurement, data transmission, and data calculation.Take a magnetometer as an example.Common magnetic field sensing methods include the Hall effect, magnetoresistive method, and fluxgate method [1,2].Hall effect sensors are often used to measure the earth's magnetic field and motor position [3,4].Magnetoresistive and fluxgate sensors are often used in satellite navigation and medical treatment [5][6][7].The factors involved in developing a vectorial magnetometer are miniaturization, low power consumption, and cost considerations.To improve the performance of the sensor, various material technologies and manufacturing methods have been developed, including printed circuit board (PCB) and microelectronics technology [8][9][10].
The purpose of this study is to develop a tri-axis magnetometer that can measure a vector magnetic field in space.Several studies have been presented, using a combination of either same or different types of sensors [11,12].There is also an element that uses a single permalloy core and two excitation coils to achieve three-axis sensing [13].To measure the magnetic field perpendicular to the sensor, Silva [14] deposited a layer of magnetoresistive material on a micromachined v-groove surface for z-axis sensing.Hsieh [15] applied an inverted V-shaped flux conductor on a silicon substrate to assist the out-of-plane sensing.These sensors can be further integrated with in-plane sensors.
An integrated tri-axial sensor can either reduce the complexity of the component placement or working circuit.To increase magnetic flux in a specific direction, flux guide concentrators are introduced into magnetic sensors [15][16][17][18].Made of soft ferromagnetic materials with high permeability, these concentrators can effectively conduct the out-ofplane magnetic flux lines to the element plane for z-axis direction sensing.Zhao [19] designed a slope magnetic flux guide on a silicon substrate for out-of-plane magnetic sensing with a planar GMR sensor.Lu [20] proposed a tri-axial sensor with a flux guide tube for collecting an orthogonal magnetic flux.Considering manufacturing compatibility, this paper uses a three-dimensional flux concentrator to conduct a z-axis magnetic field to a planar fluxgate.It eliminates the requirement for multiple driving circuits commonly used in multi-sensor configuration, thereby reducing power consumption.The material of the flux concentrator and magnetic core is an amorphous Co-based alloy, VITROVAC 6025 Z. Measurement results show that the sensitivity in the z-axis is increased by applying a magnetic concentrator, and three-axis sensing is achieved with the proposed combination of coils and magnetic cores.
Design
The sensor's performance mainly depends on the magnetization state of the magnetic core and the conduction efficiency of the flux concentrator.To increase the induced electromotive force (EMF) in the coil, a nearby magnetic core can be used to conduct a suitable magnetic field.Typically, the magnetic flux density of a coil can be expressed by B coil = µH ex , where µ is the permeability and H ex is the magnetic field strength produced by an excitation coil.The excitation magnetic field generated by the excitation coil will affect the magnetization in the core.Moreover, there is angle difference between the direction of the magnetic flux passing through the induction coil and the area vector of the coil.Therefore, the aforementioned equation can be rewritten as where η is the effective ratio.
In this research, we propose a magnetometer with three-axis magnetic field sensing ability.The basic planar structure is composed of an excitation coil, four induction coils, a set of magnetic cores, and a three-dimensional magnetic flux concentrator, as shown in Figure 1.
Sensors 2024, 24, x FOR PEER REVIEW 2 of 16 sensing with a planar GMR sensor.Lu [20] proposed a tri-axial sensor with a flux guide tube for collecting an orthogonal magnetic flux.Considering manufacturing compatibility, this paper uses a three-dimensional flux concentrator to conduct a z-axis magnetic field to a planar fluxgate.It eliminates the requirement for multiple driving circuits commonly used in multi-sensor configuration, thereby reducing power consumption.The material of the flux concentrator and magnetic core is an amorphous Co-based alloy, VITROVAC 6025 Z. Measurement results show that the sensitivity in the z-axis is increased by applying a magnetic concentrator, and three-axis sensing is achieved with the proposed combination of coils and magnetic cores.
Design
The sensor's performance mainly depends on the magnetization state of the magnetic core and the conduction efficiency of the flux concentrator.To increase the induced electromotive force (EMF) in the coil, a nearby magnetic core can be used to conduct a suitable magnetic field.Typically, the magnetic flux density of a coil can be expressed by = , where is the permeability and is the magnetic field strength produced by an excitation coil.The excitation magnetic field generated by the excitation coil will affect the magnetization in the core.Moreover, there is angle difference between the direction of the magnetic flux passing through the induction coil and the area vector of the coil.Therefore, the aforementioned equation can be rewritten as where is the effective ratio.
In this research, we propose a magnetometer with three-axis magnetic field sensing ability.The basic planar structure is composed of an excitation coil, four induction coils, a set of magnetic cores, and a three-dimensional magnetic flux concentrator, as shown in Figure 1. Figure 2 illustrates how the magnetic field within a magnetic core aligns under the influence of various external magnetic fields.The red arrow in the diagram represents the magnetic field that is generated by the excitation coil, while the yellow arrow indicates the external magnetic field.If a magnetic field is applied along the x-axis of the core, the magnetic field lines within the core will become parallel to the x-axis.The external magnetic field and the magnetic field produced by the induction coil on the right side of the core are Sensors 2024, 24, 1659 3 of 16 in opposite directions.However, on the left side, the core's magnetic field aligns with the external magnetic field.Therefore, coil pair 1-3 can be chosen as an in-plane sensing coil pair.When the external magnetic field is in the z direction, it is guided downward along the magnetic flux concentrator.The external magnetic field at coil 1 is in the same direction as the excitation field, while the external magnetic field at coil 2 is in the opposite direction to the excitation field.Therefore, coil pair 1-2 can be selected as the sensing coil pair.
Sensors 2024, 24, x FOR PEER REVIEW 3 of 16 the external magnetic field.If a magnetic field is applied along the x-axis of the core, the magnetic field lines within the core will become parallel to the x-axis.The external magnetic field and the magnetic field produced by the induction coil on the right side of the core are in opposite directions.However, on the left side, the core's magnetic field aligns with the external magnetic field.Therefore, coil pair 1-3 can be chosen as an in-plane sensing coil pair.When the external magnetic field is in the z direction, it is guided downward along the magnetic flux concentrator.The external magnetic field at coil 1 is in the same direction as the excitation field, while the external magnetic field at coil 2 is in the opposite direction to the excitation field.Therefore, coil pair 1-2 can be selected as the sensing coil pair.When the external magnetic field is in the x direction, both magnetic fields at coil 1 align, resulting in an increase in magnetic flux.Conversely, the magnetic fluxes at coil 3 decrease.By using Faraday's law, the induced emf can be obtained from the time derivative of the magnetic flux.As shown in Figure 3, the induced EMF curves of coils 1 and 3 shift in opposite directions.Subtracting the two curves results in the generation of a second harmonic waveform.
align, resulting in an increase in magnetic flux.Conversely, the magnetic fluxes at coil 3 decrease.By using Faraday's law, the induced emf can be obtained from the time derivative of the magnetic flux.As shown in Figure 3, the induced EMF curves of coils 1 and 3 shift in opposite directions.Subtracting the two curves results in the generation of a second harmonic waveform.When the external magnetic field is in the z direction, the induced EMF curves of coils 1 and 2 change in opposite directions.Meanwhile, coils 1 and 3 undergo identical changes in induced EMF as both fields align in the same direction, resulting in a zero output.This prevents interference from the in-plane magnetic field.
The induction coils are combined into pairs for x-, y-, and z-direction sensing, as listed in Table 1.For example, when subjected to an x-direction magnetic field, coils 1 and 3 are selected as a sensing coil pair, and a second harmonic waveform can be generated by subtracting the waveform of coil 3 from the waveform of coil 1.Similarly, when subjected to a z-direction magnetic field, coils 1 and 2 are selected as a sensing coil pair.Therefore, selecting different coil pairs for sensing enables three-axis magnetic field sensing.When the external magnetic field is in the z direction, the induced EMF curves of coils 1 and 2 change in opposite directions.Meanwhile, coils 1 and 3 undergo identical changes in induced EMF as both fields align in the same direction, resulting in a zero output.This prevents interference from the in-plane magnetic field.
The induction coils are combined into pairs for x-, y-, and z-direction sensing, as listed in Table 1.For example, when subjected to an x-direction magnetic field, coils 1 and 3 are selected as a sensing coil pair, and a second harmonic waveform can be generated by subtracting the waveform of coil 3 from the waveform of coil 1.Similarly, when subjected to a z-direction magnetic field, coils 1 and 2 are selected as a sensing coil pair.Therefore, selecting different coil pairs for sensing enables three-axis magnetic field sensing.
Mode
Coil Pairs
Simulation
The magnetization of cores will be affected by the external magnetic field and the excitation magnetic field.As a result, the induced electromotive force of the corresponding induction coils will change.This research uses Ansys Maxwell software 2020 for magnetic field simulation.According to the specifications of the magnetic core, the saturation Sensors 2024, 24, 1659 5 of 16 magnetic flux density is set to 0.58 T. Figure 4 illustrates the simulation of the magnetic field generated by the excitation coil.Since the excitation coil is sandwiched between the horizonal and vertical cores, the magnetization directions of the cores are opposite.In this way, the excitation magnetic field required for three-axis sensing is created.Even if a magnetic conductor is added, the proposed excitation coils can still effectively build opposite magnetization at ends of the cores.
Simulation
The magnetization of cores will be affected by the external magnetic field and the excitation magnetic field.As a result, the induced electromotive force of the corresponding induction coils will change.This research uses Ansys Maxwell software 2020 for magnetic field simulation.According to the specifications of the magnetic core, the saturation magnetic flux density is set to 0.58 T. Figure 4 illustrates the simulation of the magnetic field generated by the excitation coil.Since the excitation coil is sandwiched between the horizonal and vertical cores, the magnetization directions of the cores are opposite.In this way, the excitation magnetic field required for three-axis sensing is created.Even if a magnetic conductor is added, the proposed excitation coils can still effectively build opposite magnetization at ends of the cores.Next, simulations are performed to show whether the magnetic conductor can conduct the out-of-plane magnetic flux to the element plane.Theoretically, the conduction efficiency of the flux concentrator will determine the z-axis sensing capability of the proposed sensor.As shown in Figure 5, the z-axis magnetic field is guided to the ends of the cores, and the magnetization directions of both cores are the same.
We can compare the magnetic field distribution changes with and without a flux concentrator, as illustrated in Figure 6.Without a flux concentrator, most of the z-axis magnetic field directly penetrates the core, and as a result, the magnetization of the magnetic core remains unaffected.However, when the proposed magnetic concentrator is added, it alters the magnetic field distribution.The flux concentrator can effectively be used to collect out-of-plane magnetic flux and redirect it towards planar cores located at the ends of the flux concentrator.This results in a symmetrical magnetization distribution along the center of the core.Next, simulations are performed to show whether the magnetic conductor can conduct the out-of-plane magnetic flux to the element plane.Theoretically, the conduction efficiency of the flux concentrator will determine the z-axis sensing capability of the proposed sensor.As shown in Figure 5, the z-axis magnetic field is guided to the ends of the cores, and the magnetization directions of both cores are the same.We can compare the magnetic field distribution changes with and without a flux concentrator, as illustrated in Figure 6.Without a flux concentrator, most of the z-axis magnetic field directly penetrates the core, and as a result, the magnetization of the magnetic core remains unaffected.However, when the proposed magnetic concentrator is added, it alters the magnetic field distribution.The flux concentrator can effectively be used to collect out-of-plane magnetic flux and redirect it towards planar cores located at the ends of the flux concentrator.This results in a symmetrical magnetization distribution along the center of the core.
Sensor Manufacturing
The fabricated magnetometer is shown in Figure 7, including the coils, cores, and flux concentrator.The excitation and induction coils are fabricated on a double-sided printed circuit board (PCB) to increase the stability and quality of the element.Subsequently, 3D printing is used to build the basic support for the flux concentrator.The parameters of the sensor are listed in Table 2. Considering the component area and power loss, 25-turn excitation coils and 15-turn induction coils are employed as the basic layout blocks.In the design of the flux concentrator, the angle between the hypotenuse and the element plane is 60 degrees, and the V-shaped structure is designed to be wider at the top and narrower at the bottom to improve the transmission effect.The magnetic flux concentrator Sensors 2024, 24, 1659 7 of 16 and magnetic cores are further assembled on the PCB, which simplifies the process steps.Moreover, a lock-in amplifier circuit is also built on a PCB.
circuit board (PCB) to increase the stability and quality of the element.Subsequently, 3D printing is used to build the basic support for the flux concentrator.The parameters of the sensor are listed in Table 2. Considering the component area and power loss, 25-turn excitation coils and 15-turn induction coils are employed as the basic layout blocks.In the design of the flux concentrator, the angle between the hypotenuse and the element plane is 60 degrees, and the V-shaped structure is designed to be wider at the top and narrower at the bottom to improve the transmission effect.The magnetic flux concentrator and magnetic cores are further assembled on the PCB, which simplifies the process steps.Moreover, a lock-in amplifier circuit is also built on a PCB.
Testing Setup
In Figure 8, the three-axis vector magnetometer is shown placed inside a Helmholtz coil for calibration.The magnetometer's sensing axis was aligned with the magnetic field produced by the Helmholtz coil.An excitation signal was given to the magnetometer by a signal generator and then fine-tuned to a specific current through the use of a power amplifier.The resulting waveforms from each coil when exposed to external magnetic fields were recorded.The subtraction of paired coil signals produced a second-harmonic frequency signal.
Testing Setup
In Figure 8, the three-axis vector magnetometer is shown placed inside a Helmholtz coil for calibration.The magnetometer's sensing axis was aligned with the magnetic field produced by the Helmholtz coil.An excitation signal was given to the magnetometer by a signal generator and then fine-tuned to a specific current through the use of a power amplifier.The resulting waveforms from each coil when exposed to external magnetic fields were recorded.The subtraction of paired coil signals produced a second-harmonic frequency signal.
A lock-in amplifier was used to extract the required DC signal.To address the issue of phase difference between the input and reference signals, the reference signal was split into two signals with a 90-degree phase offset.The modulator multiplied the signals, and the resulting signal was then passed through a low-pass filter to generate a DC voltage output.Throughout the measurement, the Helmholtz coil was placed in a magnetic shield to eliminate external stray magnetic fields.A lock-in amplifier was used to extract the required DC signal.To address the issue of phase difference between the input and reference signals, the reference signal was split into two signals with a 90-degree phase offset.The modulator multiplied the signals, and the resulting signal was then passed through a low-pass filter to generate a DC voltage output.Throughout the measurement, the Helmholtz coil was placed in a magnetic shield to eliminate external stray magnetic fields.
Measurement Results
For the magnetic field sensing, the parameters of the magnetic core and the exciting current obviously affect the sensitivity and linear range.Based on the Ref. [12], a magnetic core of 5 mm was used for the following measurements.Due to the limitation of the IC used in the lock-in amplifier circuit, 50 kHz was set as the upper limit of the frequency range.The exciting current was set to 700 mA to prevent overheating the element.
To ensure the effective magnetization of the magnetic core by the excitation magnetic field, initial testing was conducted without an assisted flux concentrator.It is observed from Figure 9 that the sensitivity of X-axis mode is 241.1 V/T and the sensitivity of Y-axis mode is 409.8V/T.
Measurement Results
For the magnetic field sensing, the parameters of the magnetic core and the exciting current obviously affect the sensitivity and linear range.Based on the Ref. [12], a magnetic core of 5 mm was used for the following measurements.Due to the limitation of the IC used in the lock-in amplifier circuit, 50 kHz was set as the upper limit of the frequency range.The exciting current was set to 700 mA to prevent overheating the element.
To ensure the effective magnetization of the magnetic core by the excitation magnetic field, initial testing was conducted without an assisted flux concentrator.It is observed from Figure 9 that the sensitivity of X-axis mode is 241.1 V/T and the sensitivity of Y-axis mode is 409.8V/T.
Three-Axial Magnetic Field Sensing
For precise spatial magnetic field measurements, the sensor needs to be able to measure on three axes.In Figures 10-12, the second harmonic waveforms of each in pair are depicted under the influence of three axial external magnetic fields.It is evident that the second-harmonic signal becomes more pronounced with the increasing strength of the external magnetic field.Furthermore, even in the presence of an external magnetic field, the second harmonic signal is not noticeable for the non-corresponding sensing axes.
Three-Axial Magnetic Field Sensing
For precise spatial magnetic field measurements, the sensor needs to be able to measure on three axes.In Figures 10-12, the second harmonic waveforms of each in pair are depicted under the influence of three axial external magnetic fields.It is evident that the secondharmonic signal becomes more pronounced with the increasing strength of the external magnetic field.Furthermore, even in the presence of an external magnetic field, the second harmonic signal is not noticeable for the non-corresponding sensing axes.
Three-Axial Magnetic Field Sensing
For precise spatial magnetic field measurements, the sensor needs to be able to measure on three axes.In Figures 10-12, the second harmonic waveforms of each in pair are depicted under the influence of three axial external magnetic fields.It is evident that the second-harmonic signal becomes more pronounced with the increasing strength of the external magnetic field.Furthermore, even in the presence of an external magnetic field, the second harmonic signal is not noticeable for the non-corresponding sensing axes.
Three-Axial Magnetic Field Sensing
For precise spatial magnetic field measurements, the sensor needs to be able to measure on three axes.In Figures 10-12, the second harmonic waveforms of each in pair are depicted under the influence of three axial external magnetic fields.It is evident that the second-harmonic signal becomes more pronounced with the increasing strength of the external magnetic field.Furthermore, even in the presence of an external magnetic field, the second harmonic signal is not noticeable for the non-corresponding sensing axes.The measured waveforms were processed by a lock-in amplifier circuit.The converted dc voltages with respect to axial magnetic fields ranging from 0 to 200 µT are plotted in Figure 13.For the output voltage, the region where the nonlinearity is less than 10% is defined as the linear range, as listed in Table 3.The average voltage to magnetic field conversion ratio in this range is the sensitivity.The measured sensitivities are 257.1 V/T The measured waveforms were processed by a lock-in amplifier circuit.The converted dc voltages with respect to axial magnetic fields ranging from 0 to 200 µT are plotted in Figure 13.For the output voltage, the region where the nonlinearity is less than 10% is defined as the linear range, as listed in Table 3.The average voltage to magnetic field conversion ratio in this range is the sensitivity.The measured sensitivities are 257.1 V/T for the X-axis sensing mode, 468.8 V/T for the Y-axis sensing mode, and 258.8 V/T for the Z-axis sensing mode.It is found that the sensitivity of the y-axis is higher than that of the x-axis.The main reason is that the y-axis core is closer to the induction coils, while the x-axis core is about 1.6 mm away from the induction coils.Moreover, it can be observed that beyond the linear range, the sensitivity gradually decreases with the increasing magnetic field.The measured waveforms were processed by a lock-in amplifier circuit.The converted dc voltages with respect to axial magnetic fields ranging from 0 to 200 µT are plotted in Figure 13.For the output voltage, the region where the nonlinearity is less than 10% is defined as the linear range, as listed in Table 3.The average voltage to magnetic field conversion ratio in this range is the sensitivity.The measured sensitivities are 257.1 V/T for the X-axis sensing mode, 468.8 V/T for the Y-axis sensing mode, and 258.8 V/T for the Z-axis sensing mode.It is found that the sensitivity of the y-axis is higher than that of the x-axis.The main reason is that the y-axis core is closer to the induction coils, while the xaxis core is about 1.6 mm away from the induction coils.Moreover, it can be observed that beyond the linear range, the sensitivity gradually decreases with the increasing magnetic field.For a three-axis sensor, a magnetic field applied to a single axis will be sensed by other axes, which is called coupling.To analyze the coupling between the axes, a single- For a three-axis sensor, a magnetic field applied to a single axis will be sensed by other axes, which is called coupling.To analyze the coupling between the axes, a single-axis magnetic field was applied and the output of three sensing modes were recorded.In Z mode, the x-axis and y-axis magnetic field couplings are 1.22% and 1.79%, respectively.
Directional Magnetic Field Sensing
For three-axis sensors, vector magnetic field sensing capabilities are required.During the measurement, the sensor was rotated along the x, y, and z axes of the sensor, as shown in Figure 14.The voltage measured at an external magnetic field of 100 µT is plotted against the rotation angle, as shown in Figure 15.The voltage waveforms in the magnetic fields of the y-z plane, x-z plane, and x-y plane exhibit a complete sine wave trend, differing by 90 degrees.The measurement results are consistent with the projection of the magnetic field on the two axes.This confirms the sensor's ability to sense direction.Angular measurement errors may result from misalignment of the core or concentrator and incorrect sensor positioning within the Helmholtz coil.In forthcoming studies, accuracy can be enhanced by incorporating mechanical fixtures during sensor assembly or calibration processes.
fering by 90 degrees.The measurement results are consistent with the projection of the magnetic field on the two axes.This confirms the sensor's ability to sense direction.Angular measurement errors may result from misalignment of the core or concentrator and incorrect sensor positioning within the Helmholtz coil.In forthcoming studies, accuracy can be enhanced by incorporating mechanical fixtures during sensor assembly or calibration processes.The normalized root mean square error (NRMSE) is used to calculate the similarity between the predicted values ( ) and the corresponding measured values ( ), as indicated in Equation ( 2).The decrease in NRMSE reflects the increasing similarity between the two curves.In y-z plane measurements, the NRMSE is 0.108 for Y mode and 0.154 for Z mode.In x-z plane measurements, the NRMSE is 0.075 for X mode and 0.065 for Z mode.In x-y plane measurements, the NRMSE is 0.115 for X mode and 0.039 for Y mode.Measurement errors can be caused by undesired coupling, core and flux concentrator misalignment, and sensor placement errors.Deviations are acceptable.The normalized root mean square error (NRMSE) is used to calculate the similarity between the predicted values ( ŷi ) and the corresponding measured values (y i ), as indicated in Equation ( 2).The decrease in NRMSE reflects the increasing similarity between the two curves.In y-z plane measurements, the NRMSE is 0.108 for Y mode and 0.154 for Z mode.In x-z plane measurements, the NRMSE is 0.075 for X mode and 0.065 for Z mode.In x-y plane measurements, the NRMSE is 0.115 for X mode and 0.039 for Y mode.Measurement errors can be caused by undesired coupling, core and flux concentrator misalignment, and sensor placement errors.Deviations are acceptable.
Noise Analysis
When subjected to an excitation magnetic field in the absence of external magnetic field, the power spectral density of the sensor's output voltage was analyzed.Figure 16 shows the square root of the noise power spectral density (PSD) in the frequency range of 0.5-10 Hz.
Demonstration
The sensor can be utilized to measure angular variations, exemplified in the assessment of walking posture (see Figure 17).During the demonstration, we roughly aligned the sensor's y-z plane with the Earth's magnetic field.In the initial position, or state 1, of the sensor, the z-axis was aligned perpendicular to the thigh, and the y-axis was aligned parallel to the thigh.In Figure 18, the output signals corresponding to states 1 through 5 during walking are presented.It was observed that X-mode sensing produced the least signal.In Z-mode sensing, the signal was highest in state 3 due to the better alignment between the sensor's z-axis and the Earth's magnetic field.Similarly, in Y-mode sensing, the highest output is observed in state 4 for the same reason.
Demonstration
The sensor can be utilized to measure angular variations, exemplified in the assessment of walking posture (see Figure 17).During the demonstration, we roughly aligned the sensor's y-z plane with the Earth's magnetic field.In the initial position, or state 1, of the sensor, the z-axis was aligned perpendicular to the thigh, and the y-axis was aligned parallel to the thigh.In Figure 18, the output signals corresponding to states 1 through 5 during walking are presented.It was observed that X-mode sensing produced the least signal.In Z-mode sensing, the signal was highest in state 3 due to the better alignment between the sensor's z-axis and the Earth's magnetic field.Similarly, in Y-mode sensing, the highest output is observed in state 4 for the same reason.
the sensor, the z-axis was aligned perpendicular to the thigh, and the y-axis was aligned parallel to the thigh.In Figure 18, the output signals corresponding to states 1 through 5 during walking are presented.It was observed that X-mode sensing produced the least signal.In Z-mode sensing, the signal was highest in state 3 due to the better alignment between the sensor's z-axis and the Earth's magnetic field.Similarly, in Y-mode sensing, the highest output is observed in state 4 for the same reason.ment of walking posture (see Figure 17).During the demonstration, we roughly aligned the sensor's y-z plane with the Earth's magnetic field.In the initial position, or state 1, of the sensor, the z-axis was aligned perpendicular to the thigh, and the y-axis was aligned parallel to the thigh.In Figure 18, the output signals corresponding to states 1 through 5 during walking are presented.It was observed that X-mode sensing produced the least signal.In Z-mode sensing, the signal was highest in state 3 due to the better alignment between the sensor's z-axis and the Earth's magnetic field.Similarly, in Y-mode sensing, the highest output is observed in state 4 for the same reason.
Conclusions
A magnetometer consisting of a planar fluxgate and a flux concentrator is proposed to achieve three-axis sensing.The three-axis sensor only adopts one sensing mechanism, which simplifies the circuit design.With the proposed flux concentrator, the out-of-plane magnetic flux can be effectively collected and transmitted to the planar core as an aid for z-axis sensing.Both in-plane and out-of-plane magnetic fields can be sensed by using dif-
Figure 1 .
Figure 1.Schematic of the proposed three-axis magnetometer.
Figure 2
Figure 2 illustrates how the magnetic field within a magnetic core aligns under the influence of various external magnetic fields.The red arrow in the diagram represents the magnetic field that is generated by the excitation coil, while the yellow arrow indicates
Figure 1 .
Figure 1.Schematic of the proposed three-axis magnetometer.
Figure 2 .
Figure 2. Schematic diagram of the magnetic field directions in the core under different external magnetic fields: (upper) Bx; (lower) Bz.
Figure 2 .
Figure 2. Schematic diagram of the magnetic field directions in the core under different external magnetic fields: (upper) Bx; (lower) Bz.
Figure 3 .
Figure 3.The magnetic flux and induced EMF of the coils under the influence of Bx (upper figure) and Bz (lower figure).
Figure 3 .
Figure 3.The magnetic flux and induced EMF of the coils under the influence of Bx (upper figure) and Bz (lower figure).
Figure 4 .
Figure 4.The direction of the excitation magnetic field in the cores.
Figure 4 .
Figure 4.The direction of the excitation magnetic field in the cores.
16 Figure 5 .
Figure 5.The direction of the magnetic field in the cores in response to the z-axis magnetic field.Figure 5.The direction of the magnetic field in the cores in response to the z-axis magnetic field.
Figure 5 .
Figure 5.The direction of the magnetic field in the cores in response to the z-axis magnetic field.Figure 5.The direction of the magnetic field in the cores in response to the z-axis magnetic field.
Figure 5 .
Figure 5.The direction of the magnetic field in the cores in response to the z-axis magnetic field.
Figure 6 .
Figure 6.The distribution of the out-of-plane magnetic field applied to a magnetometer without a flux concentrator (upper figure) and with a flux concentrator (lower figure).
Figure 6 .
Figure 6.The distribution of the out-of-plane magnetic field applied to a magnetometer without a flux concentrator (upper figure) and with a flux concentrator (lower figure).
Figure 7 .Table 2 .
Figure 7. Photo of the PCB-based magnetometer.Table 2. Parameters for the sensor.Parameters Values Excitation coil turns 25 × 1 Induction coil turns 15 × 4 Size 29.8 × 31.9 mm 2 Hight of the flux concentrator 34.9 mm Angle of the flux concentrator 60
Figure 7 .
Figure 7. Photo of the PCB-based magnetometer.
Figure 9 .
Figure 9. Measured voltages induced by two axial magnetic fields.
Figure 9 .
Figure 9. Measured voltages induced by two axial magnetic fields.
Figure 9 .
Figure 9. Measured voltages induced by two axial magnetic fields.
Figure 10 .
Figure 10.Measured second harmonic waveforms for the X-axis mode.
Figure 11 .
Figure 11.Measured second harmonic waveforms for the Y-axis mode.
Figure 10 .
Figure 10.Measured second harmonic waveforms for the X-axis mode.
Figure 9 .
Figure 9. Measured voltages induced by two axial magnetic fields.
Figure 10 .
Figure 10.Measured second harmonic waveforms for the X-axis mode.
Figure 11 .
Figure 11.Measured second harmonic waveforms for the Y-axis mode.Figure 11.Measured second harmonic waveforms for the Y-axis mode.
Figure 11 . 16 Figure 12 .
Figure 11.Measured second harmonic waveforms for the Y-axis mode.Figure 11.Measured second harmonic waveforms for the Y-axis mode.Sensors 2024, 24, x FOR PEER REVIEW 10 of 16
Figure 12 .
Figure 12.Measured second harmonic waveforms for the Z-axis mode.
Figure 12 .
Figure 12.Measured second harmonic waveforms for the Z-axis mode.
Figure 13 .
Figure 13.Measured voltages induced by three axial magnetic fields.
Figure 13 .
Figure 13.Measured voltages induced by three axial magnetic fields.
Figure 14 .
Figure 14.Illustrations of sensor angle measurements in magnetic fields in the y-z (top figure), x-z (middle figure), and x-y (bottom figure) planes.
Figure 14 .
Figure 14.Illustrations of sensor angle measurements in magnetic fields in the y-z (top figure), x-z (middle figure), and x-y (bottom figure) planes.
Figure 14 .
Figure 14.Illustrations of sensor angle measurements in magnetic fields in the y-z (top figure), x-z (middle figure), and x-y (bottom figure) planes.
Figure 15 .
Figure 15.Output voltage for the sensor in a magnetic field in the (top) y-z, (middle) x-z, and (bottom) x-y planes.The blue, orange, and grey lines refer to the predicted values of the X mode, Y mode, and Z mode, respectively.
Figure 15 .
Figure 15.Output voltage for the sensor in a magnetic field in the (top) y-z, (middle) x-z, and (bottom) x-y planes.The blue, orange, and grey lines refer to the predicted values of the X mode, Y mode, and Z mode, respectively.
Figure 16 .
Figure 16.The square root of the sensor's PSD.
Figure 16 .
Figure 16.The square root of the sensor's PSD.
Figure 17 .
Figure 17.Photographs of simulated walking poses captured from state 1 to state 5.Figure 17.Photographs of simulated walking poses captured from state 1 to state 5.
Figure 17 .
Figure 17.Photographs of simulated walking poses captured from state 1 to state 5.Figure 17.Photographs of simulated walking poses captured from state 1 to state 5.
Figure 17 . 16 Figure 18 .
Figure 17.Photographs of simulated walking poses captured from state 1 to state 5.
Figure 18 .
Figure 18.The top, middle, and bottom figures show the output voltages for states 1 to 5 in the X, Y, and Z sensing modes, respectively.
Table 1 .
Definition of coil pairs for different sensing modes.
Table 2 .
Parameters for the sensor.
Table 3 .
Sensor performance for different sensing modes.
Table 3 .
Sensor performance for different sensing modes.
The mean √ PSD of the X sensing mode is 213 nT/ √ Hz.The mean √ PSD of the Y sensing mode is 71 nT/ √ Hz.The mean √ PSD of the Z sensing mode is 249 nT/ √Hz.The noise levels of the X, Y, and Z modes are 1.04 µT, 0.37 µT, and 1.24 µT, respectively.It can be observed that the noise level in the Y mode is the smallest, and this trend is consistent with the higher sensitivity. | 8,229.2 | 2024-03-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Mitoxantrone Loaded Superparamagnetic Nanoparticles for Drug Targeting: A Versatile and Sensitive Method for Quantification of Drug Enrichment in Rabbit Tissues Using HPLC-UV
In medicine, superparamagnetic nanoparticles bound to chemotherapeutics are currently investigated for their feasibility in local tumor therapy. After intraarterial application, these particles can be accumulated in the targeted area by an external magnetic field to increase the drug concentration in the region of interest (Magnetic-Drug-Targeting). We here present an analytical method (HPLC-UV), to detect pure or ferrofluid-bound mitoxantrone in a complex matrix even in trace amounts in order to perform biodistribution studies. Mitoxantrone could be extracted in high yields from different tissues. Recovery of mitoxantrone in liver tissue (5000 ng/g) was 76 ± 2%. The limit of quantification of mitoxantrone standard was 10 ng/mL ±12%. Validation criteria such as linearity, precision, and stability were evaluated in ranges achieving the FDA requirements. As shown for pilot samples, biodistribution studies can easily be performed after application of pure or ferrofluid-bound mitoxantrone.
Introduction
Application of chemotherapeutic agents bound to magnetic nanoparticles is a promising approach for site specific drug deposition. Magnetic-Drug-Targeting (MDT) is intended to elevate the drug concentration in the region of interest, because the drug loaded particles are enriched by a focused external magnetic field to defined body compartments after application [1][2][3][4]. This method can lead to both higher potency of antitumor treatments and reduction of negative side effects. In our model, drug loading was realized with mitoxantrone (MTO) (Figure 1), an anthracendion derivative which inhibits the DNA and RNA synthesis and causes DNA strand breaks by intercalation [5,6]. This active component was adsorptively bound to the iron oxide nanoparticles. This agent is used extensively in clinical trials to treat fatal diseases including leukemia, lymphoma, cancers of the breast and prostate [7], and to treat multiple sclerosis [8,9].
A quantitative determination of the biodistribution of magnetic nanoparticles can be achieved by magnetorelax-ometry. The applicability of this technique for the quantification of ferrofluids (FF) in tissue has been demonstrated previously for an extracted tumor slice model [10], a specific artery model [11], and is even in process for the whole body distribution. MTO enrichment might be different, so we are interested in developing an easy multifunctional method to quantify the amount of MTO, especially if it is bound to coated nanoparticles.
Various methods have been described to extract MTO from blood plasma [12][13][14][15][16] and tissues [17] on its own. Moreover there are established protocols to determine MTO by HPLC measurements in context of drug delivery systems like liposome carriers [18] and nanosphere vehicles [19]. Protein precipitation with 0.5 M hydrochloric acid: acetonitrile (90 : 10, v/v), as described by Johnson et al. [18], leads to a recovery of pure MTO in porcine muscle tissue of more than 85% (Table 1). Nevertheless, measurement of MTO bound to iron oxide nanoparticles is more difficult, and extracting whole organs of white New Zealand rabbits this way was unfeasible. [19], which has been applied in the form of bovine serum albumin (BSA) nanospheres and revealed also promising results in our context. But in order to reliably capture even trace amounts of MTO, an optimization of the extraction mode has been necessary, as well as the development of a subsequent solid phase extraction (SPE) method. Here we present a newly developed method to determine MTO in different tissues and of different binding statuses.
The ferrofluids were synthesized according to a protocol by Hodenius [20].
HPLC. The HPLC analyses were performed by a Waters
Alliance model consisting of a separation module (2695 series) and a dual wavelength absorbance detector (2487 series). The eluate was monitored at 254 nm. The separation was carried out using a 3.0 × 100 mm X-Bridge Phenyl column (Waters, Germany) with a particle diameter of 3.5 μm; the guard column consisted of the same material and was 3.0 × 20 mm of size. The column temperature was 55 • C, the mobile phase was made up of buffer (80 nM sodium formate and formic acid, pH 3.0) and methanol (80 : 20 v/v). Flow was 1 mL/min, the injection volume was 50 μL at room temperature. All measurements were performed after an HPLC-method validation including selectivity, linearity, limit of quantification, precision, accuracy, recovery rate, and stability according to a validation protocol [21,22]. Prior to the sample measurements, a calibration curve with MTOconcentrations of 10 ng/mL-20 000 ng/mL was done.
Method Development/Optimization.
For the extraction experiments, we used porcine muscle tissue and liver tissue which were doped with defined amounts of pure MTO or MTO bound to FF. This occurred by adding dissolved MTO at defined values (see Tables 1 and 2) to tissue homogenates and incubating overnight at room temperature. Due to the complexity of liver tissue and the successful determination here and moreover in muscle tissue, this method could be suitable to perform a complete biodistribution of MTO in rabbits. The extraction was a result of an optimization process. Liver or muscle tissue, doped with MTO, was homogenized for 1 minute using an Ultra Turrax apparatus (Ika, Staufen, Germany) to lyse the cell membranes prior to sonication. Several extraction mixtures containing hydrochloric acid or phosphoric acid in combination with MeOH were tested for extraction efficiency and their compatibility with the SPE procedures (see Tables 1 and 3). For the extraction, simple shaking for 4 or 24 hours was compared to vortex mixing and 1 to 4-fold ultrasonic treatment with a duration of 1 hour each using an ultrasonic bath (US) (Bandelin Sonorex TK 52, Berlin, Germany, without temperature control) ( Table 2).
To sum up, each 0.5 g MTO-containing tissue was treated 4 times for 1 hour with ultrasound in the extraction mixture (500 μL water, 50 μL ascorbic acid (20%) in citrate buffer (pH 3.0) 200 μL methanole (MeOH), 200 μL formic acid, 100 μL, 20% trichloroacetic acid, 400 μL chloroform) followed by centrifugation for 10 minutes at 8000 × G (Jouan MR 23 I, Germany). The combined supernatant was concentrated via a Bond Elut Plexa (200 mg, 6 mL, Varian, Darmstadt, Germany) cartridge after conditioning with acetonitrile (MeCN) and 2% formic acid in water. The analyte was eluted with 5 mL 2% formic acid in MeCN and dried under airstream. The water resolved residue was measured by HPLC as described above to display the efficiency of the extraction.
Optimization of the Extraction.
In terms of drug delivery with nanoparticles, only traces of MTO are embedded in different tissues so that the whole organ has to be processed. The extraction of MTO using 1 N hydrochloric acid (HCl), (Table 1). When it was bound to FF, the yield decreased significantly. Since a onephase-extraction led to turbid, protein-rich supernatants, protein precipitation by trichloroacetic acid (TCA) as well as Carrez-clearing [24] were used. Unfortunately, both methods did not show promising results. Especially, Carrez-clearing even leads to a blue coprecipitate of MTO.
A new point of view offered the extraction mode of Lu et al. [19] with a two-phase-extraction system to separate interfering lipophilic substances in organs, especially liver tissue. This method gave also the possibility to precipitate the protein fraction and save MTO from decomposition. Nevertheless, we modified this method because more tissue and less MTO required an optimization. We used homogenization for 1 minute via Ultra Turrax (Ika) to dehisce the cell membranes prior to sonication. Simple vortexing and just one single extraction cycle obtained only 26% recovery ( Table 2). Ultrasonic extraction was superior to simple vortexing, and multistep-extraction even increased the efficiency. A 4-fold extraction each for 1 hour was nearly quantitative (88 ± 4%) ( Figure 2).
Of course, the accelerated solvent extraction (ASE) ( Table 2) method led to high extraction rates in short time but this is very expensive. Moreover, ASE tends to require also a useful SPE processing afterwards.
Combination of Extraction and Cartridge Procedure.
To quantify the MTO amount of a whole organ, a useful SPE method was necessary. Unfortunately all the extraction modes described in Table 1 lead to turbid, protein-rich supernatants clogging the SPE cartridge. Using the RP-18 Cartridge (Sep-Pak-Plus C18, 360 mg, Waters) and a standard elution procedure with water and MeOH, recovery of MTO was not sufficient although pretesting with pure MTO gave promising results (>90%) ( Table 3). A recovery rate of 76% by extraction of MTO in porcine liver tissue with phosphoric acid was adequate (Table 3). It is also possible to extract, fixate, and elute MTO bound to FF when using phosphoric acid or phosphate; but in the presence of liver tissue, the recovery of MTO connected to FF was scarce after the elution from the cartridge. In that case, extraction mixtures lead at the most to 17% recovery and almost no reproducibility. This was also true when hydrochloric acid was used for the extraction. The FF dissolved under these conditions and interestingly only traces of MTO could be detected, no matter what kind of conditioning mode was used for the SPE.
The choice of the right cartridge was essential. The widespread used RP-18 cartridge seemed to be not useful in our context. Most of the tested cartridges perform the fixation and elution of released MTO bound to FF, but in presence of liver tissue only Varian Bond Elut Plexa gave appropriate yields with more than 80% recovery (Table 4).
To sum up, the whole method entails a 1-hour ultrasonic extraction repeated 3× of tissue with the 2-phase extractionsolution followed by concentration and purification via Varian Bond Elut Plexa cartridge and the consecutive HPLC measurement (Figure 3).
Selectivity.
Chromatograms obtained from processed blank liver tissue did not show interfering peaks at the retention time of pure MTO measured by HPLC as seen in Figure 4. Selectivity was proved from 6 blank animals using their liver, lung, kidneys and muscle tissue. Each extraction was measured 5 times via HPLC. The results have to be observed in detail. Kidney, lung, and muscle tissue show a baseline chromatographic resolution of at least 1.5 from all other sample components and can be declared as fully separated. This criterion cannot be achieved for liver. The deficient baseline resolution could influence the analyte response. Experiments were performed to evaluate to what extend the impurity peak will affect the final assay result. 6 extraction solutions doted with MTO at the same amount, as the impurity peak is responding to (100 ng/mL), were concomitant analysed with identical amount of 100 ng/mL MTO in mobile phase solution. T-test of these 2 different population means gave the result that the impurity peak influences the respond significantly. To meet the specific criteria that nonseparable impurity peaks influence the result at most for 0.5% MTO [25], amounts more than 250 ng/mL can seriously be quantified. unlike the LLOQ. The deviation of each reading point meets the FDA specification for linearity of ±15% (LLOQ ±20%). For quantification of unknown samples, a linear equation without any weighting was used. Goodness of fit was not evaluated, because the correlation coefficient with more than 0.999 guarantees a sufficient linear correlation between MTO concentration and analytical response over the whole range ( Figure 5). The LLOQ was determined according to the guidelines of FDA for bioanalytical method validation and is defined as the 5-time response compared to blank samples. This applies to the level of 10 ng/mL [21].
Precision.
Intraassay precision is determined for 3 concentrations of pure MTO (5000; 10000 and 15000 ng/mL). Each level was measured 5 times. The Level for 5000 ng/mL gave 1.0% relative standard deviation, the according data for the 10000 ng/mL and 15000 ng/mL levels are 0.9% and 1.0%, respectively. Interassay precision was performed on 5 different days for the same values mentioned for intraassay. The relative standard deviations were: 1.4% for 5000 ng/mL, 8.6% for 10000 ng/mL and 2.4% for 15000 ng/mL.
Accuracy.
Intraassay accuracy is determined for 3 concentrations of unbound MTO (5000, 10000, and 15000 ng/mL). Each level was measured 5 times. The Level for 5000 ng/mL gave a deviation of the mean from the true value of 1.4%; for 10000 ng/mL measurements gave 1.7% and 1.8% for 15000 ng/mL, respectively and so meet the criteria which are 80-120% for the target concentration.
The same experiments have been performed as interassay accuracy. Measurements mentioned above were arranged on 5 different days. The accuracy values (deviation of the mean from the true value) were 2.1% for 5000 ng/mL, 5.6% for 10000 ng/mL, and 0.3% for 15000 ng/mL. The total recovery rate (extraction and SPE) for MTO in liver tissue for series A (5000 ng/g) was 76 ± 2% for series B (500 ng/g) 67 ± 5% and for series C (50 ng/g) 68 ± 4%. The experiments were performed for 3 independent samples keeping in mind that the recovery can be different when using other tissues than liver. For biodistribution, recovery depends on the total weight of organs. The bigger the organs are the more MTO can be extracted in comparison to smaller organs having the same drug concentration.
Pilot Experiments in order to Perform Biodistribution
Studies. We used the method, described above with 4 × 1 hour ultrasonic extraction (Figure 3) After 24 hours, the animals were sacrificed and organs harvested. Complete organs of liver, lung, kidneys, and a muscle tissue sample were processed and treated in a 100 mL glass flask (Duran, Germany) with the respective amount of extraction solution increased to the amount for the weight of the entire organ as described above followed by SPE.
Suitability of the Method to Quantify Ferrofluid-Bound
MTO. The application of this newly developed method has been proven suitable in a starting in vivo pilot study with New Zealand White rabbits. First experiments with ferrofluid-bound MTO showed that the highest enrichment could be found in the kidneys. The results exhibited the wide range of MTO amounts in different tissues and proved the suitability of the method for different kinds of biological materials (Table 5).
Discussion
To the best of our knowledge, none of the existing protocols, concerning the analysis of mitoxantrone in tissues, have been feasible for our approach of ferrofluid-bound MTO. Moreover, there is especially a lack of techniques to detect small amounts in large scaled tissues. With our method connecting efficient extraction procedures with enrichment strategies towards detectable extraction solutions, we are able to perform biodistribution studies. Monitoring MTO enrichment in different tissues in case of FF-bound drug delivery and overlapping the results with magnetorelaxometric measurements, complex biodistribution patterns after application of MTO-bound iron oxide nanoparticles become low hanging fruits. Current developments in chemotherapy increasingly base on nanoscaled delivery systems. Nanoparticles, liposomes, encapsulations, micelles, and other nanostructures carrying chemotherapeutics require newly developed or modulated analytical methods. With this protocol in hand further detailed analytical instructions can be developed for adsorptive drug delivery systems using well-established and prevalent applied HPLC.
Conclusion
Here we present an easy-to-perform and versatile applicable new method to determine even small amounts of mitoxantrone, regardless if it is in pure or ferrofluid-bound condition in tissues or other biological matrices. Established procedures have not been useful in our context to detect mitoxantrone, especially in presence of FF and contained in tissue samples weighing more than 100 g which especially is necessary for medical application studies in cancer research. We applied this method to evaluate the biodistribution of MTO in rabbit tissue. Within first experiments, we could easily perform the measurements of whole organs and showed the applicability of this new method and its widespread possibilities and analytical applications. | 3,591 | 2010-05-13T00:00:00.000 | [
"Biology"
] |
Searches for neutrino counterparts of gravitational waves from the LIGO/Virgo third observing run with KM3NeT
The KM3NeT neutrino telescope is currently being deployed at two different sites in the Mediterranean Sea. First searches for astrophysical neutrinos have been performed using data taken with the partial detector configuration already in operation. The paper presents the results of two independent searches for neutrinos from compact binary mergers detected during the third observing run of the LIGO and Virgo gravitational wave interferometers. The first search looks for a global increase in the detector counting rates that could be associated with inverse beta decay events generated by MeV-scale electron anti-neutrinos. The second one focuses on upgoing track-like events mainly induced by muon (anti-)neutrinos in the GeV–TeV energy range. Both searches yield no significant excess for the sources in the gravitational wave catalogs. For each source, upper limits on the neutrino flux and on the total energy emitted in neutrinos in the respective energy ranges have been set. Stacking analyses of binary black hole mergers and neutron star-black hole mergers have also been performed to constrain the characteristic neutrino emission from these categories.
1 Introduction The first detection of a gravitational wave (GW) signal from a binary compact merger [1] initiated in 2015 a new era in multi-messenger astronomy.The subsequent observation in 2017 of a GW signal from the binary neutron star merger event GW170817 and of prompt and afterglow electromagnetic emissions from the associated short gamma-ray burst [2] was the first and so far unique multi-messenger observation of its kind.Models exist of production of neutrinos from these compact mergers, especially for mergers involving neutron stars such as binary neutron star mergers (BNS) [3] or neutron star-black hole mergers (NSBH) [4], though some models also predict neutrino emissions from binary black hole mergers (BBH) [5].Although most of the studies focus on hadronic processes leading to high-energy neutrino production (E ν ≳ GeV), thermal neutrinos in the MeV regime may also be produced [6].
Searches for neutrinos associated with GW signals from compact binary mergers have already been performed with other neutrino telescopes across the globe e.g., ANTARES [7,8], IceCube [9][10][11], and Super-Kamiokande [12], without positive evidence of a common signal so far.
The KM3NeT detector, currently under construction, was taking data with a partial configuration during the third GW observation campaign in 2019-2020, allowing for a first search for neutrino counterparts.The article presents the dedicated analyses that have been developed for the search and the first results obtained with KM3NeT data, using the latest GW public catalogs as detailed below.
Two independent analyses have been performed, each of them optimized for the detection of a prompt signal in a short time window around the GW event, and for a specific neutrino energy range.Section 2 describes the search for neutrinos in the 5-30 MeV range using a similar method to the one used to detect Core-Collapse Supernovae (CCSN) [13], while section 3 presents the search for neutrinos with energies from GeV to TeV.
The results of both searches are presented in section 4. The observations are converted into constraints on the incoming neutrino flux and on the total energy radiated in neutrinos for an isotropic emission around the source, in the relevant energy ranges, assuming a quasi-thermal distribution for MeV neutrinos and a single power law for GeV-TeV neutrinos.Additionally, for the latter, a stacked analysis has been performed to constrain the typical emission from BBH and NSBH objects.Results and prospects for future observations are discussed in section 5.
The KM3NeT neutrino telescope
The KM3NeT Collaboration is building two large-volume neutrino detectors in the depths of the Mediterranean Sea [14].They rely on the detection of the Cherenkov light induced by charged particles produced in neutrino interactions, using about 200,000 three-inch photomultiplier tubes (PMTs).The PMTs are arranged in digital optical modules [15] (DOMs, with 31 PMTs each), deployed along vertical lines anchored at the sea bed, with 18 DOMs per line.
The KM3NeT/ORCA detector, located near Toulon (France), will be equipped with 115 such lines, with inter-line and inter-DOM spacings that are optimized for the detection of GeV-scale neutrinos and the study of atmospheric neutrino oscillations.The KM3NeT/ARCA detector is located near Capo Passero in Sicily (Italy) and will consist of two blocks of 115 lines, with larger spacings optimized for TeV-PeV astrophysical neutrinos.Detection lines are currently being deployed on both sites.At the time of the GW observations in 2019-2020, ORCA was taking data with two lines (ORCA2) before July 1, 2019, with four lines (ORCA4) during the period from July 1, 2019 to January 17, 2020, and then with six lines (ORCA6), as illustrated on the detector footprint in Figure 1.The ORCA2 configuration is not considered in the following, as it is not large enough to perform a proper astrophysical search.The KM3NeT/ARCA detector has no data available for physics analysis during the considered period.
KM3NeT data is organized in consecutive runs of a few hours, and two main categories of neutrino events can be identified within the data.As it will be detailed in section 2, MeV neutrinos produce individually a very faint signal such that they can only be detected through a global increase of the detector counting rate linked to many MeV neutrinos interacting simultaneously.For higher energies (GeV and above), the total amount of deposited Cherenkov light distributed over multiple DOMs is sufficient to define unambiguously an event.This event would eventually be associated with an individual neutrino candidate.
The gravitational wave catalogs
The paper focuses on candidate binary mergers detected during the third observing run (O3) of the LIGO and Virgo GW interferometers reported in the three catalogs: • GWTC-2 [16]: it reports 39 significant detections made during O3a, the first half of O3, running from April to September 2019.
• GWTC-2.1 [17]: this is an update of GWTC-2 with eight additional events detected during O3a but not reported in the previous catalog.
• GWTC-3 [18]: it reports 35 significant detections during O3b, the second half of O3, from November 2019 to March 2020.In addition, the catalog reports seven marginal candidates, out of which GW200105 162426 has been identified as an interesting NSBH candidate and is therefore included in the analysis, making the total number of selected events 36 for GWTC-3.
The data releases provided by the LIGO-Virgo Collaboration contain detailed information for each GW event including its timing t GW , the localization skymap P(Ω), and the full posterior samples with all relevant source parameters: direction Ω, luminosity distance D L , masses m 1,2 , and total radiated energy in GWs E GW (defined as the difference between the final object mass and the sum of the masses of the initial objects).The different categories of events (BBH, NSBH, BNS) are determined on the basis of the individual masses of the merging objects, with a chosen boundary at m = 3 M ⊙ separating between neutron stars (below) and black holes (above).Other parameters are used in the follow-up analyses detailed in the following sections.
Search for neutrinos in the 5-30 MeV energy range
In a DOM, a hit is recorded when the voltage of a PMT rises above a 0.3 photoelectron threshold.Every hit is recorded and digitized before being grouped in segments of 100 ms called timeslices.Most of the recorded hits originate from optical noise due to radioactive decays in seawater, mainly 40 K (around 7 kHz per PMT), bioluminescence which can cause localized increases up to the MHz range, and atmospheric muons, as characterized in [19,20].
In the 5-30 MeV energy range, KM3NeT is mainly sensitive to the inverse beta decay channel, where electron anti-neutrinos interact with free protons in the water to produce low-energy positrons.Those secondary particles emit Cherenkov light for only a few tens of centimeters.As the distance between optical modules is optimized for the detection of higher energy neutrinos (above few GeV), one such neutrino would only produce hits in a single DOM.Optical noise would also produce such a localized signal, making it indistinguishable from a single neutrino interaction.
Therefore, MeV neutrinos can only be detected as a global increase in the rate of coincidences between PMTs in single DOMs.The current method implemented to detect MeV neutrinos with the KM3NeT detector is optimized for the detection of a Galactic or near-Galactic CCSN, as described in [13,21].The method assumes a quasi-thermal neutrino distribution and an emission duration of around 500 ms, similar to what is expected for CCSN.
To reduce the contamination from optical noise, the concept of coincidence is defined.A coincidence consists of at least four hits within one DOM and with PMTs within a 90-degree opening angle, with all the hits in a time window of 10 ns.The coincidence level is then defined as the number of coincidences over the whole detector in a sliding window of 5 timeslices (with a total duration of 500 ms) and is estimated every 100 ms.This parameter is expected to follow a Poisson distribution, characterized by a parameter bc referred to as the "expected background" in the following.
The search focuses on prompt neutrino emission coincident with the GW event, with similar timing as expected for a CCSN [22].Existing models for prompt MeV neutrino emission from binary mergers have most of the signal in tens of milliseconds after the merger [6,23], though the signal may extend up to a few seconds.However, to determine the time window during which the temporal correlation search is performed, it is necessary to consider the time-of-flight difference ∆T flight between gravitational waves and MeV neutrinos (assuming the former travel at the speed of light): where D max is the estimated distance of the farthest GW source, v ν , m ν , p ν , and E ν are respectively the speed, mass, momentum, and energy of the neutrino.Given the current constraints on the neutrino mass [24,25] and the distances of GW events reported in considered catalogs, it is found that ∆T flight < 2 s.
The search window should be as short as possible to keep the trial factor (number of times the coincidence level is computed) low.A fixed time window of 2 s after the GW event, covering solely the bulk of the expected prompt signal and the maximum expected time-of-flight difference, is thus considered in the following.
The search consists of three steps: the selection of runs with sufficient quality, the characterization of the background, and the search for a time-correlated signal in the 2 s window.
Run selection
The characterization of the coincidence levels due to the expected background is needed to perform the analysis.For each GW event, all data from the run covering the GW time is used, in addition to the specific coincidence levels during the corresponding 2 s time window.For five of those GW events, data acquisition issues prevented data from being retrieved.In order to remove occasional anomalies such as sparking PMTs, which may result in multiple coincidences happening in a single DOM during 100 ms, a quality score is computed in association with every coincidence level.The quality score, as described in [21], checks the consistency between the number of coincidences and the number of DOMs detecting at least one coincidence.A low score would indicate that one or several DOMs are producing an anomalous number of coincidences, which is not compatible with the expected background or signal.One additional GW event was removed from the studied sample due to a low-quality score within the 2 s time window, taking the total number of disregarded GW events to six.The analysis described below focuses on the 55 remaining GW events.
Background characterization
In the sea bioluminescence may lead to a localized increase of the hit rates up to the MHz level, causing the need to veto PMT with rates above ∼ 100 kHz with the embedded electronics of the DOMs [26].This leads to a non-constant number of active PMTs over the whole detector, which also causes variation in the expected background.The typical timescale of those variations is a few hours.The relation between these quantities is shown in Figure 2, where every dot is the computed expected background averaged for the whole detector, for a given range of fraction of active PMTs, as obtained from ∼ 200 runs, uniformly distributed in the ORCA4 and ORCA6 data-taking periods.As expected, a smaller fraction of active PMTs leads to a smaller expected background.
For each run containing a GW event, the expected background is inferred from the observed fraction of active PMTs based on a linear fit as shown in Figure 2. The agreement between this expectation and the observed rate has been found to be sufficient for most of the runs containing a GW event, except the six ones between December 19, 2019 and January 25, 2020.The disagreement is due to a network issue between the ORCA detector and the shore station.Instead of using the linear fit, the expected background is directly taken from data for the six runs in question.As the fraction of active PMTs is relatively stable in the runs of interest, this expected background estimation is adequate.
Statistical analysis
As there is no event-by-event direction reconstruction of neutrinos at the MeV scale, the analysis consists only of a time coincidence search.For every GW event the 20 coincidence levels in the [t GW , t GW + 2 s] time window (every 100 ms in the search window) are retrieved, as shown on the left panel of Figure 3, and the maximum coincidence level c max is extracted.Pseudo-experiments are then generated using the expected background bc inferred from the observed fraction of active PMTs (Figure 4 p-value p and the 90% confidence level upper limit on the number of coincidences due to a neutrino signal µ 90% sig using the Feldman-Cousins [27] statistical approach.In order to translate this quantity into physical limits, the number of expected signal events µ sig,fulldet (E 0 , D L,0 ) is computed for a reference CCSN at a distance D L,0 with a neutrino fluence Φ 0 and a total released neutrino energy E 0 , in a full ORCA detector (115 × 18 DOMs), and with perfect efficiency (η = 1, where η is the ratio of the measured expected background to the one when all PMTs are active).By correcting for the number of active DOMs n active DOMs and for η, an upper limit is obtained on the total neutrino fluence Φ 90% and on the total energy emitted in MeV neutrinos by the source E iso,90% tot,ν : The reference values have been computed from refined simulations based on the work done in [13,21], assuming a quasi-thermal emission of electron anti-neutrinos: µ sig,fulldet (E 0 , D L,0 ) = 132.5,D L,0 = 10 kpc, Φ 0 = 8.2 × 10 10 cm −2 , and E 0 = 3 × 10 53 erg.
3 Search for neutrinos in the GeV-TeV energy range The search focuses on track-like events, mostly generated by muons produced in charged-current (CC) interactions of muon (anti-)neutrinos in the vicinity of the detector.Other event topologies are not investigated in this search.
The muon direction can be reconstructed by fitting the PMT hit patterns to the expected Cherenkov emission [28].Only tracks reconstructed as upgoing or close to horizontal (i.e., with a reconstructed zenith direction θ such as cos(θ) > −0.1) are selected, in order to significantly reduce the bulk of background events caused by downgoing atmospheric muons.After this selection, the remaining backgrounds affecting the search for cosmic neutrinos are atmospheric neutrinos and atmospheric muons wrongly reconstructed as upgoing.At this level, the muon contribution is still dominant as it represents more than 99% of the observed event rate.
To further reduce the background, only events in time coincidence and in a direction compatible with the GW localization are considered.The time correlation is performed by selecting events in a time window [t GW −500 s, t GW +500 s], a conservative estimate of the expected delay between the highenergy neutrino and the GW emission [29].This time window is much larger than the one employed in section 2 as there is no problem with trial factor for the GeV-TeV search and it is therefore possible to probe not only prompt neutrino emission but also precursor or delayed processes.The source is assumed to be located within the region R 90 containing 90% of the GW probability as built directly from the GW skymap P GW .Then, the space correlation criterion corresponds to considering only events reconstructed with direction ⃗ x within R + 90 defined as: This extension aims to cover the detector's angular resolution and corresponds approximately to the 90% containment angle.In the following, ∆ϕ is fixed to 30 • ; such a large value is due to the small size of ORCA4 and ORCA6 detectors which leads to a large tail in the angular error distribution, as illustrated in Figure 7 in section 3.2.It should be significantly reduced with the expansion of the detector.
The analysis pipeline consists of three steps: a pre-selection of data to be analyzed according to its quality (section 3.1), an optimized event selection (section 3.2), and a statistical analysis to extract observation significance or upper limits on the neutrino emission (section 3.3).
Run selection
Careful checks have been implemented to ensure data quality and data-taking stability around each GW event.Conservative cuts are applied to remove all runs with non-stable trigger rates, or with other issues in terms of data quality, acquisition, or calibration.It reduces the considered total livetime (entire period of data taking, also beyond O3) from 181 to 174 days for ORCA4 and from 366 to 343 days for ORCA6.
It excludes nine GW candidates for which a follow-up is not possible as the corresponding detector runs are not selected.Furthermore, two additional GW events (GW200224 222234 and GW200311 115853) are excluded as they have been constrained by GW observations as being fully above the KM3NeT horizon.A total of 50 GW sources remain, including 44 BBHs and 6 NSBHs.The number differs from the one reported for MeV neutrinos in section 2, the chosen quality criteria being different as analyses rely on separate data streams with distinct responses to data-taking conditions.
The average rate of neutrino candidate events in the upgoing and horizontal region, in 2-day intervals, is shown in Figure 5 for the two detector configurations superimposed on the time periods covered by the GW catalogs.The main cause of fluctuations in the rate of reconstructed events is the variability of the bioluminescence at the detector.This affects the number of active PMTs as discussed in section 2 which leads to fluctuations in the number of events and changes in the efficiency of track fitting.
Analysis pipeline
The number of events in the ON-zone region in the search time window and in the direction of the GW event is compared to the expected background from mis-reconstructed atmospheric muons and atmospheric neutrinos, as estimated from OFF-zone data.A Boosted Decision Tree (BDT, based on gradient boosting [30]) model is applied to select signal-like events from the dominant atmospheric muon background [31].It is trained with Monte Carlo simulations of ν µ CC interactions (with neutrino energies up to 5 TeV) generated with gSeaGen [32] and muons simulated with MUPAGE [33].The training uses a collection of 24 (14) features for ORCA4 (ORCA6), including low-level variables on the detected light as well as higher-level variables from track maximum likelihood fit results.The distribution of the final BDT scores is shown in Figure 6 for data and for Monte Carlo simulations.
The ON-zone region refers to events within a ±500 s time window centered on the GW event time, reconstructed as upgoing or horizontal tracks and in R + 90 .The OFF-zone events are track-like events reconstructed within the same region in local coordinates, but at times incompatible with the GW.The OFF-zone background sample consists of a subset of runs during the same data period (ORCA4 or ORCA6) and with similar data-taking conditions with the run containing the GW event, as evaluated based on the event rate R loose after a loose cut on the BDT score.Runs with rates the range [R ⋆ loose − δ, R ⋆ loose + δ] (where R ⋆ loose is the event rate for the run containing the GW event) are selected, and the value of δ is optimized for each GW event to ensure < 10% statistical uncertainties while having a representative background estimate.The remaining data of the run containing the GW time, outside the ±500 s time window, is also part of the background sample.The ratio between the livetimes of the ON-zone and the OFF-zone regions is denoted α ON/OFF .
A model rejection factor (MRF, [34]) minimization is used to optimize the cut on the BDT score, with the signal being defined as an all-flavor E −2 neutrino spectrum, and the background being estimated from the OFF-zone region scaled by α ON/OFF .The final cut may vary for each GW event so that the final expected background in the ON region depends on the detector conditions at the time of each GW.The detector effective area and acceptance after all cuts are estimated with the same E −2 signal Monte Carlo simulations.The sky is divided into pixels using the HEALPix method [35] and the direction-dependent acceptance A(Ω) is obtained for all pixels within the region R 90 .
The average effective areas, event distributions, median angular resolution (defined as the 50% containment angle), and angular error are shown in Figure 7 after score selection optimizations for a ν µ + νµ flux of 10 −4 E −2 GeV −1 cm −2 s −1 .It should be noted that in terms of angular resolution, ORCA4 seems to outperform ORCA6 at energies below 100 GeV as the optimized selection is stricter in this energy range for the 4-line configuration due to its smaller size so that only higher-quality events remain.It is reflected in the event distributions, as the rate of selected low-energy events is lower.When averaged over an E −2 spectrum, the median angular resolution for ORCA4 and ORCA6 are 1.85 • and 1.63 • , respectively.It corresponds roughly to containment angles in the energy region above 100 GeV on the bottom left plot of Figure 7, as the events at these energies are those contributing the most to the overall expected flux.
The numbers of ON-zone events N ON and OFF-zone events N OFF , are respectively the number of events in the ON-zone and OFF-zone regions with a BDT score above the optimized cut.The mean expected number of background events in the ON-zone region is then b = α ON/OFF × N OFF . (3. 2) The corresponding Poisson p-value p, the Poisson probability of observing at least N ON events with an expected background of b events (neglecting the related statistical uncertainty for this computation), can thus be estimated.
Limits on the incoming neutrino flux for individual GW events
The number of detected events after all cuts is compared to the background expectation from the OFF-zone region.In the absence of a significant excess, upper limits on the neutrino emission are extracted using the Bayesian framework JANG [36].
Upper limits on the flux normalization ϕ assuming an all-flavor time-integrated neutrino spectrum dN/dE = ϕ • (E/GeV) −2 are obtained, under the assumption of equipartition at Earth between neutrino flavors (ν e : ν µ : ν τ = 1 : 1 : 1).The corresponding likelihood is defined as: where b is the expected background and A(Ω) = a•f (Ω) is the direction-dependent detector acceptance, estimated with Monte Carlo simulations for each GW follow-up.A Poisson prior π(b) on the background with parameter λ = b/α ON/OFF encodes the information obtained from the measurements in the OFF-zone region.A 15% (10%) systematic uncertainty on the detector acceptance for ORCA4 (ORCA6) is reflected by defining a Gaussian prior on the acceptance normalization a.The GW localization skymap provided in the LIGO/Virgo catalogs is employed as prior knowledge on the source direction Ω: π(Ω) = P(Ω).Finally, a flat prior is considered for the parameter of interest ϕ.The posterior is then marginalized over nuisance parameters: where the integration is performed with Monte Carlo integration techniques and C is a normalization constant.The marginalization over the source direction is only performed over the intersection R vis 90 between the region R 90 containing 90% of the GW probability and the visible sky using the KM3NeT upgoing track sample.The 90% upper limit on the time-integrated flux normalization ϕ 90% is obtained by solving ϕ 90% 0 P (ϕ)dϕ = 0.90.
Limits on the total energy for individual GW events
Similarly, upper limits on the total energy emitted in neutrinos E iso tot,ν = 4πD 2 L Emax Emin E ×(dN/dE) dE, or on the ratio between the neutrino emission and GW emission f iso ν = E iso tot,ν /E GW , assuming an E −2 spectrum and isotropic emission, are also derived.The procedure is similar to the ones described above with the luminosity distance D L as an additional parameter (and the total radiated energy E GW as well for limits on f ν ).The integration bounds are fixed to E min = 1 GeV and E max = 100 PeV though the obtained results may be easily scaled for different choices of bounds (e.g.E iso tot,ν ∝ log(E max /E min ) for an E −2 spectrum).
Population studies
A stacking analysis of all BBH events is also performed by combining the individual follow-up results and constraining the typical E iso tot,ν (f iso ν ) from those objects, assuming they have the same total energy released in neutrinos (the same ratio between neutrino and GW emissions).To account for the current analysis being limited to neutrinos below the horizon (and not all-sky sensitive), stacking pseudo-experiments are performed which include each GW follow-up with a probability equal to the visibility of the corresponding R 90 region.This quantity is defined as the ratio between the integrated GW probabilities in R vis 90 and in R 90 : The quoted limit is the median value obtained from these pseudo-experiments.A similar population study is performed considering the 6 NSBH candidates in the catalogs.
Results
The final results for the two analyses described in section 2 and section 3 are presented in Table 1 and Table 2 respectively.No excess has been found in any of the samples and follow-ups.Therefore, only upper limits on the neutrino emission are computed and reported in the same table.For the GeV-TeV analysis, due to the low expected background rates, the computed p-values are always either 1 (if N ON = 0) or typically smaller than 0.5 (if N ON > 0) ; the values are then only provided in the latter case.
Table 1: Summary of O3 follow-up results of the MeV analysis.For each GW event, the third and fourth columns give the expected background bc and the maximum observed coincidence level c max during the 2 s window after the GW event.The next two columns report the False Alarm Rate (FAR, number of times per day one expects to observe c max coincidences originating only from background) and the p-value.The last two columns provide the obtained upper limits on the neutrino emission, in terms of the incoming fluence and the total energy emitted in neutrinos.Table 2: Summary of O3 follow-up results with the high-energy analysis.The second and third columns indicate the most probable merger type given the masses in the catalog and the GW localization visibility V at ORCA at the time of the merger.The next three columns report the mean expected number of background events b, the observed number of events in the ON-zone region N ON , and the corresponding Poisson p-value p (in case of non-zero observations), and the last three are the obtained upper limits on the neutrino emission, in terms of the incoming time-integrated flux, the total energy emitted in neutrinos, and the ratio between neutrino and GW emissions.Individual limits for the 5-30 MeV energy range
GW name Merger
Only four events have a p-value lower than 0.2, with a minimum of 0.02, which is fully compatible with the background expectation.For the considered 2 s time window, the upper limits on the neutrino fluence range between 10 10 and 10 11 cm −2 , and on the total energy emitted in neutrinos between 10 60 and 10 63 erg, as reported in Figure 8.Given that these limits are not very constraining with respect to the total available energy budget in the merger (≲ 10 55 -10 56 erg), stacking limits have not been estimated for this energy range.
lines to be deployed), already bring complementary information as the two ORCA configurations are sensitive in a lower energy range than ANTARES and they provide better differential limits in that region of the spectrum.The small size of the ORCA4 and ORCA6 configurations, combined with this difference in terms of energy range, lead to worse integrated limits when comparing to ANTARES or to IceCube high-energy limits [11], as illustrated on Figure 11., for GW190814 (for ANTARES and ORCA4) and GW200208 130117 (for ORCA6).The two GW events have very similar sky coverage and thus comparable limits.The differential upper limits (horizontal lines) were obtained by considering independently bins in true neutrino energy and computing the corresponding limit on the flux normalization assuming an E −2 spectrum only within each bin (and zero elsewhere).
In the MeV range, the obtained limits are of the same order as the ones reported by KamLAND [37], although one to two orders of magnitude worse than Super-Kamiokande's [12], as shown on Figure 11.
As of autumn 2023, 18 lines are operating for ORCA and 28 for ARCA, with more lines scheduled to be deployed later in 2023 and in the following years.During the following GW observation campaigns, especially O4 which has started in spring 2023, follow-ups will be performed with much larger detectors than discussed in this article, leading to improved sensitivities and an extended energy range coverage.More detailed neutrino emission models may also be explored, beyond the isotropic E −2 and quasi-thermal spectra investigated in the present study.
The ARCA configuration, which did not contribute to the present results for O3, is expected to participate for the first time in the follow-ups during O4.Its energy coverage at very high energy (≳ TeV) is complementary to ORCA, hence enhancing KM3NeT sensitivity and the discovery lever arm, especially for hard spectra.As the field of view of KM3NeT is very different from that of IceCube, even partial KM3NeT detectors will be able to contribute significantly to the searches, especially for sources localized in the Southern Sky.
For MeV neutrinos, the gain is directly proportional to the size of the detector, as outlined in Equation 2.2, and KM3NeT is expected to reach similar sensitivities as Super-Kamiokande by the end of the construction.
The KM3NeT telescope is also performing real-time follow-ups during O4, planning to release results as fast as possible to help constrain the localization of a potential joint source and guide electromagnetic observations.It will improve the chance of identifying the corresponding electromagnetic emission and thus eventually constrain source models, jet structure, and production mechanisms.Range of 90% upper limits on the total neutrino fluence for both analyses.For MeV-scale neutrinos, these are directly the limits reported in Table 1.For all reported results above 1 GeV, the fluence is computed integrating energies above 1 GeV (Φ = 1 GeV ϕE −2 dE), and the horizontal widths of the bands delimit the central energy range expected to contain 90% of the signal events (except for IceCube and Super-Kamiokande results where the full sensitive range is shown).The ANTARES limits are reported in [8].The IceCube results are extracted from [10], [38], and [11], from left to right.The Super-Kamiokande results are obtained from [12,39].
Furthermore, the increasing number of detected GW sources, especially binaries involving neutron stars, will enhance the capability of stacking analyses.Even in the absence of individually significant sources, some hints of neutrino emission may arise for a sub-population of these sources, as a slight deviation from background-only predictions.Though the underlying production mechanisms are very different, covering different energy ranges from MeV to PeV with KM3NeT may help reveal the nature of the sources or identify sub-populations.
Figure 1 .
Figure1.Footprint of the planned ORCA detector, with the ORCA4 and ORCA6 configurations highlighted in blue and red, respectively.
Figure 2 .
Figure 2. expected background of the coincidence level as a function of the fraction of active PMTs for ORCA4 (blue) and ORCA6 (red).The crosses indicate averaged values over the full periods, and the dashed lines are linear fits to these points.
FrequencyFigure 3 .
Figure 3. Timeline of the coincidence levels around GW101204 110529 (left) and distribution of the maximum coincidence level cmax for different values of the expected background bc (right).On the left, the solid black line indicates the GW event time, and the dashed black line is the end of the 2 s time window during which the search is made.
Figure 4 .
Figure 4. Time series (left) and distribution (right) of the fraction of active PMTs for the run covering GW191204 110529.On the left, the top plot shows the variability of the fraction of active PMTs during the run while the bottom plot is a zoom on the 2 s time window starting from the GW event time.On the right, the distribution of the fraction of active PMTs is shown in blue for every timeslice of the run and in orange for the 20 timeslices inside the 2 s time window.
Figure 5 .
Figure 5. Rate of reconstructed events averaged over intervals of two days, for the two detector configurations ORCA4 (blue points) and ORCA6 (red points) in the data set.The shaded regions indicate the O3a and O3b periods.
Figure 10 .
Figure 10.Comparison of ORCA effective areas at upgoing track selection level with ANTARES[8], for GW190814 (for ANTARES and ORCA4) and GW200208 130117 (for ORCA6).The two GW events have very similar sky coverage and thus comparable limits.The differential upper limits (horizontal lines) were obtained by considering independently bins in true neutrino energy and computing the corresponding limit on the flux normalization assuming an E −2 spectrum only within each bin (and zero elsewhere).
Figure 11 .
Figure 11.Range of 90% upper limits on the total neutrino fluence for both analyses.For MeV-scale neutrinos, these are directly the limits reported in Table1.For all reported results above 1 GeV, the fluence is computed integrating energies above 1 GeV (Φ = 1 GeV ϕE −2 dE), and the horizontal widths of the bands delimit the central energy range expected to contain 90% of the signal events (except for IceCube and Super-Kamiokande results where the full sensitive range is shown).The ANTARES limits are reported in[8].The IceCube results are extracted from[10],[38], and[11], from left to right.The Super-Kamiokande results are obtained from[12,39]. | 8,092 | 2023-11-07T00:00:00.000 | [
"Physics"
] |
An Elliptic Triptych
We clarify three aspects of non-compact elliptic genera. Firstly, we give a path integral derivation of the elliptic genus of the cigar conformal field theory from its non-linear sigma-model description. The result is a manifestly modular sum over a lattice. Secondly, we discuss supersymmetric quantum mechanics with a continuous spectrum. We regulate the theory and analyze the dependence on the temperature of the trace weighted by the fermion number. The dependence is dictated by the regulator. From a detailed analysis of the dependence on the infrared boundary conditions, we argue that in non-compact elliptic genera right-moving supersymmetry combined with modular covariance is anomalous. Thirdly, we further clarify the relation between the flat space elliptic genus and the infinite level limit of the cigar elliptic genus.
Introduction
Mock modular forms have an illustrious history in mathematics [1]. However, a systematic understanding of mock modular forms is recent [2] and evolving. Mock modular forms also appeared in physics in various guises [3][4][5]. A natural habitat for mock modular forms and their non-holomorphic modular completion was provided by the demonstration that they arise as elliptic genera of two-dimensional superconformal field theories with continuous spectrum [6]. As such the completed forms appear also as duality covariant counterparts to black hole entropy counting functions [7].
In this paper, we wish to clarify three aspects of non-compact elliptic genera. The first comment we make is on the compact form of the elliptic genus of the cigar derived by Eguchi and Sugawara in [8]. It is a modular covariant sum over lattice points which is an exponentially regulated Eisenstein series. Since it is manifestly modular covariant, one can wonder whether it has a simple direct path integral derivation. We demonstrate that a path integration of the non-linear sigma-model description of the cigar provides such a derivation. The second remark, in section 3, is based on an analysis of the temperature dependence of the weighted trace T r(−1) F e −βH in supersymmetric quantum mechanics with a continuous spectrum. Upon regularization, the trace becomes β-dependent in a manner that hinges upon the choice of regulator. We demonstrate this in detail, analyze the supersymmetric regulator and its path integral incarnation, and the role of infrared boundary conditions. We use it to lay bare the unresolvable tension between right-moving supersymmetry and modularity in the noncompact elliptic genus. In a third and final part, we clarify the relation between the flat space superconformal field theory and the infinite level limit of the cigar conformal field theory using their elliptic genera.
The Path Integral Lattice Sum
In this section, we wish to obtain a simpler path integral understanding of the compact formula for the elliptic genus of the cigar in terms of a lattice sum, derived in [8]. To that end, we provide a new derivation of the elliptic genus of the cigar, through its supersymmetric nonlinear sigma-model description. The latter has the advantage of being parameterized in terms of the physical degrees of freedom only.
The Guises of the Genus
The cigar elliptic genus is a partition sum in the Ramond-Ramond sector, weighted by left-and right-moving fermion numbers F L,R , as well as twisted by the left-moving R-charge Q. It was computed manifestly covariantly through a path integral over maps from the torus into the coset SL(2, R)/U(1) target space [6]. The result obtained in [6,9,10] was χ cig (τ, α) = k where the θ 1 functions arise from partition functions of fermions and bosons with twisted boundary conditions on the torus, the integers m, w are winding numbers for the maps from the torus onto the target space angular direction, and the angles s 1,2 are holonomies on the torus for the U(1) gauge field used to gauge an elliptic isometry of SL(2, R). The twist with respect to the left-moving R-charge is given by α. This modular Lagrangian result was put into a Hamiltonian form in which the elliptic genus could be read directly as a sum over right-moving ground states plus an integral over the differences of spectral densities for the continuous spectrum of bosonic and fermionic right-movers [6,10] . The difference of spectral densities is determined by the asymptotic supercharge [6,11,12]. In [8], a rewriting of the result (2.2) in terms of a lattice sum was obtained. The resulting expression for the cigar elliptic genus is This expression is also manifestly modular covariant, because it is written as a sum over a lattice Z + Zτ . Our goal in this section is to understand the formula (2.3) in a more direct manner than through the route laid out in [6,[8][9][10]. We recall that a key step in the derivation of the lattice sum (2.3) was to first compute the elliptic genus of the infinite cover of the Z k orbifold of the trumpet geometry [8,13].
The Infinite Cover of The Orbifolded Trumpet
We start our calculation from the cigar geometry [14][15][16] where the angle θ is identified modulo 2π. The metric and dilaton determine the couplings of a conformal two-dimensional non-linear sigma-model. The T-dual geometry is the Z k orbifold of the trumpet: where the angle θ is again identified modulo 2π. The infinite cover of the orbifold of the trumpet is the geometry in which we no longer impose any equivalence relation on the variable θ.
We perform the path integral on the cover as follows. Firstly, we consider the integral over the zero modes and the oscillator modes separately. We suppose that the oscillator contribution on the left is proportional to the free field result for a left-moving fermion of R-charge 1 and two uncharged bosonic fields. The factor 1/(4π 2 τ 2 ) is the result of the integral over momenta (at α ′ = 1). The right-moving oscillators cancel among each other. We want to focus on the remaining integral over zero modes, which contains the crucial information on the modularly completed Appell-Lerch sum [2]. The left-moving fermionic zero modes have been lifted by the R-charge twist. Thus, we can concentrate on the integration over the bosonic zero modes as well as the right-moving fermionic zero modes, with measure dρdθdψ ρ dψ θ . (2.7) The square root of the determinant in the diffeomorphism invariant measures has canceled between the bosons and the fermions. The relevant action is the N = (1, 1) supersymmetric extension of the non-linear sigma-model on the curved target space. 1 The term in the action that lifts the right moving fermion zero modes is [17] and more specifically, the term proportional to the Christoffel connection symbols 1 See e.g. formula (12.3.27) in [17].
This leads to a term in the action equal to We can descend this term once from the exponential in order to absorb the right-moving zero modes and obtain a non-zero result. We wish to introduce a twist in the worldsheet time direction for the target space angular direction θ because we insert a R-charge twist operator in the elliptic genus, and the field θ is charged under the R-symmetry [6,[8][9][10]. We thus must twist and we still have θ(σ 1 + 2π) = θ(σ 1 ). Since we study the infinite cover of the Z k orbifold of the trumpet, there are no winding sectors. We thus obtain the classical configuration We plug this classical solution (2.12) into the action for the infinite order orbifold of the trumpet, and descend a single insertion of (2.10) to lift the right-moving zero mode, use the Christoffel connection (2.9) and then find the zero mode integral We have represented the integral over the variable θ by a factor of 2πN ∞ where we think of N ∞ as the order of the cover, which goes to infinity. Putting this together with the oscillator factor (2.6) we proposed previously, we find (2.14) This precisely agrees with the elliptic genus of the infinite cover of the orbifolded trumpet calculated in [8]. 2
The Lattice Sum
Our next step is the path integral incarnation of the procedure of the derivation of the lattice sum formula in [8]. We undo the infinite order orbifold of the cigar, i.e. we undo the infinite order cover of the orbifolded trumpet. This will reproduce the lattice sum elliptic genus formula.
which lead to the classical solutions We then have the classical contribution to the action where λ = m − wτ . After tracking normalization factors, one finds that the action acquires another overall factor of 4πτ 2 /k (see e.g. [27]). The second effect we must take into account is that the left-moving R-charge corresponds to the left-moving momentum of the angle field. When we introduce a winding number w, we must properly take into account the contribution of the winding number to the left-moving momentum. This amounts to adding a factor of e −2πiαw/k to a contribution arising from winding number w. (Recall that the radius is R 2 /α ′ = 1/k.) We rewrite which leads to a total contribution to the exponent equal to − π kτ 2 (|λ| 2 + α(λ +λ) + α 2 + α(−λ +λ)) = − π kτ 2 (|λ| 2 + 2αλ + α 2 ) . The denominator in the final expression is obtained from a factor (λ + α)(λ + α) in the denominator that arises from the exponent (2.17) in the generalized zero mode integral (2.13) on the one hand, and a factor ofλ + α in the numerator from the z-derivative of the angular variable θ on the other hand (arising from the zero mode lifting term (2.10)). Multiplying these, we find the final formula which is the compact lattice sum form [8] of the cigar elliptic genus. We have given a direct derivation of the lattice sum form, using the non-linear sigma model description. This concludes the first panel of our triptych.
Supersymmetric Quantum Mechanics on a Half Line
In this section, we wish to render the fact that the non-holomorphic term in non-compact elliptic genera arises from a contribution due to the continuum of the right-moving supersymmetric quantum mechanics [6] even more manifest. For that purpose, we discuss to what extent the right-moving supersymmetric quantum mechanics can be regularized in a supersymmetric invariant way, or a modular covariant manner, but not both. That fact leads to the holomorphic anomaly [6]. The plan of this section is to first review how boundary conditions in ordinary quantum mechanics show up in its path integral formulation. We then extend this insight to supersymmetric quantum mechanics. We illustrate the essence of the phenomenon in the simplest of systems. We end with a discussion of how the regulator of the non-compact elliptic genus cannot be both modular and supersymmetric, which leads to an anomaly.
Quantum Mechanics on a Half Line
We are used to path integrals that map spaces with boundaries into closed manifolds. Less frequently, we are confronted with path integrals from closed spaces to spaces with boundaries.
It is the latter case that we study in the following in the very simple setting of quantum mechanics.
In particular, we discuss quantum mechanics on a half line, its path integral formulation, and pay particular attention to the path integral incarnation of the boundary conditions. The easiest way to proceed will be to relate the problem to quantum mechanics on the whole real line. What follows is a review of the results derived in e.g. [18][19][20], albeit from an original perspective.
Quantum Mechanics on the Line
Firstly, we rapidly review quantum mechanics on the real line. We work with a Hilbert space which consists of quadratically integrable functions on the line parameterized by a coordinate x. We have a Hamiltonian operator H of the form where V (x) is a potential. We can define a Feynman amplitude to go from an initial position x i to a final position x f in time t through the path integral where the action is equal to The Schrödinger equation for the wave-function of the particle reads and we work with normalized wave-functions Ψ. We can also write the amplitude in terms of an integral over energy eigenstates Ψ E : 5) and the amplitude satisfies the δ-function completeness relation at t = 0, as well as the Schrödinger equation (3.4) in the initial and final position variables x i and x f .
Quantum Mechanics on the Half Line
The subtleties of quantum mechanics on the open real half line x ≥ 0 have been understood for a long time [21]. Boundary conditions compatible with unitarity have been classified. The path integral formulation for quantum mechanics on the half line has resurfaced several times over the last decades [18][19][20], and is also well-understood. We review what is known. The half-line has a boundary, and we must have that the probability current vanishes at the boundary. This is guaranteed by the Robin boundary conditions When the constant c is zero, we have a Neumann boundary condition and when it is infinite, the boundary condition is in effect Dirichlet, Ψ(0) = 0. Suppose we are given a Hamiltonian H of the form (3.1) with a potential V (x) on the half line x > 0. We can extend the quantum mechanics on the half line to the whole real line by extending the potential in an even fashion, It is important to note that this constraint leaves the potential to take any value at the origin x = 0. We can then think of the quantum mechanics on the half line as a folded version of the quantum mechanics on the real line. 3 The even quantum mechanics that we constructed on the real line has a global symmetry group Z 2 . We can divide the quantum mechanics problem on the real line, including its Hilbert space, by the Z 2 operation, and find a well-defined quantum mechanics problem on the half line, which is the original problem we wished to discuss. An advantage of this way of thinking is that the measure for quantum mechanics on the whole line is canonical. It leads to the Green's function (3.5). Since the quantum mechanics that we constructed has a global Z 2 symmetry, we can classify eigenfunctions in terms of the representation they form under the Z 2 symmetry, namely, we can classify them into even and odd eigenfunctions of the Hamiltonian. We then obtain the whole line Green's function in the form that separates the even and odd energy eigenfunction contributions The Green's function is well-defined on the half-line and satisfies Dirichlet boundary conditions. We divide by a factor of two since we are projecting onto Z 2 invariant states. From the path integral perspective, the subtraction corresponds to a difference over paths that go from x i to x f and that go from x i to −x f , on the whole real line, with the canonical measure (divided by two). This prescription generates a measure on the half line which avoids the origin, since we subtract all paths that cross to the other side [18,19]. 4 If we represent the Z 2 action oppositely on the odd wave-functions, we arrive at the Green's function that satisfies Neumann boundary conditions: In this second option, we add paths to the final positions x f and −x f with their whole line weights (divided by two). This path integral represents a sum over paths that reflect an even or an odd number of times off the origin x = 0, and in particular, allows the particle to reach the end of the half line.
We clearly see that the naive folding operation projects the states of the quantum mechanics onto those states that are even, or those that are odd. 5 However, concentrating on these two possibilities only fails to fully exploit the loop hole that the even potential V (x) allows, which is an arbitrary value V (0) at the fixed point x = 0 of the folding operation. 6 We can make use of this freedom by taking as the total potential an even potential V (x), zero at x = 0, complemented with a δ-function: We take the wave-function on the whole line to be even and continuous, with a discontinuous first derivative at the origin. When we consider the one-sided derivative at zero, we find that the wave-function satisfies the Robin boundary condition [19] We have gone from a purely even continuous and differentiable wave-function on the real line that satisfies the Neumann boundary condition (at c = 0) to an even wave-function that satisfies mixed Robin boundary conditions, by influencing the wave-function near zero with a delta-function interaction. 7 It is intuitively clear, and argued in detail in [19] that it is harder to push an initial problem with Dirichlet boundary conditions at the origin towards a mixed boundary condition problem. In order to achieve this, one needs a very deep well [19]. For later purposes, we note in particular that an ordinary delta-function insertion at the origin will not influence an initial Dirichlet boundary value problem.
As an intuitive picture, we can imagine that the delta-function is generated by possible extra degrees of freedom that are localized at the origin, and whose interaction with the quantum mechanical degree of freedom we concentrate on induces the delta-function potential localized at the origin.
Thus far, we briefly reviewed the results of [18,19] on path integrals on the half line and discussed how they are consistent with folding. Next, we render these techniques compatible with supersymmetry.
Supersymmetric Quantum Mechanics on the Half Line
In this section, we extend our perspective on quantum mechanics on the half line to a quantum mechanical model with supersymmetry. We again start from a quantum mechanics on the whole of the real line, with extra fermionic degrees of freedom and supersymmetry. In a 5 These are states in the untwisted sector of an orbifold, projected onto invariants under the gauged discrete symmetry. 6 In string theory orbifolds, the fixed point hosts extra degrees of freedom which in that case are very strongly constrained by consistency. 7 The even wave-function on the side x > 0 corresponds to the linear combination Ψ(x) ∝ (Φ E,e (x) + c Φ E,o (x)) in terms of even and odd solutions to the problem on the real line without the delta-function interaction [19]. It is an invariant under the Z 2 action with discontinuous derivative at the origin. second stage, we fold the quantum mechanics onto the half line in a manner consistent with supersymmetry.
Supersymmetric Quantum Mechanics on the Line
We discuss the supersymmetric system with Euclidean action (see e.g. [22]) where W ′ (x) = ∂ x W (x). The action permits two supersymmetries with infinitesimal variations (3.14) We introduced the operator p = −i∂ x (3.15) and can represent the supercharges by When we trace over the fermionic degrees of freedom, we need to compute the fermionic determinant with anti-periodic boundary conditions. It evaluates to [22] Z anti−per after regularization. This is the path integral counterpart to the calculation of the Hamiltonians (3.14).
Supersymmetric Quantum Mechanics on the Half Line
We study the supersymmetric quantum mechanics on the half line by folding the supersymmetric quantum mechanics on the whole line. We wish for the folding Z 2 symmetry to preserve supersymmetry. Since the particle position x is odd under the Z 2 action (as is its derivative with respect to time, since we choose world line time to be invariant), we demand that the superpotential W (x) is odd under parity, and that the fermionic variables ψ and ψ * are odd as well. See equation (3.13). Thus, we have the Z 2 action (x, ψ, ψ * ) → (−x, −ψ, −ψ * ) , (3.18) and the superpotential W is odd. For the moment, we consider the superpotential to be continuous, and therefore zero at zero. We project onto states invariant under the Z 2 action (3.18). Thus, in any path integral, we will insert a projection operator P Z 2 that consists of where P is the parity operator that maps P : x → −x and (−1) F maps fermions to minus themselves. When we trace over the fermionic degrees of freedom with a (−1) F insertion, we must impose periodic boundary conditions on the fermions. The fermionic determinant in this case evaluates to [22] Z per which leads to the same Hamiltonians (3.14) for the two component system, and when we compare to equation (3.17) we find a minus sign up front in the path integral over the second component. As a consequence, for the first component of the two component system, from the insertion of the projection operator P Z 2 in equation (3.19), we will obtain a path integral measure while for the second component, we obtain a path integral measure Thus, from the discussion in subsection 3.1, the upper component, which we will call fermionic and indicate with a minus sign, will satisfy a Neumannn boundary condition at zero, while the bosonic component will satisfy the Dirichlet boundary condition. We carefully crafted our setup to be consistent with supersymmetry, and must therefore expect the boundary conditions we obtain to be consistent with supersymmetry as well. Indeed, the operator Q maps the derivative of the fermionic wave-function to the bosonic wave-function (when evaluated at the boundary, and using W (0) = 0). Thus, the operator Q maps the boundary conditions into one another. 9 The next case we wish to study is when the superpotential is well-defined on the halfline for x > 0, and approximates a non-zero constant as we tend towards x = 0. Since the superpotential is odd on the line, the distributional derivative of the superpotential will be a delta-function with coefficient twice the limit of the superpotential as it tends towards zero. If we call the latter value W 0 , then we have the equation (3.23) The derivative of the superpotential arises as a term in the component Hamiltonians (3.14). The δ-function interaction at the origin will result in a change in the Neumann (but not the Dirichlet) boundary conditions, as we saw in subsection 3.1. If we follow through the consequences, we find that the supersymmetric quantum mechanics on the half line that we obtain by folding now satisfies the boundary conditions These boundary conditions are consistent with supersymmetry.
An Interval
We have used the folding technique to obtain a supersymmetric or ordinary quantum mechanics problem on a half line. We can use the same technique to generate quantum mechanics problems on an interval. We perform a second folding by the reflection symmetry x → 2L − x where L is the length of the desired interval. The fermions also transform with a minus sign under the second Z 2 generator. Again, we can render the superpotential odd under the second flip, take into account a possible delta-function potential on the second end of the interval, and find boundary conditions consistent with supersymmetry on both ends. Our application of these ideas lies in regulating a weighted trace, and we proceed immediately to apply them in that particular context.
Infrared Regulators and the Weighted Trace
We wish to discuss the trace Z(β) = T r(−1) F e −βH (3.25) over the Hilbert space of states, weighted with a sign (−1) F corresponding to their fermion number F . It is well-known that this weighted trace is equal to the supersymmetric (Witten) index when the spectrum of the supersymmetric quantum mechanics is discrete [23]. It then reduces to the index which equals the number of bosonic minus the number of fermionic ground states. 10 When the spectrum of the supersymmetric quantum mechanics is continuous, the situation is considerably more complicated (see e.g. [11,24,25]), and the debate in the literature on this quantity may not have culminated in a clear pedagogical summary. We attempt to improve the state of affairs in this subsection. The origin of the difficulties is that the trace over a continuum of states is an ill-defined concept. An infinite set of states contributing a finite amount gives rise to a divergent sum. A proper definition requires a regulator. An infrared regulator will reduce the continuum to a discretuum and render the trace finite. The alternating sum can remain finite in the limit where we remove the regulator. There has been a discussion on whether and how the resulting weighted trace Z(β) depends on the inverse temperature β, and on the infrared regulator. To understand the main issues at stake, and to draw firm conclusions, it is sufficient to consider the example of a free supersymmetric particle on the half line.
The Free Supersymmetric Particle on the Half Line
Let us consider a supersymmetric quantum mechanics, based on the superpotential which is equal to a constant for x > 0, namely W (x > 0) = W 0 . We obtain the half line supersymmetric quantum mechanics by folding the problem on the whole line, and induce supersymmetric boundary conditions at the end of the half line. We recall the Hamiltonians with boundary conditions We can then solve for the wave-functions on the half line. The solutions for energy E = p 2 +W 2 0 are given by reflecting waves. The phase shift is set by the boundary condition. We have the wave-functions on the half line x ≥ 0 We find that the supercharge Q maps the wave-function Ψ − into Ψ + if we identify c − (p + iW 0 ) = c + . Thus, we have computed the space of eigenfunctions for bosons and fermions and how they are related.
The Weighted Trace
Our intermediate goal is to evaluate the weighted trace Z(β) in this model. To evaluate the trace, we need an infrared regulator. Moreover, the weighted trace depends on the infrared regulator, as we will demonstrate. In any case, we need to introduce an infrared regulator to make the trace well-defined. We cut off the space at large x = x IR . We need to impose boundary conditions at this second end, at x IR . As a result, the spectrum becomes discrete, and we will be able to perform the trace over states weighted by the corresponding fermion number. We consider two regulators in detail.
In a first regularization, we construct the supersymmetric quantum mechanics on the interval as we described previously. The result will be a Hamiltonian 29) and boundary conditions The reason that the boundary condition on both sides is the same despite the sign flip in the δ function coefficient in (3.29) is because we are evaluating either the derivative with a left or a right approach to the singular point. Because the Z 2 × Z 2 folding procedures commute with supersymmetry, the infrared regulated model preserves supersymmetry. Explicitly, we have a spectrum determined by the infrared boundary condition where n is an integer. All states are two-fold degenerate. The state with the lowest energy has energy equal to E = W 2 0 . The weighted trace reduces to a supersymmetric index and the Witten index is equal to zero.
A second regularization of the weighted trace proceeds as follows. We rather put Dirichlet boundary conditions at the infrared cut-off x IR for both component wave-functions. We can intuitively argue that we expect a normalizable wave-function to drop off at infinity, and that the Dirichlet boundary condition is a good approximation to this expectation. It has the added advantage of not introducing extra degrees of freedom at the end point which we imagine to be responsible for a delta-function potential. The disadvantage is that this infrared regulator breaks supersymmetry. The regulated weighted trace will now sum over bosonic and fermionic states determined by the respective conditions (see (3.28)) We define the phase shift of the fermionic wave-function. Then the solutions to the bosonic and fermionic boundary conditions are As the infrared cut-off is taken larger, the number of states per small dp interval will grow, to finally reach the continuum we started out with. To measure this growth, we can compute the bosonic and fermionic densities of states Thus, when we approximate the weighted trace at large infrared cut-off by the appropriate integral formula, we find [11] T r(−1) where the difference of densities of states is given by This second way of regularizing shows that the boundary condition we impose at the infrared end of our interval is crucial in determining the end result. When we put, as we did in the first case, a boundary condition consistent with supersymmetry, then the difference of spectral densities is zero for all values of the cut-off, and therefore also in the limit of infinite cut-off. When we put identical boundary conditions for fermions and bosons at the infrared endpoint, then the spectral densities differ by the phase shift in the continuum problem. It should now be clear that one can choose another mix of boundary conditions that will lead to yet another outcome for the spectral measure. Before a choice of regulator, the weighted trace is ill-defined. The final result depends on the regulator choice, even after we remove the regulator. We have illustrated this effect in two cases, but there is an infinite number of choices, and the β-dependence of the final result Z(β) is determined by the choice of regulator. We should rather think of the weighted trace Z(β, regulator) as a function of both the inverse temperature β and the regulator.
The first regulator is interesting, since it preserves supersymmetry. The second regulator, with identical boundary conditions for bosons and fermions is also interesting, it turns out. Although we computed the spectral density in our particular model of the free particle on a half line, the final result is universal in an appropriate sense. The relative phase shift of bosons and fermions at large x IR is determined by the asymptotic form of the supercharge Q alone. This can be seen from the fact that the fermionic wave function in the infrared is determined by the bosonic wave function in the infrared and the asymptotic supercharge. Thus, only the asymptotic value of the superpotential lim x→∞ W (x) = W 0 , which we assume to be constant, will enter the phase shift and spectral density formula [11]. Thus, the result for the β-dependent weighted trace is universal, given the regularization procedure. Both the universality and the caveat are crucial.
The final result for our free particle on the half line with Dirichlet infrared regulator becomes [11] Z(β, Dirichlet) = ∞ 0 dp 1 2π
Conclusion
Of course, we recuperated the standard wisdom that any supersymmetric regulator makes the weighted trace into a supersymmetric Witten index which is β-independent. However, another choice of infrared regulator can give rise to a β-dependent weighted trace, and the β-dependence is dictated by the regulator. It is quite striking that there are applications of supersymmetric quantum mechanics on a half line in which the infrared regulator is dictated by another symmetry of an overarching, higher dimensional model. In such circumstances, the weighted trace and its β-dependence become well-defined and useful concepts.
The Application to the Elliptic Genus
In the calculation of the cigar elliptic genus (2.1), there is a weighted trace over the rightmoving supersymmetric quantum mechanics. For each sector labeled by the right-moving momentumm on the asymptotic circle of the cigar, there is a supersymmetric quantum mechanics with superpotential W that asymptotes to W 0 =m [12]. The point is now that, as we saw, each of the right-moving supersymmetric quantum mechanics labeled by the right-moving momentum can be cut-off supersymmetrically using a δ-function potential with coefficient depending on the right-moving momentumm. The resulting elliptic genus would be equal to the mock modular Appell-Lerch sum. The cut-off depending on the right-moving momentum is not modular covariant though. The right-moving momentum is a combination of a winding number of torus maps, and the Poisson dual of the other winding number of torus maps, and as a result does not transform modular covariantly. The second alternative (and the one generically preferred in the context of a two-dimensional theory of gravity in which we wish to preserve large diffeomorphisms as a symmetry group) is to have a Dirichlet cut-off for all these supersymmetric quantum mechanics labeled by the right-moving momentum. This choice is covariant under modular transformations, but is not supersymmetric, as we have shown. The result of the second regularization is a modular completion of the mock modular form. We have thus shown that an anomaly arises in the combination of right-moving supersymmetry and modular covariance.
Our analysis of supersymmetric quantum mechanics is interesting in itself. It also provides the technical details of the reasoning in [6,10], and thus produces a second panel in our elliptic triptych. Moreover, our technical tinkering paints the background to continuum contributions to indices, or rather their continuous counterparts in two-dimensional theories [28] as well as in four-dimensional theories with eight supercharges [29,30]. In particular, it clarifies both the regulator dependence as well as the universality of the results on weighted traces in the presence of supersymmetry and a continuum.
A Flat Space Limit Conformal Field Theory
In [26], we studied the infinite level limit of the cigar elliptic genus. In this limit, the target space is flattened. One is tempted to interpret the resulting conformal field theory as a flat space supersymmetric conformal field theory at central charge c = 3. Still, the theory has features that distinguish it from a mundane flat space theory. In this third panel, we add remarks to the discussion provided in [26], to which we also refer for further context.
Flat Space Regulated
Firstly, we consider a flat space conformal field theory on R 2 , with two free bosonic scalar fields, and two free Majorana fermions, for a total central charge of c = 3, and with N = (2, 2) supersymmetry. We consider the Ramond-Ramond sector of the left-and right-moving fermions.
The ordinary bosonic partition function is divergent. There is an overall volume factor arising from the integral over bosonic zero modes which makes the partition function illdefined. We can regulate the divergence in various ways. One regulator would be to compactify the target space on a torus of volume V , and then take the radii of the torus to infinity. The result is that the partition function approximates (see e.g. [27]) where V /α ′ represents the volume divergence. Alternatively, we can compute the partition function through zeta-function regularization and the first Kronecker limit formula. See e.g. [31]. The result is identical. If we regulate the bosons in this manner, and leave the finite fermionic partition function unaltered, both the right-moving fermions and the left-moving fermions will provide a zero mode in the Ramond-Ramond sector partition sum. Thus, we will find that the regulated supersymmetric Witten index is zero for all finite values of the volume regulator V . The limit of the supersymmetric index will be zero under these circumstances. A different way of regularizing is to twist the phase of the complex boson Z = X 1 + iX 2 . In the path integral calculation of the complex boson partition function, this is implemented in a modular covariant way by demanding that the field configurations we integrate over pick up a phase as we go around a cycle of the torus. The phase is a character of the Z 2 homotopy group of the torus. If we parameterize the phases by e 2πium+2πivw (for winding numbers m, w on the two cycles of the torus), the result can be obtained either as the Ray-Singer analytic torsion [32] (to the power minus two) or by using the second Kronecker limit formula. The modular invariant result is where β = u − vτ is the complexified twist. Near zero twist, there is a second order divergence that is proportional to |β| −2 |η| −4 in accord with equation (4.1). The twist regulator breaks the translation invariance in space-time and preserves the rotational invariance. In fact, it uses the rotation invariance to twist the angular direction and to remove all bosonic zero modes.
(The idea is generic in that one can use twists by global symmetries to lift divergences in numerous contexts.) If we leave the fermions undisturbed, we again have the fermionic zero modes that give rise to a zero elliptic genus for the full conformal field theory.
The twist regulator suggests an interesting alternative. We can twist the bosons and preserve world sheet supersymmetry at the same time. The (tangent indexed) fermions naturally transform under the SO(2) rotating the two space-time directions, and if we twist with respect to the complete action of the space-time rotations, we twist the fermions as well. In that case, we find a partition function that equals one The two fermionic zero modes have canceled the quadratic volume divergence. The supersymmetric partition function (or Witten index) is now equal to one for all values of the twist, and therefore equals one in the limit where we remove the twist. Again, as in section 3, we see that the final result is regulator dependent (as is infinity times zero). We have two regulators that preserve world sheet supersymmetry as well modular invariance, and they give rise to index equal to zero, or to one.
Twist Two
We analyze how the above remarks influence our reading of the infinite level limit of the cigar elliptic genus [26]. First off, we further twist the left-moving fermions (only) by their left-moving R-charge, and wind up with the modular invariant flat space partition sum This chiral partition function suffers from a chiral anomaly. We have again decided (for now) on a modular invariant choice of phase. The regulating twist β has canceled the right-moving zero mode against the anti-holomorphic pole due to the infinite volume. The left-moving R-charge twist α (when non-equivalent to zero) has reintroduced the holomorphic pole in β, also associated to the divergent volume. When we take the limit β → 0, we therefore again find an infinite result. Once more, there are various ways to regularize the expression. One straightforward way to obtain the result in [26] is to perform a modular covariant minimal subtraction. We expand the expression (4.4) near β = 0, and subtract the pole. Given the dictum of a modular covariant transformation rule for the constant term (e.g. the desired modular covariant transformation rule for the elliptic genus [33]) one then obtains the result [26] Z ms,cov = − 1 2π The cigar elliptic genus manages to regulate the pole at β = 0 in a more subtle manner than the covariant minimal subtraction advocated above [6]. It goes as follows. One introduces an extra circle. Then, one couples the circle to the angular direction of the plane (or the cigar), and gauges a U(1) such as to identify the two circular directions. The net effect on the toroidal partition function is to incorporate the twist β into a modular covariant holonomy integral. The integral over the angle of the twist kills the divergent holomorphic pole, and renders the final result finite. The result is identical to the one obtained by covariant minimal subtraction (see [26] for the detailed derivation of this statement).
A Miniature
Finally, we wish to assemble a miniature triptych. Firstly, we revisit the path integral approach of section 2 and apply it to flat space. We T-dualize flat space, consider the infinite covering, and find instead of the zero mode factor (2.13) where we have introduced an infrared cut-off R on the radial integral. Thus, we find for the infinite cover of the T-dual of flat space the infrared regulated elliptic genus For flat space then, we find the same lattice sum (see equation (2.20)) as for the cigar elliptic genus, with the level k replaced by the infrared cut-off R 2 . Our second panel, in section 3, makes it manifest that we have implicitly used the same boundary conditions for bosons and fermions, since we considered a single measure, a hard infrared cut-off R, and no delta-function insertion. Hence we find the anti-holomorphicτ dependence in our result (4.7). Furthermore, our discussion in this section agrees with the fact that if we take the limit R → ∞ term by term, neglecting the exponential factor in (4.7), then we find a divergent result. Indeed, the lattice sum will be divergent.
Finally, we note that (at R = ∞) the genus can be regulated in the manner of the Weierstrass ζ-function (which is the regulated lattice sum of 1/α). If we take that ad hoc route, the result can be made holomorphic and non-modular, and equal to only the first term in (4.5), using the formula ζ(α, τ ) − G 2 (τ )α = ∂ α θ 1 (α, τ ) θ 1 (α, τ ) , (4.8) where G 2 is the second Eisenstein series (and multiplying in the prefactor θ 1 (α, τ )/η 3 )). On the other hand, if we infrared regulate with a radial cut-off as in (4.7), or using the cigar model in the large level limit, we obtain the modular covariant, non-holomorphic result (4.5) which equals the exponentially regulated Eisenstein series as proven in [8,26]. This final miniature illustrates how our conceptual triptych folds together seamlessly.
Conclusion
Our aim in this paper was to further explain conceptual features of completed mock modular non-compact elliptic genera [6] with elementary means. Using the supersymmetric cigar conformal field theory as an example, we provided a simple path integral derivation of the lattice sum formula [8] for the completed mock modular form. We derived the elliptic genus from the non-linear sigma-model 11 . We also laid bare the unresolvable tension between right-moving supersymmetry and modular covariance in defining the weighted trace with an infrared regulator, and we analyzed the quirks of the identification of the large level limit of the cigar model [26] with a flat space conformal field theory. We believe these conceptual pointers provide a looking glass with which to revisit higher dimensional elliptic genera, including the K3, the ALE [36] and the higher dimensional linear dilaton space genera [37]. The ubiquitous possibility to factor the appropriate powers of θ 1 /η 3 bodes well for this enterprise. For four-dimensional examples, for instance, we expect the doubling of the number of right-moving zero modes to be correlated to an elliptic Weierstrass ℘ factor in the result, et cetera. It will be interesting to study these generalizations. | 9,933 | 2017-06-08T00:00:00.000 | [
"Physics",
"Mathematics"
] |
Robust Nonlinear Control Based on Disturbance Observer for a Small-Scale Unmanned Helicopter
A robust nonlinear controller based on disturbance observer for the trajectory tracking control of a smallscale unmanned helicopter with nonlinear structure under external disturbances and parameter uncertainties is designed. The control objective is to let the helicopter track a predefined trajectory. The proposed robust nonlinear controller is based on the backstepping sliding mode control technique which combines both the capabilities of backstepping control and sliding mode control. The control performance developed based on a time-varying disturbance observer. In order to obtain an efficient control law design, the nonlinear model of the helicopter is reformulated as an affine nonlinear system. The mathematical proof using Lyapunov stability theorem shows that the closed loop system is asymptotically stable in the presence of this controller. To verify the robustness and stability of the proposed controller, it is compared with conventional sliding mode controller. The chattering phenomenon is attenuated significantly and the tracking error is also alleviated. The simulation results confirm the desirable performance of proposed robust nonlinear controller.
Introduction
In the last years, small-scale unmanned helicopters with the capabilities of vertical taking-off and landing, hovering, low-altitude cruise, and low-velocity flight have attracted more attention.These features make them suitable in military and civilian applications.However control of small-scale unmanned helicopter is a challenging concept in both the theory and experimental implementation because of its strongly nonlinear structure, underactuated nature, strong coupling, and uncertainties caused by parameter uncertainties and external disturbances.Small-scale unmanned helicopters control methods consist of two categories of control methods: linear methods and nonlinear methods [1].Linear control methods are developed based on linear models, including PID [2] and ∞ [3].Although these linear methods are uncomplicated and reliable, they are only effective http://www.ispacs.com/journals/jnaa/2017/jnaa-00390/International Scientific Publications and Consulting Services when the system states are near the equilibrium points.Therefore, in order to overcome this major defect, many nonlinear techniques have been developed, such as feedback linearization [4], adaptive control [5], backstepping technique [6], sliding mode control [7][8].Among these methods, backstepping as a recursive technique based on Lyapunov stability analysis is effective for underactuated systems (e.g.unmanned helicopters).Variable structure control with sliding mode or as it known, sliding mode control (SMC) is a nonlinear control method that is well known for its robust characteristics.However, the conventional SMC has some disadvantages such as chattering phenomenon which would affect helicopter control efforts.In this paper a novel robust nonlinear control method based on disturbance observer is proposed for the small-scale unmanned helicopter to make the helicopter position track the desired reference trajectory in the presence of disturbance.Firstly, the helicopter model is reformulated as an affine nonlinear equation system, and then the affine nonlinear model is applied to design the controller.Then a disturbance observer is designed to estimate bounded time-varying disturbance.Using a backstepping sliding mode control technique, a robust and stable controller which considers nonlinear structure of system and all uncertainties is designed, this aids in controlling the output and tracking a given trajectory.Proposed controller stability analysis based on Lyapunov's direct stability theorem is described and proved.Moreover, the undesirable chattering phenomenon which leads to high wear of mechanical parts removed in this method.The paper is organized as follows: In section 2, the dynamical model of the small-scale unmanned helicopter is derived.Backstepping sliding mode control with disturbance observer designed in section 3. Simulation results are given in section 4 to illustrate the robust performance of the proposed controller.Finally, conclusions are drawn in section 5.
Small-Scale Unmanned Helicopter Model
The helicopter model is determined in two reference frames: the earth reference frame ℇ = { , , , } and the body fixed reference frame ℬ = { , , , }, the definitions of which are in accordance with [1]. .Skew symmetric, rotation and inertia matrix of helicopter model are described below.In which and are shorts for (.) and (.), respectively.Without loss of generality, we omit disturbance, , in continue until the simulation.
External force and torque that are the results of the main and tail rotors are detailed below, with respect to the Table 1 Collective pitches of main and tail rotors
Controller Design
In this section the concept of proposed robust nonlinear controller based on disturbance observer will be described, and applied to the small-scale unmanned helicopter's position control.
Small-Scale Unmanned Helicopter Description
In this paper, we consider small-scale unmanned helicopter model as a second order affine nonlinear system [11] And () is time-varying external disturbance.
Disturbance Observer Design
For the system (3.7), a time-varying disturbance observer designed as follow Where ̂ is the estimation of , ̂ is the estimation of ̇, and 1 and 2 are positive constants.Select the Lyapunov function as Where ̃= − ̂, and ̃= ̇− ̂.
Using the designed disturbance observer, we can estimate the disturbance , and the estimation of will be applied in the feedback control law.
Backstepping Sliding Mode Control
Let the desired output be and considering as the tracking error = − (3.15) Step Therefore, by defining = 1 + 2 + 3 , the closed loop control system using the proposed control law is stable by the concept of Lyapunov stability theorem.In order to eliminate the chattering phenomenon, the saturated function sat() is adopted instead of sgn() in (3.22).
Where is the boundary layer.The boundary layer near the sliding surface ensures that the system states remain on the sliding surface.
Simulation Results
The effectiveness of the proposed robust controller is confirmed by some simulations.The helicopter's parameters are introduced from [10].The helicopter is initially at {3, -2, 0, 0, 0, 0.5}.We choose 1 = 700, 2 = 400, the boundary layer = 0.10, and set the controller with 3 = 45 and 4 = 30.In order to check the flight control performance, we use a desired 8-shaped flight path, which is described below: if ≤ 10 sec Flight time length is 60 seconds, and the start point is considered to be an origin on the earth.For the ease of formulation, we omit the wind disturbances, but in simulation results, it is applied to system as below: In order to show the improvement due to the proposed robust nonlinear controller with disturbance observer, the simulation results of applying this method are compared with the related results of the conventional sliding mode controller.http://www.ispacs.com/journals/jnaa/2017/jnaa-00390/ International Scientific Publications and Consulting Services As it is seen in Figure 2, after applying the proposed robust nonlinear controller with disturbance observer, the helicopter reaches to the desired path and tracks it very well.By comparing of Figure 3 and Figure 4, it can be seen that tracking error in case of proposed controller is much less than tracking error in case of conventional SMC.Numerical indices for conventional sliding mode controller (CSMC), adaptive fuzzy sliding mode controller (AFSMC) [11], and proposed robust nonlinear controller with disturbance observer are compared in Table 2, that are defined in Table 3. Considering to Figure 6, it can be seen that after applying the conventional sliding mode controller to the helicopter, the intensive chattering appears in the control inputs while by considering Figure 5, the control inputs due to applying proposed robust nonlinear controller is free of chattering phenomenon.The existence of chattering phenomena can be an incitement for the dynamic modes of the helicopter.
Conclusion
A robust nonlinear backstepping sliding mode control approach with disturbance observer is proposed for trajectory tracking control of a small-scale unmanned helicopter in the presence of disturbance and by considering model uncertainties.First, the helicopter's dynamical model is rewritten as a second order affine nonlinear equation, and then it is used to design the controller.Then backstepping sliding mode control method as robust and stable controller based on disturbance observer presented.Stability of closed loop control system is proved using Lyapunov theory.The proposed controller's performance is compared with conventional sliding mode controller and simulation results verified the performance of the proposed disturbance observer based control method.An important property of the proposed controller is the chattering free control efforts.
Figure 1 :
Figure 1: Schematic of the small-scale unmanned helicopter
Figure 2 :
Figure 2: 3D trajectory tracking of proposed robust nonlinear controller with disturbance observer
Figure 3 :
Figure 3: Tracking errors of proposed robust nonlinear controller with disturbance observer
Figure 4 :
Figure 4: Tracking errors of conventional sliding mode controller
Figure 5 :Figure 6 :
Figure 5: Control inputs of proposed robust nonlinear controller with disturbance observer | 1,941.8 | 2017-01-01T00:00:00.000 | [
"Engineering",
"Mathematics"
] |
Aspects of nonlinear effect on black hole superradiance
: Under some conditions, light boson (cid:12)elds grow exponentially around a rotating black hole, called the superradiance instability. We discuss e(cid:11)ects of nonlinear interactions of the boson on the instability. In particular, we focus on the e(cid:11)ect of the particle production and show that the growth of the boson cloud may be saturated much before the black hole spin is extracted by the boson cloud, while the nonlinear interactions also induce the boson emission. For application, we revisit the superradiant instability of the standard model photon, axion and hidden photon.
Introduction
There may exist light scalar fields in theories beyond the standard model [1] and many ideas are proposed to find signatures of such light particles including terrestrial experiments and astrophysical observations. One of the ideas is to see the effects of light particles on the black hole physics. As we briefly review below, light bosons around a rotating black hole (or Kerr black hole) may experience a so-called superradiant instability and the boson cloud may be formed. It can significantly affect the evolution of the black hole through the extraction of its mass and spin by the boson cloud, which can be severely constrained by observations. Theoretical and phenomenological aspects of black hole superradiance are found in refs. .
A massive scalar field φ with its mass µ around a black hole satisfies the Klein-Gordon equation (1.1) -1 -
JHEP01(2020)128
Under the Kerr metric, the solution to this equation of the form φ ∝ e −iωt+imϕ is found, where m is a quantum number corresponds to angular momentum around the rotating axis and ϕ is the azimuthal angle. The frequency ω = ω R + iω I is given by for GM BH µ 1 [6], where G is the Newton constant, M BH is the black hole mass,ã ≡ a/(GM BH ) is the dimensionless spin parameter in the range 0 ≤ã ≤ 1 with a being the parameter appearing in the Kerr metric, which is related to the black hole angular momentum J BH through a = J BH /M BH , r + = GM BH + (GM BH ) 2 − a 2 represents the event horizon and γ n m is a numerical constant for the principal quantum number n and orbital angular momentum quantum number . We have also defined a dimensionless quantity α ≡ GM BH µ for convenience. 1 It is seen that if a is larger than the critical value a crit = 2r + GM BH µ/m+O(α 2 ), ω I is positive and the instability happens. This is called the superradiant instability. The growth rate is maximized for the mode (n, , m) = (0, 1, 1), a 1 and α 0.42: in this case we have ω I ∼ 10 −7 µ [8]. For this reason, the typical time scale of the superradiant instability 2 is taken to be ω −1 I . Note that ω I is a very steep function of the combination GM BH µ and hence the instability soon becomes inefficient for smaller GM BH µ. Numerically we have For astrophysical black holes M M BH 10 10 M , for example, the target scalar mass is 10 −11 eV µ 10 −21 eV.
The above analysis shows that if there exists a light scalar or vector boson, it experiences a superradiant instability around the near-extremal Kerr black hole and the boson 1 The factor (ãm − 2µr+) in eq. (1.3) can be rewritten as 2r+(mΩH − µ) by using the angular velocity of black hole event horizon, ΩH = a/(r 2 + + a 2 ). 2 In reality, it roughly takes ln(φmax/µ) ∼ 100 times more for the boson cloud to be formed from the vacuum fluctuation and extract the angular momentum, where φmax is the maximal amplitude of the cloud. Thus we define the superradiance time scale as τSR = ln(φmax/µ) ω −1 I and use it in figures in the rest of this paper. cloud is formed. The instability continues until a significant fraction of black hole mass or spin is extracted by the boson cloud and a becomes smaller than a crit . Thus the measurement of the black hole spin can constrain the existence of such a light scalar or vector boson [19,20].
So far, it has been assumed that the boson is a free field, i.e., it has only a gravitational interaction. However, it is often the case that a boson has interactions with other fields. A representative model of a light scalar is the axion-like particle, whose potential often appears from some non-perturbative effect and looks like V (φ) ∼ µ 2 f 2 (1 − cos(φ/f )) with axion decay constant f . In this case, the axion has nonlinear self interactions. The vector boson also usually has gauge interactions with some other fields including charged matter or the Higgs boson. In general, if we neglect any nonlinear interaction, the total mass of the cloud when a substantial fraction of the black hole spin is extracted is M cloud ∼ãαM BH /m, which implies that the typical field amplitude in the boson cloud is where M Pl denotes the reduced Planck scale and V ∼ π(αµ) −3 is the effective volume of the boson cloud. This shows that the field amplitude is not very far from the Planck scale, Thus, it is reasonable to expect that nonlinear effects plays important roles before a significant fraction of the black hole spin is extracted. These nonlinear interactions can drastically modify phenomenological consequences of the superradiance. It would be a very complicated task to precisely solve the dynamics including the nonlinearity in general, but we can still obtain a reasonable estimate for the effect of the nonlinearity. In particular, we focus on the effect of the particle production on the superradiant instability and its phenomenological implications. The rest of the paper is organized as follows. In section 2, the rough picture of nonlinear effects on the superradiance is explained. In section 3 we consider some phenomenological implications of such nonlinear effects. The Schwinger pair production of the standard model photon around the primordial black hole and the particle creation by nonlinear self-interactions of axion-like particle, hidden photon and generically interacting scalar is discussed. Section 4 is devoted to conclusions and discussion.
Rough sketch
As explained above, if there is a bosonic particle, φ, and the mass is the same order as the horizon radius of a Kerr black hole, the rotational energy of the Kerr black hole is efficiently extracted by φ by the superradiant instability. Obtaining the rotational energy, the φ cloud emerges around the black hole. The amplitude of the cloud exponentially grows up. As a result, a substantial fraction of the black hole energy and angular momentum are transferred to the boson cloud.
However, as the boson cloud grows, the nonlinear interactions become important and may affect the superradiant exponential growth. The following points need to be taken into consideration in order to figure out how the nonlinearity affects the superradiance process.
2. Other particles interacting with φ may be produced.
3. The bound state spectrum between φ and the black hole may be changed.
In this paper, we mainly focus on the first two points. We may take the spectrum change, in particular the distortion on ω I , into account as the change of the effective mass, but in the following models we discuss, it turns out that the particle creation process becomes already effective before the effective mass significantly changes. Thus, in the following, we use the superradiance instability time scale in the linear regime to discuss the effect of the particle production on the superradiance. This assumption is consistent with the numerical simulations [12,17] The particle creation process leads to the energy leakage from the φ cloud surrounding the black hole. If the energy leakage rate by the particle creation becomes equivalent to the energy extraction rate by superradiance at a given amplitude, φ NL , above which the leakage rate is larger, the extracted energy can be considered to be dominantly converted into the created particle. Then, the cloud does not grow up and thus the amplitude of φ cannot be larger than φ NL . Once the exponential growth of the amplitude stops at φ NL , so does the energy/angular momentum extraction rate by the superradiant instability. In such cases, the energy/angular momentum loss rate of the black hole becomes saturated and constant in time. Compared with the free boson superradiant instability, where the loss rate exponentially grows up, the typical time scale needed for a substantial energy/angular momentum extraction becomes significantly lengthen.
Hence, in order to calculate the black hole spinning down time correctly, we need to estimate the particle creation rate in the boson cloud. In the next section, we derive the particle production rate for several models with nonlinear interactions. Before going into concrete setups, we below summarize some general aspects of the black hole evolution taking account of nonlinear effects.
Time evolution of black hole
Let us consider the system of a rotating black hole and the boson cloud surrounding it, which is formed by the superradiant instability. The mass and angular momentum of the rotating black hole are denoted by M BH and J BH and those of cloud consisting of light scalar/vector boson are denoted by M cloud and J cloud , respectively. The angular momentum of the could is given by J cloud = (m/µ)M cloud . The time evolution of the cloud is described byṀ whereṀ NL andJ NL represent the energy and angular momentum extraction rate due to nonlinear effects, respectively. 3 Similarly, the time evolution of the black hole is described whereṀ acc andJ acc represent the accretion from the surrounding matter, respectively. The total mass and angular momentum of the black hole-cloud system are: M tot = M BH +M cloud and J tot = J BH + J cloud . Their time evolution are thus governed bẏ The accretion rate depends on the environment around the black hole. However, there is an upper bound, the Eddington limit, at which the gravitational infall into the black hole and the radiation pressure from the falling matter is balanced. If this bound is saturated, the typical accretion time scale is which is independent of the black hole mass, where m p is the proton mass, σ T is the Thomson scattering cross section for the electron, is the radiative efficiency. For the near-extremal Kerr black hole, ∼ 0.3 [29]. If the Eddington limit is not saturated, the accretion time scale can be much longer.
First, let us suppose that there are no nonlinear effects and initially the superradiance is inefficient: GM BH µ 1. The total mass and angular momentum gradually increase due to the accretion and the superradiant instability becomes effective around the epoch GM BH µ ∼ O(1) for the lowest-excited mode with m = 1. The typical time scale of the superradiant instability, ω −1 I ∼ 10 3-7 µ −1 , is very short compared with the accretion time scale. Thus, the boson cloud forms rapidly, converting a significant fraction of black hole mass and spin into the cloud. Sincė 8) and the factor in the parenthesis is positive, the spin parameter a is decreasing through this process. The superradiant instability stops when the spin a becomes smaller than the critical value, a crit [3,19,20]. Typically, a black hole with dimensionless spin parameter a 1 loses its spin by O(1) fraction: ∆ã ∼ã. The mass of the cloud at this stage is M cloud ∼ µJ cloud ∼ µ∆J BH ∼ (∆ã)αM BH . After that, the mass and spin of black hole again increase due to the accretion and the superradiant instability becomes inefficient. Thus, there would appear forbidden region on the black hole Regge plane (M BH vs.ã), which can be compared with observations. It can give constraints on light scalar and vector bosons with mass range of 10 −20 eV-10 −11 eV [19,20].
The story drastically changes if one takes account of nonlinear effects. As discussed in the previous section, the growth of the boson cloud may first stop much before it extracts -5 -JHEP01(2020)128 the significant fraction of the black hole mass and spin. The decrease of the black hole mass and spin due to the superradiance at this point can be completely negligible, i.e., they are saturated at M cloud M BH and J cloud J BH , so that the observation may not directly constrain the existence of such light bosons. However, here the second effect may take an important role: the nonlinear interactions extract the boson cloud energy through the production of other particles or emission of high frequency modes, represented byṀ NL andJ NL . Therefore, if the accretion rate is smaller than the extraction rate, the system gradually loses mass and angular momentum. Although it may be much less efficient than the case without nonlinear interaction, it is constrained from observations in principle.
In the next section we show some concrete examples in which nonlinear effects play essential roles to discuss the phenomenological consequences of the superradiant instability.
Standard Model photon around primordial black hole
Here, we revisit the constraint on the primordial black hole (PBH) abundance from the superradiance of the standard model electromagnetic photon [16]. Since the photon obtains an effective mass in the ionized plasma, called the plasma frequency ω p , the superradiant instability may happen if there is a PBH with its mass satisfying GM BH ω p = α ( 0.5). 4 The plasma frequency is given by where X e is the ionized fraction of the hydrogen. For a given PBH mass M BH , the condition GM BH ω p = α is satisfied at the redshift Thus, the primordial black hole with M BH 0.2M may experience the superradiant instability before the recombination of the hydrogen: z 1100. If there are PBHs with mass of 10 −8 M M BH 0.2M , photon fields grow exponentially at the redshift 10 3 z 2 × 10 6 , and they can affect the cosmic microwave background (CMB) blackbody spectrum. Thus, PBH abundance with this mass range may be severely constrained. 5 In the following we mainly focus on the case of M BH 0.2M . Now, we include the effect of the nonlinearity, which may suppress the efficiency of the instability. We discuss the Schwinger pair production. The Schwinger effect is reviewed in 4 The initial spin of the PBHã may be typically percent level [30][31][32] if the PBH is formed at the radiationdominated era. In this case, the effect of superradiance around PBHs may be extremely small since a > acrit requires small GMBHµ that greatly suppresses ωI . If the PBH is formed at the matter-dominated era, on the other hand, the initial spin can be large [33]. Below we assume thatã is at least O(0.1). 5 The lifetime of the PBH through the Hawking radiation is τHR ∼ 3 × 10 72 sec (MBH/M ) 3 . For the PBH mass range of our interest, we can neglect the effect of Hawking radiation.
where z eq ∼ 3 × 10 3 is the redshift at the matter-radiation equality. The superradiant instability time scale is given by where we have introduced a factor fã that represents the efficiency of the superradiant instability, which is a steep function of the black hole spinã. For the ( , j) = (0, 1) mode, it takes fã ∼ 1 forã 1 and fã ∼ 10 3 forã 0.6 [26]. Thus, the instability time scale can be much shorter than the Hubble time scale for the PBH mass of our interest and the photon field grows rapidly. On the other hand, as explained in the appendix A, the photon energy density around the black hole is saturated at ρ max A ≡ A 2 max (πm 2 e /e) 2 due to the Schwinger effect, where A max ∼ 0.05. Thus, the time scale of losing O(1) fraction of the black hole spin is estimated as It can be much longer than the Hubble time scale. Still, however, the gradual energy extraction from the PBH happens. In one Hubble time, the fraction of energy extracted from one PBH is estimated as 6 for z M z eq , where we have substituted z = z M (3.2) assuming ω p GM BH = α. Therefore, for M BH 0.1M , we have f ext ãα and the energy extraction due to the superradiant instability is much less efficient than the estimate given in [16]. The extracted energy is liberated in the form of electron-positron pair and they are expected to be mildly relativistic. Thus, they affect the CMB spectrum through the so-called µ-or y-distortion. For the injection around 10 5 z 2 × 10 6 the distortion may be characterized by the µ parameter, which is given by µ 1.4δρ r /ρ r while for the injection around 10 3 z 10 5 it is characterized by the Compton y parameter, which is given by y = δρ r /(4ρ r ). The COBE FIRAS experiment puts upper bound on these parameters as µ < 9 × 10 −5 and y < 1.5 × 10 −5 [34]. In either case, the injection of the radiative energy δρ r /ρ r is severely 6 Since the plasma frequency changes by O(1) after one Hubble time and the instability time scale is a very steep function of ωp, we can approximate that the instability lasts for about one Hubble time during which ωpGMBH ∼ α. constrained. In the present case, we have where f PBH denotes the energy fraction of PBH in the total dark matter density. Therefore, for M BH 10 −3 M , the energy injection is too small to affect the CMB blackbody spectrum even if f PBH = 1. Only the mass range 10 −3 M M BH 0.2M can be constrained from the COBE FIRAS data. In future, the PIXIE experiment can reach the sensitivity µ ∼ 10 −8 and y ∼ 10 −9 [35] and hence they may be sensitive to the mass range 10 −5 M M BH 0.2M . Figure 1 summarizes the constraint on f PBH . We have taken (ã, fã) = (1, 1) in the left panel and (ã, fã) = (0.6, 10 3 ) in the right panel. The PBH abundance with this mass range is also constrained by Subaru HSC [36], MACHO [37], EROS [38] and OGLE [39] experiments at the level of f PBH 10 −3 -10 −1 . Thus the COBE FIRAS and PIXIE may give more stringent constraint, but one should notice that it crucially depends on the black hole spinã. If the typical size of the PBH spin parameter is O(0.01) or below, fã becomes extremely large and the CMB observation would not give a meaningful constraint.
Axion with cosine potential
Let us consider the axion-like particle φ with a potential where µ φ denotes the axion mass and f is the axion decay constant, which we assume to be smaller than the Planck scale: f M Pl . The axion mass range 10 −20 eV µ φ 10 −11 eV causes the superradiant instability for the astrophysical black holes with mass of 10 9 M M BH M . The early stage of the superradiant instability is the same as the free massive scalar. The typical time scale at this stage is Initially, the axion cloud exponentially develops, but the non-linearity becomes important when the axion field value becomes close to f . 7 The axion potential energy density is bounded as ρ φ < µ 2 φ f 2 and it implies that the total angular momenta of the axion cloud is bounded as J cloud /J BH 8πf 2 /(ãα 5 M 2 Pl ). Thus, for f α 5/2 M Pl , the axion cloud extracts only a tiny fraction of the mass and spin of the black hole within a superradiant time scale. However, the axion nonlinear self-interactions cause the emission of the axion particle. Due to this axion emission, the Kerr black hole gradually loses the mass and spin. The energy loss rate due to the axion emission rate in the massless approximation is estimated as (see appendix B for derivation) where t ret ≡ t − |x − x |, x is the infinite point in Ω direction and C is a numerical constant that is independent of the axion mass µ φ . We take C ∼ 10 −3 from numerical simulation [12]. Similarly the angular momentum extraction rate is roughlyJ NL ∼Ṁ NL /µ φ . 8 The spin loss time scale of the black hole is then given by (3.12) We have shown the time for the Kerr black hole to lose O(1) of the angular momentum in term of f in figure 2. As we have discussed, the exponential growth of the axion cloud efficiently extracts the angular momentum for f 10 17 GeV and the spin loss time is 7 The nonlinear effect on the axion superradiance due to the axion-photon coupling aF F was discussed in refs. [40][41][42]. Here we only focus on the non-linearity due to the axion self-interaction. 8 It had been pointed out that burst like phenomena called bosenova might happen repeatedly due to the attractive self interaction of the axion [10,12,17,18]. However, an improved numerical simulation is not so supportive as previous studies and the saturation of the axion field is seen [43]. Even if the bosenova happens, the estimation (3.11) can be used by modifying the constant C (∼ 10 −6 ). determined by just the superradiance time scale τ SR (see footnote 2). On the other hand, for much smaller f , the energy density of the cloud is saturated and the spin is decreased only linearly with time due to the axion emission. In this regime the spin loss time scale is mainly determined by τ NL . The efficiency may depend on C, but for some parameter regions, the spin loss rate by the particle emission is faster than the accretion time scale, eq. (2.7). Thus the observation of high spin black holes with these region may put a constraint on the corresponding axion mass even if the exponential growth ceases due to the nonlinearity. In particular, for 10 16 GeV f 10 17 GeV, the particle creation process time scale is as fast as the superradiance time scale for C ∼ 10 −3 . On the other hand, if the accretion time scale is shorter than the spin down time scale, the superradiance does not much affect the black hole evolution.
Hidden photon with Higgs mechanism
Next let us consider the black hole superradiance with the light hidden photon field. We assume that the hidden photon mass is generated by the Higgs mechanism. 9 The relevant Lagrangian is where Φ denotes the Higgs field, D µ Φ = ∂ µ Φ−igA µ Φ and g is the gauge coupling constant. The Higgs potential V (Φ) is arranged so that the Higgs field obtains a VEV |Φ| = v/ √ 2. By using the gauge U(1) degree of freedom, one can take the unitary gauge such that the Higgs field is expanded as Φ = (v + σ)/ √ 2 where σ is the radial fluctuation of the Higgs 9 The effect of Higgs interaction on the vector boson superradiance was briefly mentioned in ref. [23].
JHEP01(2020)128
and the Goldstone mode is gauged away. The Lagrangian is then given by (3.14) In the vacuum, σ = 0, the hidden photon has a mass of µ A = gv. In the limit of heavy σ, we can neglect the dynamics of σ and we are left with just a massive hidden photon theory, as also realized in the Stuckelberg mechanism. However, there are still nontrivial phenomenological effects by σ on the superradiant instability of the vector boson unless σ is infinitely heavy, as explained below. Since we are interested in the very light hidden photon with mass of µ A 10 −11 eV, we focus on the case of µ σ µ A where µ σ is the σ mass. On the other hand, assuming the perturbativity of the Higgs self coupling, we have an inequality µ σ v = µ A /g. Thus we need very small g for satisfying µ σ µ A . Let us suppose that µ A = gv satisfies the superradiant condition. Then, the vector field is amplified around the rotating black hole and σ gets an additional potential term of g 2 (v + σ) 2 A µ A µ /2 due to the finite density effect. If the physical Higgs mass is large enough, i.e., µ σ µ A , one can neglect the dynamics of σ and integrate out it. For concreteness, we take the Higgs potential as V (Φ) = λ(|Φ| 2 − v 2 /2) 2 . In this case, we have µ 2 σ = 2λv 2 . Taking account of the vector background, one can find the extrema of the effective potential at For X < X max , there are minima represented by the second solutions of (3.15) and σ tracks this temporal minimum of the potential. For X > X max these solutions disappear and σ = −v becomes a minimum, which means the symmetry restoration. In the following, we assume X < X max . The resulting effective "potential" of the vector field is obtained by substituting the second solution of (3.15) into the original potential and given by This nonlinear self-interactions of the vector boson has appeared after integrating out σ. This implies that there is an upper bound on the vector field X above which the backreaction to the Higgs field becomes important and the symmetry is restored. In a realistic setup, the superradiant instability is expected to effectively stops when the nonlinearity of the vector boson becomes important before the symmetry restoration happens, similarly to the case of axion. In any case, the upper bound is roughly estimated as X ∼ X max . Thus, the energy density of the vector boson cloud around a rotating black hole is bounded as ρ max A ∼ µ 2 A µ 2 σ /g 2 . Then, the ratio of the total angular momenta of the cloud and the black hole is
JHEP01(2020)128
where V denotes the effective volume of the cloud and we took V π(αµ A ) −3 (see appendix A). Therefore, for µ σ α 5/2 gM Pl , the total angular momenta of the vector cloud is much smaller than that of the central black hole and hence the superradiance cannot take a substantial fraction of the black hole energy and spin away.
Here is one remark. In the above discussion X −(A 0 ) 2 + (A i ) 2 is assumed to grow through the superradiance. It is justified as follows. In a pure massive vector field theory it is known that the vector field automatically satisfies the Lorenz condition D µ A µ = 0 through the equation of motion. One can show that the same is approximately true also in a theory with effective vector potential (3.17) for |X/X max | 1. Since the typical time variation scale of the vector cloud is µ −1 A while the spatially varying scale is (µ A α) −1 , we should have (A i ) 2 > (A 0 ) 2 to satisfy the Lorentz condition and hence we can take X ∼ (A i ) 2 .
As in the case of axion, still there may be energy extraction processes from the system of black hole and vector boson cloud. Note that the oscillating A µ field cannot induce particle production of σ in our setup since g 2 Xµ 2 A µ 4 σ , the condition we discuss in the next subsection, is not satisfied. However, there are effective self-interactions of the vector field as expressed in (3.17). It induces the emission of the vector boson and extracts the energy and angular momentum of the system. Similarly to the axion case, we can estimate the emission rate as (see appendix B for derivation) where C is a numerical constant independent of the vector boson mass µ A . A detailed numerical simulation is required to find a value of C. The energy/spin loss time scale of the black hole is then given by which significantly depends on the value of µ σ /g. We have again shown the relation between the spin loss time and the characteristic scale, µ σ /g, in figure 3. As we have discussed, the exponential growth eventually stops for µ σ /g 10 18 GeV. However, as is the case for the axion, in some parameter regions, the nonlinear particle emission process is faster than the accretion and the substantial spin of the black hole is extracted. Thus, such parameter regions can be constrained from observations.
Scalar with four-point interaction
Finally, let us discuss a scenario where the superradiant degree of freedom is a scalar boson, φ, which interacts with another scalar particle, χ. Ignoring the other interactions, the Lagrangian is In the following, we only consider the case of µ χ µ φ for simplicity. Note that the φ 2 χ 2 interaction necessarily introduces the effective self-interaction of φ as where Λ denotes the renormalization point. 10 Thus the φ potential becomes effectively quartic for |φ| φ NL ≡ 8πµ φ /g 2 . First, let us suppose that the φ potential is well approximated by the quadratic one: V = µ 2 φ φ 2 /2. Assuming that φ = φ 0 sin µ φ t inside the superradiance cloud, the equation of motion for χ k (t), the k-mode of χ field, is Although the background geometry is not flat, the Fourier decomposition is justified as far as we are interested in the short wavelength modes. This is the Mathieu equation that is analyzed in detail in the context of reheating after inflation [44][45][46][47][48]. Let us shortly review what happens in the limit where φ is spatially homogeneous. First note that in the small φ amplitude regime essentially no particle production happens since µ χ µ φ and the perturbative particle production is kinematically forbidden. For the large amplitude regime, gφ 0 µ φ µ 2 χ , there are χ modes satisfying 10 It should be understood that the mass of φ around the origin φ = 0 is renormalized to be µ φ after summing up the bare mass and that arises from (3.22). Similarly, the four point φ self coupling is renormalized to be zero around the origin.
-13 -JHEP01(2020)128 and the χ particle with such momenta k exponentially grows through the so-called broad parametric resonance [47,48]. In this regime O(1) particles per volume ∆V ∼ k −3 * are produced within time duration of ∆t ∼ µ −1 φ , where k 2 * ≡ gφ 0 µ φ − µ 2 χ . Ignoring the selfinteraction of φ at this stage is justified if For a while, we assume that this inequality is satisfied. Otherwise, the φ potential would be dominated by the effective quartic one V ∼ (g 4 φ 4 /64π 2 ) log(φ 2 ) before the particle production is switched on. Now let us adopt a similar analysis to the boson cloud around a black hole. At the first stage the interaction term is negligible and φ cloud begins to grow due to the superradiant instability. When the amplitude reaches around gφ 0 µ φ ∼ µ 2 χ , the χ particle production begins to be efficient.
It should be noticed that the produced χ particles are relativistic at the very instant of their production, which happens at the time interval ∆t ∼ k 2 + µ 2 χ /(gφ 0 µ φ ) around when φ passes through φ = 0, but they soon become non-relativistic as φ increases again. It implies that the most of the created particles cannot escape from the gravity of black hole, since a particle must be at least semi-relativistic during the time interval longer than GM BH ∼ µ −1 φ in order to escape. Therefore, through the particle production process, the χ cloud appears in association with superradiant φ cloud. Their time evolution is described bẏ whereṀ prod denotes the energy transfer rate due to particle production, which is basically an increasing function of φ 0 . 11 Thus, it is expected that the growth of φ cloud stops wheṅ M prod becomes comparable to the superradiant growth rate 2ω I M (φ) cloud . χ cloud still continues to grow and eventually the backreaction of χ to the φ potential becomes important, when the χ and φ energy density become comparable. Then, the χ particle production is terminated.
If the inequality (3.25) is inverted, the effective φ 4 potential becomes important before the particle production process becomes efficient. In this case, the superradiance is expected to stop at φ 0 φ NL . Therefore, in either case, the interaction term tends to make the superradiant instability inefficent. A precise estimation is difficult because of the nontrivial configuration of the both clouds and nonlinearity, but it is a reasonable expectation that the growth of the cloud stops when the particle production becomes efficient or the nonlinearity of the potential becomes effective.
Finally, we briefly comment on the case of interaction with a Fermion. The Lagrangian is (3.28) 11 After time average over one φ oscillation µ −1 φ , we may haveṀ prod ∼ gµ φ φ0k 3 * V.
JHEP01(2020)128
The broad parametric resonance again creates ψ particle out of the φ background. The difference is that the parametric resonance does not grow up exponentially unlike the scalar interaction because of the Pauli's exclusion principle [49][50][51]. However, the non-linear self interaction for φ generated by the ψ loop may make the superradiance inefficient.
Conclusions and discussion
Recently, the black hole superradiance phenomena have been drawing lots of attention as a probe of ultralight boson fields. The most previous studies focused on the case where the boson is a free field and there have been much progress on the understanding of the physics of superradiance and its phenomenological implications.
In this paper, we have discussed several effects of nonlinear interactions of the boson field on the superradiance and the resultant evolution of black holes. Although it is difficult to precisely calculate the evolution of the boson cloud and the black hole due to the nonlinearity, we can still make reasonable estimates for the nonlinear effects. One of the key effects is the saturation of the field amplitude at which the nonlinearity becomes important. It has two possible origins: the modification of the scalar potential itself and the effect of particle production. We partly relied on the fact that the numerical simulation of the selfinteracting axion cloud shows a saturation of the field value around which the nonlinear effect becomes important [12,43]. It is not clear what happens for the general form of the nonlinear scalar potential, but it is unlikely that the cloud continues to grow even if the particle production becomes very efficient. We need further studies on this point. The other key effect is that the nonlinear interactions lead to the emission of high momentum particles from the boson cloud, which extracts energy and angular momentum of the cloud. Even if the field amplitude is saturated as explained above, there is a gradual energy loss process.
Taking these effects into account, we have considered the evolution of the black hole and surrounding boson cloud for some concrete examples. The standard model photon can experience the superradiant instability since it has a plasma mass and it can satisfy the superradiant condition if there are PBHs in the early universe. However, it necessarily causes the Schwinger pair production as the photon field grows and there is an upper bound on the efficiency of the energy injection from PBHs to the plasma. We have shown that the constraint on the PBH abundance may be much weaker than the previous estimate.
The axion with cosine potential is also considered. This case is already studied numerically [12,43]. Our whole picture, i.e., the saturation of the field value and the extraction of energy and angular momentum, is roughly consistent with the numerical study.
We have also discussed the light hidden photon whose mass comes from the Higgs mechanism. It is shown that the growth of the hidden photon due to the superradiant instability modifies the Higgs potential so that the configuration of the Higgs expectation value around the black hole becomes nontrivial. In the limit of heavy Higgs, we effectively obtain a theory of the self-interacting hidden photon, which is a bit similar to the axion. Depending on the value of µ σ /g, the production of the hidden photon can be so inefficient that there are essentially no observable consequences. Conversely, we can constrain µ σ /g from observations.
JHEP01(2020)128
Our study may not go beyond rough order-of-magnitude estimations, but it shows several drastic effects on the boson cloud and black holes and their observational consequences. Since there are a priori no reasons to expect that these ultralight bosons are free massive fields, it is essential to understand nonlinear effects precisely for the purpose to prove a nature of ultralight bosons, although they require detailed numerical simulations. We leave these issues to future works.
A.1 Schwinger pair production rate
Here, we discuss a superradiant instability induced by a massive vector boson, A µ , with a current interaction between a Dirac Fermion with charge −1, ψ, which we call as the electron. The Lagrangian for A µ is where F µν ≡ ∂ µ A ν − ∂ ν A µ , µ A is the mass of A, e is the matter charge, J µ is the electron current and m e is the electron mass. The effective Lagrangian is obtained by integrating the electron fields out [52,53].
where D µ ≡ ip µ − ieA(x) µ , |x is the eigenvector of an infinite dimensional matrixx µ with an eigenvalue x µ corresponding to the spacetime coordinate andp is the matrix satisfying [x µ ,p ν ] = −iη µν . Ignoring the mass term, for the constant electromagnetic field F µν , we can calculate the matrix element exactly and obtains the so-called Euler-Heisenberg Lagrangian [54].
JHEP01(2020)128
then the other states, the electron and positron pair, emerge from the background. Thus, the discrepancy is the electron pair creation rate. Because the electron pair production rate per unit volume, Γ S , is given as Here, the magnetic field is assumed to be zero and the electric field is denoted by E. This pair-creation process is called the Schwinger effect [55].
A.2 Comparison with superradiance rate
We compare the energy loss rate by the Schwinger effect with the energy extraction by superradiance from a Kerr black hole. Around the Kerr black hole, the superradiant cloud has a size around (αµ A ) −1 . Thus the typical magnitude of the electric field is E µ A A, where A is the typical amplitude of the vector boson. Due to the superradiant instability, A(t) grows exponentially with a frequency ω I : A(t) ∝ exp(ω I t). The energy density carried by the vector field is ρ A ∼ µ 2 A A 2 /2. As the vector cloud grows, electron-positron pair is produced through the Schwinger effect, which reduces the energy of the vector cloud. The change of the energy density of the vector boson cloud around the Kerr black hole may be described bẏ For E > πm 2 e /e, the Schwinger pair production process is unsuppressed. In this case one can easily estimate that the Schwinger production rate exceeds the superradaince rate if ω I < α e m e /π 2 , which is satisfied for the standard model photon and electron. Therefore, it is expected that the superradiant growth stops at some instant where the Schwinger production rate becomes comparable to the superradiance rate.
Note that the produced electrons and positrons are accelerated by the electric field but they do not give net energy loss of the vector cloud. This is because the electric field E is oscillating with time scale µ −1 A and correspondingly the velocity of the electron/positron v is also oscillating, but the work done by the electric field is proportional to E · v that becomes zero after time average. Actually, however, there are number of effects that can reduce the electron/positron energy: the electron-positron pair annihilation, synchrotron emission associated with the magnetic field, interaction with plasma, absorption by the black hole, etc. Nevertheless, eq. (A.6) gives a conservative estimate for the upper bound on the magnitude of the vector boson amplitude. As will become clear below, the actual upper bound is not so sensitive to the detailed process because the Schwinger production rate is exponentially dependent on the vector boson amplitude. Now let us estimate more precisely. We take the following approximate configuration for the vector field with the dominant mode ( , j) = (0, 1) [23],
JHEP01(2020)128
where θ is the polar angle, A(t) denotes the overall amplitude which grows with time during the superradiant phase and we introduced dimensionless radial coordinater ≡ αµ A r. This form of the solution is valid except for the near horizon region. By substituting it, we obtain neglecting terms suppressed by powers of α. From this expression we may define the effective volume of the cloud as V ≡ π(µ A α) −3 . On the other hand, the Schwinger production rate is given by to the leading order in α, where we have defined dimensionless vector amplitude A ≡ eµ A A/(πm 2 e ). Therefore, from (A.6), the superradiance rate becomes comparable to the Schwinger production rate when The energy extraction rate from the cloud and black hole system is saturated atṀ tot ∼ −2ω I ρ max A V. Note that as is seen from eq. (A.3), for E > πm 2 e /e, the nonlinear effect in the effective Lagrangian becomes larger and the effective mass may changed by O(1). After all, the effect of the Schwinger pair production constrains the efficiency of the superradiance and the energy density of the vector boson cloud is bounded as (A.11), although still there is a gradual energy loss due to the pair production. In section 3 we discuss phenomenological implications in the context of the photon superradiance around primordial black holes.
A few comments are in order. First, we discuss the validity of the use of the Schwinger pair production rate calculated for a static electric field. If the one electron pair creation rate is much larger than the oscillation frequency of the vector boson, our assumption is justified. The former is 12 the constant field approximation is good enough. Actually this is well satisfied around A ∼ A max for the case of photon superradiance around PBHs mentioned above. Second, we comment on the Pauli blocking effect. If the electron and positron in the cloud were confined and abundant, the Schwinger process would be stopped due to the Pauli blocking. However, if the electron-positron abundance is high enough, the pair annihilation process also occurs. This can put the upper bound on the number density of the electron and positron. Let E e denote a typical energy of the electron/positron and we express the electron/positron number density as, for E e ∼ m e , where σ ann is the annihilation cross section and v is the relative velocity. The annihilation rate is smaller than the supply of the electron-positron pair, eq. (A.12), if (A. 16) | 10,309.6 | 2019-10-14T00:00:00.000 | [
"Physics"
] |
Computational analysis of functional SNPs in Alzheimer’s disease-associated endocytosis genes
Background From genome wide association studies on Alzheimer’s disease (AD), it has been shown that many single nucleotide polymorphisms (SNPs) of genes of different pathways affect the disease risk. One of the pathways is endocytosis, and variants in these genes may affect their functions in amyloid precursor protein (APP) trafficking, amyloid-beta (Aβ) production as well as its clearance in the brain. This study uses computational methods to predict the effect of novel SNPs, including untranslated region (UTR) variants, splice site variants, synonymous SNPs (sSNPs) and non-synonymous SNPs (nsSNPs) in three endocytosis genes associated with AD, namely PICALM, SYNJ1 and SH3KBP1. Materials and Methods All the variants’ information was retrieved from the Ensembl genome database, and then different variation prediction analyses were performed. UTRScan was used to predict UTR variants while MaxEntScan was used to predict splice site variants. Meta-analysis by PredictSNP2 was used to predict sSNPs. Parallel prediction analyses by five different software packages including SIFT, PolyPhen-2, Mutation Assessor, I-Mutant2.0 and SNPs&GO were used to predict the effects of nsSNPs. The level of evolutionary conservation of deleterious nsSNPs was further analyzed using ConSurf server. Mutant protein structures of deleterious nsSNPs were modelled and refined using SPARKS-X and ModRefiner for structural comparison. Results A total of 56 deleterious variants were identified in this study, including 12 UTR variants, 18 splice site variants, eight sSNPs and 18 nsSNPs. Among these 56 deleterious variants, seven variants were also identified in the Alzheimer’s Disease Sequencing Project (ADSP), Alzheimer’s Disease Neuroimaging Initiative (ADNI) and Mount Sinai Brain Bank (MSBB) studies. Discussion The 56 deleterious variants were predicted to affect the regulation of gene expression, or have functional impacts on these three endocytosis genes and their gene products. The deleterious variants in these genes are expected to affect their cellular function in endocytosis and may be implicated in the pathogenesis of AD as well. The biological consequences of these deleterious variants and their potential impacts on the disease risks could be further validated experimentally and may be useful for gene-disease association study.
136 Analysis of sSNPs 137 PredictSNP2 (http://loschmidt.chemi.muni.cz/predictsnp2/) is a web server that predicts 138 Table 1 lists the variants that change the number of motif matched in UTRSite, as 232 compared with their wild type UTR sequences (see Table S3 for complete prediction analysis 233 results). Both 3' and 5' UTRs are enriched with cis-acting regulatory elements, and both UTRs 234 are important in the regulation of protein expression. In this study, a total of 12 UTR variants 235 were predicted to cause an addition or a deletion of regulatory elements in the UTR sequences. 236 Further analysis on the impacts of these regulatory elements, including the effect of the miRNA 237 binding to the UTR sequence is outside the scope of this study. However, it should be 238 characterized in the future.
239 Prediction analysis of splice site variants 240 The prediction analysis of splice site variants was done using MaxEntScan, which is 241 integrated in VEP. Submission to MaxEntScan requires only the SNP IDs. The consensus score 242 and score difference between wild type and mutant sequences were obtained after the 243 submission. The consensus score for each variant was calculated based on different protein-244 coding transcripts and the same variant may have different consensus scores on different 245 transcripts. This allowed users to study the impact of the splicing variant on different transcripts. 246 Table 2 shows the variants with score difference exceeding the defined threshold. Columns Manuscript to be reviewed 277 Prediction analysis of nsSNPs 278 All 759 nsSNPs in the three genes were analyzed by five prediction packages. The 279 Ensembl genome database contains prediction results from SIFT and PolyPhen-2, in which a 280 total of 106 nsSNPs were predicted as "damaging" in SIFT and "probably damaging" in Table 4 while the complete prediction results of nsSNPs using five prediction 285 packages are shown in Table S8. All the deleterious nsSNPs were predicted with high SIFT score 286 and most of them have a PSIC score larger than 0.95 in PolyPhen-2. Prediction results from SIFT 287 and PolyPhen-2 showed that all deleterious nsSNPs were highly conserved in the proteins ( 295 the formation of clathrin-coated pit, which is one of the key functions of PICALM in CME 296 (Ishikawa et al., 2015). These deleterious nsSNPs were predicted to cause conformation change 297 and affect PICALM protein function. Figure 1 shows the sticks representation of the protein 298 structural changes caused by the deleterious nsSNPs in PICALM gene. In Fig. 1, the variant 299 residues are colored yellow while red dashed lines indicate the hydrogen bonds between the 300 residues. Variants rs780443419 (F109S) and rs765338634 (L179P) resulted in an addition or a 301 deletion of hydrogen bond formation between the mutant and neighboring amino acids.
302 Therefore, the substitution of these protein residues could significantly affect the ANTH domain 303 function as well as the overall PICALM protein structure.
304
For SYNJ1 gene, 13 nsSNPs were predicted as deleterious. Six of them including 305 rs781675993, rs398122403, rs762909719, rs771755243, rs768897710 and rs779479360 are The structural and functional importance of the 18 deleterious nsSNPs were further 328 analyzed using ConSurf analysis tools. Evolutionary conservation analysis determines the level 329 of conservation of each protein residue and predicts the potential structural and functional 330 importance of these deleterious variants to the protein. Figure 2 shows that 17 out of 18 (94%) 331 deleterious nsSNPs were analyzed to be "conserved", with 12 of them (70%) "highly conserved" 332 (score "9") through homologous sequence alignment. Only one deleterious nsSNP, rs745418083 333 (L776S) in SYNJ1 gene, was estimated to be "intermediate" in terms of evolutionary 334 conservation. Besides that, 11 of the 18 (61%) deleterious variants were predicted as structural Manuscript to be reviewed 335 residues and the rest (39%) were functional residues. Figure S2-4 shows the conservation scores 336 of full length proteins of PICALM, SYNJ1 and SH3KBP1, respectively.
337
Besides the prediction analysis of the functional and structural importance of the 338 deleterious nsSNPs and their level of conservation on the proteins, the changes of physical and 339 chemical properties between wild type and mutant amino acids were studied. Table S9 shows the 340 hydropathy, polarity and charge differences between the wild type and mutant amino acids of the 341 deleterious nsSNPs. Table S9 shows that 345 the hydropathy in eight deleterious nsSNPs has changed from hydrophobic to hydrophilic, and 346 the polarity of four nsSNPs has changed from non-polar to polar. The substitution of amino acids 347 may affect both covalent and non-covalent interactions among amino acids, subsequently 348 influencing the stability and conformation of protein structure.
349
To study the role of nsSNPs in affecting the total free energy and the stability of protein To demonstrate the reliability of nsSNP prediction, we predicted the functional 372 consequences of other nsSNPs that have been previously studied in other benchwork 373 experiments. Ten PSEN1 pathogenic nsSNPs that have been validated experimentally to affect 374 amyloid-beta (Aβ) level were retrieved from ALZFORUM (https://www.alzforum.org/). Besides 375 that, another five PSEN1 nsSNPs, including three non-pathogenic variants and two variants that 376 have never been reported to be deleterious or disease-associated, were selected as negative 377 controls. The prediction results of the total 15 nsSNPs in PSEN1 gene are shown in Table S10. 378 The prediction results show that nine out of these ten pathogenic nsSNPs were predicted 379 deleterious by all five prediction packages used in this paper. The only pathogenic nsSNP that 380 was not predicted as deleterious variant, which is rs63750231 (E280A), has I-Mutant DDG and 381 SNPs&GO score that is lower than the cutoff point. All these five negative controls of PSEN1 382 nsSNPs were predicted to be non-deleterious to the protein. Manuscript to be reviewed 447 experiments to determine the functional consequences of all the SNPs, even for a single gene. 448 For that reason, computational methods become an alternative and important way to prioritize 449 the SNPs that are possibly structurally or functionally significant for the genes of interest. 450 Computational methods such as prediction and modelling tools allow researchers to identify 451 functionally significant SNPs from neutral SNPs. The prediction accuracy is expected to be 452 improved when results from multiple algorithms are combined to perform meta-prediction. 453 Besides that, computational methods are able to provide high throughput prediction results at In our study, a total of 56 rare variants in PICALM, SYNJ1 and SH3KBP1 genes were 531 predicted as deleterious variants. These deleterious variants were predicted to affect the 532 regulation of gene expression and protein functions. Three of these genes have cellular functions 533 involved in clathrin-mediated endocytosis (CME) and deleterious variants in these genes were 534 expected to affect the functions of these proteins in endocytosis. Moreover, these genes were 535 previously reported as AD-associated and they are implicated in the pathogenesis of AD. The | 2,081.8 | 2019-09-30T00:00:00.000 | [
"Biology"
] |
The Neurotoxicity of Nitrous Oxide: The Facts and “Putative” Mechanisms
Nitrous oxide is a widely used analgesic agent, used also in combination with anaesthetics during surgery. Recent research has raised concerns about possible neurotoxicity of nitrous oxide, particularly in the developing brain. Nitrous oxide is an N-methyl-d-aspartate (NMDA)-antagonist drug, similar in nature to ketamine, another anaesthetic agent. It has been linked to post-operative cardiovascular problems in clinical studies. It is also widely known that exposure to nitrous oxide during surgery results in elevated homocysteine levels in many patients, but very little work has investigated the long term effect of these increased homocysteine levels. Now research in rodent models has found that homocysteine can be linked to neuronal death and possibly even cognitive deficits. This review aims to examine the current knowledge of mechanisms of action of nitrous oxide, and to describe some pathways by which it may have neurotoxic effects.
Introduction
Nitrous oxide (N 2 O) has been used alone or in combination with other agents to produce analgesia and anaesthesia for over 150 years [1]. It shows anaesthetic properties at very high concentrations, with a minimum alveolar concentration needed to produce anaesthesia in 50% of subjects (MAC) of 104% [2]. To reach near this level without compromising oxygenation, hyperbaric conditions are OPEN ACCESS necessary which are impractical in a surgical setting. During the 1940s however, doctors began administering N 2 O in combination with a number of other non-volatile anaesthetic agents to allow for lower N 2 O concentrations to be used [1]. Nowadays, nitrous oxide is still used in combination with various anaesthetic agents such as isoflurane and ketamine for anaesthetic sparing, to allow lower concentrations of volatile or non-gaseous anaesthetics to be used [3,4].
Nitrous oxide is also commonly administered in a 50:50 mixture with oxygen to give analgesia during labour as it has no effect on awareness, and can be self-administered by the mother allowing for more personalised pain relief during contractions. In recent years, however, the safety and efficacy of nitrous oxide has been questioned [5,6]. While many studies show adverse effects of nitrous oxide anaesthesia, there is still no general consensus as to whether N 2 O is dangerous enough to warrant discontinuation as an anaesthetic or analgesic [7]. This review article aims to summarise the current evidence for toxicity of nitrous oxide. However, with the limited clinical data presently available on nitrous oxide toxicity it is, as of yet, too soon to draw conclusions.
Neurotoxicity
It has been documented through a series of clinical studies that nitrous oxide administration is associated with post-operative cardiac problems [6,8]. Further evidence is now mounting which implies nitrous oxide may also cause neurotoxicity. Often, neurological damage may not have overt symptoms and vulnerable patients, such as the elderly, may experience cognitive changes which may go unnoticed. It is often the most vulnerable patients exposed to anaesthetic agents, so one must be sure that the stress of surgery is not exacerbated by the anaesthetic agent which should be providing relief.
Many of the neurotoxic effects of nitrous oxide are dependent on exposure at a certain age or developmental stage. Extensive research indicates that the main periods of vulnerability to nitrous oxide neurotoxicity are during the perinatal period and again in the aged brain. The foundation work has mostly been carried out in rat models but more recent research work has extended into non-human primate models. In rats, perinatal development extends to postnatal day (PND) 7, juvenile rats are classed PND 22-28, adolescence begins around PND 30-35 while adulthood is reached at PND 60 [9]. This demonstrates the rapid development of rats in comparisons to primates and humans.
Perinatal Brain
Rat neurodevelopment is confined to a short period directly after birth, from postnatal day (PND) 0 to PND 7. This roughly translates to a period spanning the third trimester of pregnancy to approximately the 6th month of age in humans [10]. During this period there is a massive increase in programmed cell death as excess neurons are cleared and synapses of remaining neurons are strengthened, known as synaptogenesis. As can be seen in Table 1, rats exposed to N 2 O in combination with other clinical anaesthetics during this period have a consistent, excessive increase in apoptosis in various brain regions, most notably the retrosplenial cortex (RSC) and thalamus [11]. It was also found that these animals have long term impairment of cognitive function [12,13]. It has been shown that at PND 7 in rats there is peak N-methyl-D-aspartate (NMDA) receptor expression in the developing brain, which may explain the increased sensitivity to N 2 O [14,15]. This period is approximately equivalent to 20-22 weeks in humans. Table 1. An outline of major in vivo studies, regarding the neurotoxicity and mechanisms of nitrous oxide in combination with other anaesthetics, spanning the past 25 years. The studies cover a wide range of ages, anaesthetic concentrations, duration of anaesthesia and even species. Abbreviations: N 2 O-nitrous oxide; iso-isoflurane; PND-postnatal day; mo-month old; BDNF-brain derived neurotrophic factor; RAM-radial arm maze. Using non-human primates, these models can be taken a step closer to clinical relevance, achieving that which would not be ethical or feasible in a human study. No current studies have looked at N 2 O alone, but one study assessed neurotoxicity after a N 2 O/isoflurane mixed anaesthesia protocol in PND 5 or 6 rhesus monkeys [23]. Similar to rodent models, there was widespread cell death in the young monkeys, but interestingly they found a different pattern of distribution in comparison to rodent models. While rodents usually had cell death in the posterior cingulate and retrosplenial cortex (PC-RSC) and thalamus [11,14], this study found widespread apoptosis in the temporal gyrus, hippocampus and frontal cortex [23]. The primate study also found evidence of both necrotic and apoptotic cell death occurring, as opposed to simply apoptotic in the rat. Other studies involving administration of ketamine, another NMDA antagonist anaesthetic, to PND 3-6 rhesus monkeys demonstrated similar patterns of cell death and cognitive dysfunction as rodent models [24][25][26]. These studies also found that by PND 35, the neurotoxic effects of ketamine were no longer present.
Reference
Loepke et al. [27] undertook a review of all general anaesthetics administered to children in the perinatal period and found a wide range of variability in neurotoxic potential of anaesthetic agents. Nitrous oxide itself had not been subjected to any clinical trials but it was reported that in utero or perinatal exposure to N 2 O was correlated with short term neurological problems such as resistance to smiles and increased muscle tone [28]. This indicates that further research into the effects of N 2 O on infants should be undertaken, considering how popular N 2 O is as an induction agent and anaesthetic.
Aged Brain
A study by Noguchi et al. [29] compared a range of different aged rats, from PND 20-60, and discovered that at PND 30, MK801 started producing NMDA antagonist toxicity, with PND 60 rats having the highest level of cell death. This indicates that once past the early vulnerable stages after birth, juvenile rats are not as susceptible to the neurotoxic effects of NMDA antagonists, while adolescents are less vulnerable than adults. A number of rodent studies found that N 2 O administration alone or in combination with other anaesthetic agents produced cognitive deficits in aged mice (18-20 months old) [20][21][22]. As shown in Table 1, all studies from this lab used the radial arm maze (RAM) to test cognitive function, which is known to involve hippocampal and cortical memory circuits. One interesting study found that older rats were more susceptible to N 2 O in combination with ketamine than younger (6 month old) rats [19]. They hypothesised that this was due to reduced hepatic function in older rats, which resulted in slower clearance of ketamine from the body. This highlights the fact that N 2 O may not always be reliably compared to ketamine or other non-inhalational anaesthetics, due to their different metabolisation processes. There is also a trend, as seen in Table 2, that in adult rats, high concentrations of N 2 O given under hyperbaric conditions can result in neurotoxicity [31], however due to the fact that these concentrations are unfeasible in a clinical setting, this may not be altogether relevant except to help understand the toxic potential of N 2 O. Together, these results infer that there is a period, beginning in the weeks just after birth, extending until adulthood, where rats appear to have a less severe reaction to NMDA antagonist toxicity, except at clinically irrelevant concentrations. This may be due to changes in the brain during this period, where there is a high level of development but less programmed cell death. In line with these findings, Yon et al. [16] showed that between PND7 and PND14, rats become desensitized to the damage induced by an isoflurane/N 2 O/midazolam cocktail, and showed a significant increase in expression of Bcl-XL, an anti-apoptotic protein. Clearly, this warrants further study.
Molecular Mechanisms of Action
Despite widespread use for many years, the mechanisms by which nitrous oxide achieves its anaesthetic and analgesic properties have still not fully been elucidated. It has been suggested that opioid receptors are responsible for the analgesic properties of nitrous oxide. Research revealed that administration of Nalaxone, an opioid reverse agonist, inhibits the analgesic effects of nitrous oxide [37]. It is well known that there are a range of opioid receptors so it is difficult to pinpoint one specific receptor as being responsible. Work done on the abdominal muscles of mice found that the endogenous ligand for the κ-opioid receptor, dynorphin, may be the mediator of N 2 O antinociception [38,39]. However, both the µ-and ε-opioid receptors were found to have involvement in the rat hot plate test, which involves more peripheral nerves [40]. A study in the guinea-pig brain served to compare binding by nitrous oxide to opioid receptors in the brain and discovered N 2 O acts differently on µ-and κ-opioid receptors. µ-receptors were competitively inhibited by N 2 O while κ-receptors were non-competitively bound [41]. Another mechanism of analgesia appears to be via indirect T-type calcium channel inhibition by N 2 O [42]. This shows there is a high level of variance in how nitrous oxide modulates receptor activity to produce its analgesic effects.
For a drug to have an anaesthetic effect it must decrease excitatory output or increase inhibitory signals to result in a net loss of neuronal activation. In terms of nitrous oxide anaesthesia, the glutamatergic N-methyl-D-aspartate (NMDA) receptors have been implicated as a major site of action. NMDA receptors are the natural receptors for endogenous glutamate and are excitatory in nature. In this way nitrous oxide, as an NMDA antagonist, may inhibit excitatory signalling in the CNS. At the simplest neurological level, it was found that NMDA receptors were necessary for the behavioural effects of N 2 O in the nematode Caenorhabditis elegans, while volatile anaesthetics such as isoflurane or halothane had another, unspecified mechanism of action [43]. While we cannot directly translate findings in this organism to rodents or humans, NMDA receptors are a highly conserved structure through phyla, allowing for some level of comparison. Jevtovic-Todorovic et al. [30] looked at the mechanistic similarities between N 2 O and other NMDA receptor antagonists in an in vivo rodent model to discover that N 2 O acted via NMDA receptor antagonism. N 2 O was found to produce neurotoxicity, afford neuroprotection, and induce blockade of NMDA currents, as well as work in the same age-dependent manner as other NMDA receptor antagonists such as MK801. It also provides the same sort of dissociative anaesthesia as ketamine, an NMDA receptor antagonist, overall suggesting that N 2 O likely works through this receptor.
Further work has revealed that N 2 O also has some actions on the two-pore domain TREK-1 potassium channel [44]. This channel functions as a leak channel to release potassium from the cell, stabilising the resting membrane potential in neurons [45]. Previously, it has been shown that TREK-1 channels are important for anaesthesia and knockout mice for the channel are resistant to volatile anaesthetics [46]. TREK-1 has also been found to be important in various types of pain perception [47]. This ion channel could then be a factor in both the anaesthetic and analgesic actions of N 2 O.
Mechanisms of Neurotoxicity
There are various mechanisms which are responsible for its neurotoxic effects, such as NMDA antagonism, enzyme inhibition and alteration of cerebral blood flow. Different brain conditions have different vulnerabilities to each form of toxicity, with neonatal brains more susceptible to NMDA antagonism, vitamin B 12 deficient patients more prone to homocysteine mediated problems and the damaged brain often more vulnerable to changes in cerebral blood flow. Because of this, there is a wide range of patients to whom N 2 O may have some form of toxicity, with different groups being at greater risks than others. In this way it is extremely important to understand all the needs of a patient before giving nitrous oxide. The danger arises when nitrous oxide is given during dental procedures or as emergency analgesia, e.g., en route to hospital, where underlying problems such as vitamin B 12 deficiencies may be undetected. One case study details a patient who presented with weakness in her lower limbs as well as peripheral numbness [36]. MRI scans showed abnormalities on the cervical spinal cord consistent with small lesions. The patient was found to be deficient in vitamin B 12 and had been exposed to nitrous oxide for dental surgeries 2-3 months previously. Following 10 months of vitamin B 12 injections the symptoms had abated, yet this could have been avoided altogether if nitrous oxide had been avoided for this patient. This highlights the need to fully elucidate nitrous oxide mechanisms of toxicity, so that clinicians can make informed decisions regarding N 2 O use. This case is reflected in further case reports involving patients with no prior N 2 O abuse experiencing myelopathies following N 2 O anaesthesia [48], as well as patients with a history of N 2 O abuse [49][50][51][52].
NMDA Antagonism
NMDA receptors are excitatory receptors in the body which respond to the endogenous agonist glutamate. NMDA antagonists are known to have both protective and toxic effects depending on their activation. As glutamate is an excitatory neuromodulator, excessive release, for example after traumatic brain injury, can lead to excitotoxicity due to high influx of Ca 2+ into neurons. In this way, NMDA antagonists can provide protection against excitotoxic damage [30,53]. This was shown in a rat model of middle cerebral artery occlusion (MCAO), where 75% N 2 O provided a reduction in cortical, but not striatal, infarcts [54]. This was associated with increased performance in motor coordination tasks compared to MCAO animals with no treatment [55]. While this might suggest some use for N 2 O as a treatment for stroke due to its NMDA antagonist features, it has also been shown that N 2 O has the ability to inactivate tissue plasminogen activator (tPA), as well as increase haemorhage and blood-brain barrier dysfunction [56]. This would preclude its use for stroke as the NMDA antagonist benefits are outweighed by the negative effects.
If administered to the naïve brain, however, N 2 O has the ability to cause neurotoxicity itself. N 2 O has been shown to induce cell death in neurons after prolonged exposure, and shorter term exposure also leads to a more reversible vacuolisation [30,31]. Another potent NMDA antagonist, MK-801 [57], like nitrous oxide, leads to antagonism and thus reduction of signal from excitatory glutamatergic neurons. Despite showing anti-convulsive actions [58], it has not been introduced clinically due to the finding it can form lesions in the brain [59]. It has also been found to alter the structure and function of hippocampal synapses [60]. Jevtovic-Todorovic et al. [30] used MK-801 in a comparative study to investigate the possibility that N 2 O was a NMDA antagonist, with N 2 O showing identical physiological outcomes to MK-801. They found that both drugs induce a similar age-dependent toxicity in older rats. They also discovered that administration of GABAergic or muscarinic agents was successful in reversing the vacuolisation of neurons after N 2 O or MK-801 administration [30]. N 2 O is not as potent an antagonist as MK-801 but there appear to be numerous similarities between the two drugs in terms of toxicity. Similar to N 2 O, another NMDA receptor antagonist, ketamine, is often used as an anaesthetic agent. Ketamine is also coming under scrutiny since it was discovered that, like N 2 O, it has the ability to induce reversible or irreversible vacuolisation of neurons [19,25]. Ketamine has also been implicated in causing neuronal cell death by increasing NMDA NR1 subunit expression, leading to increased cytosolic calcium and increased cell death. However, this was after prolonged (24 h) expression and there is little evidence of this mechanism being involved in N 2 O neurotoxicity.
The neurotoxic actions of NMDA antagonists have been attributed to modulation of GABAergic inhibition of various neuronal pathways. The PC/RSC has been associated with learning and memory, as well as pain and awareness. Studies have found that NMDA antagonist administration can result in increased acetylcholine (ACh) release in the PC/RSC as well as the septohippocampal pathway, also involved in learning and memory [61,62]. Normally, NMDA receptors on GABAergic neurons act as an upregulating mechanism to ensure constant inhibitory GABA release. GABA acts upon receptors on cholinergic neurons in the PC/RSC such that ACh release is tonically inhibited. NMDA antagonists release this GABAergic inhibition of cholinergic neurons, allowing ACh release for as long as the NMDA receptor is antagonised. Both studies suggested that NMDA antagonists act not at the region where ACh release is recorded, but instead at some separate site, with GABAergic projections between both sites. In the case of the PC/RSC this was shown to be the basal ganglia [62], while for the hippocampus it appears to be the medial septum. These findings have been extended to show increased ACh release in the cerebral cortex of rats exposed for 1 h with 75% N 2 O [63]. The area postrema is one of the major centres involved in emesis and can be stimulated by acetylcholine [64]. This increased cholinergic output has been postulated to underlie the increased nausea and vomiting often accompanying N 2 O administration. It has yet to be studied if N 2 O can have similar effects on other pathways.
The group led by John Olney [65], who has carried out a mass of work in this field, has referred to this NMDA hypofunction as a cause of complex excitotoxicity. Antagonism of NMDA receptors on GABAergic neurons can release other pathways from the inhibitory control of GABA. These other pathways are usually excitatory in nature, such as the cholinergic pathway investigated above [62].
This excitatory disinhibition has now been implicated in a range of disorders such as schizophrenia and Alzheimer's disease [66][67][68][69]. While N 2 O most likely does not have as severe an effect as MK801 or ketamine due to its shorter duration of action, it is nevertheless important to consider how these changes in learning and memory centres may affect the very young or old brain.
Homocysteine Imbalance
One side effect of N 2 O which may mediate its toxic effects is indirect inactivation of methionine synthase, an important enzyme in the remethylation pathway converting homocysteine to methionine. Nitrous oxide irreversibly binds to the cobalt atom in vitamin B 12 , also known as cobalamin, via mechanisms which are not well understood. This leads to oxidation of the enzyme [70], eventually causing inactivation of vitamin B 12 . Vitamin B 12 is an essential cofactor for methionine synthase so inactivation leads to a loss of function of the enzyme. In the normal methionine cycle, methionine is converted to homocysteine (Hcy) via the intermediary molecules S-adenosyl-methionine (SAM) and S-adenosyl-homocysteine (SAH). From here, homocysteine can either be irreversibly converted to cystathione (eventually becoming glutathione) via the trans-sulfuration pathway, or reversibly converted back to methionine by methionine synthase. Homocysteine, a sulphur-containing amino acid, does not appear to have any inherent function in the body except as a part of this methionine pathway. However, it is known to have various toxic effects in the body so any accumulation can be detrimental. A randomised double-blind study into the effects of anaesthesia post-surgery found that N 2 O administration was associated with higher rates of heart attack, even with patients having non-cardiac surgeries [8]. Homocysteine has been associated with a high rate of cardiac problems [71,72] and patients are found to have elevated homocysteine levels post-surgery [73,74]. This cardiovascular dysfunction appears to be regulated by increasing coagulation and endothelial adhesion, promoting atherosclerosis, as well as altering vascular responses to certain molecules such as argenine via oxidative mechanisms [75].
Recent interest in homocysteine has revealed multiple pathways by which it causes neurotoxicity at a cellular and cognitive level. Homocysteine has been shown to act as an agonist on the glutamate binding site on NMDA receptors, having an opposing effect to N 2 O (see Figure 1). While this might suggest that N 2 O may counteract homocysteine excitotoxicity, in reality N 2 O is cleared from the system very quickly following cessation of anaesthesia, while homocysteine is known to stay elevated in humans serum for days. In adolescents, homocysteine levels return to baseline between 12 and 24 h [74], while in adults this post-exposure increase is still high at 24 h [73] and continued elevation has been noted for up to one week [76,77]. Lipton et al. [78] also report on the dual actions on the NMDA receptor that homocysteine can have. As well as being an agonist, Hcy can also act as a partial antagonist on the glycine binding site of the NMDA receptor. While this might imply that the two binding sites for Hcy would cancel each other out, one being excitatory and one inhibiting the excitatory potential, the reality is more complex, particularly in a brain injury setting. During brain damage such as stroke or traumatic brain injury, glycine levels in the brain become elevated and will overpower the partial homocysteine binding on the NMDA receptor, leading to an elevated excitatory output. Adding to this the agonistic effect of homocysteine on the glutamate binding site, this achieves an even higher level of excitotoxic damage [78]. 12 , an essential cofactor in the conversion of homocysteine to methionine. This inhibition of vitamin B 12 leads to a buildup of homocysteine, a toxic amino acid. Homocysteine is toxic via at least two mechanisms; increasing reactive oxygen species (ROS) leading to eventual apoptotic cell death, and NMDA receptor activation. NMDA receptor activation can lead to an increase in ROS due to an influx of calcium into the cell. While N 2 O is also an NMDA antagonist, it is only effective during the course of anaesthetic exposure, while the rise in homocysteine levels induced by N 2 O lasts for hours or even days, suggesting that homocysteine mediated NMDA activation would play a larger role in cell death than N 2 O antagonism could counteract.
It is known that methionine synthase inactivation is very fast in rats as compared with humans, with a half-time in rats of 5.4 min vs. 40 min in humans when exposed to 50% N 2 O [32] (See Table 2). However, 40 min is still well within the time frame of human exposure during surgery so this should not be taken lightly. Often, these increases in homocysteine may not reach detrimental levels in normal patients, but patients at high risk for hyperhomocysteinemia (HHcy; classed as Hcy levels > 15 µmol/L) could be severely affected if given N 2 O anaesthesia. There are numerous risk factors for HHcy such as Alzheimer's disease [79], vitamin B 12 deficiency [80], MTHFR gene mutation [81], age and gender [82]. One striking example of this is a case report involving a young child (3 months old) with an MTHFR gene mutation, leading to an MTHFR enzyme deficiency [35]. This enzyme is important in the remethylation pathway and deficiencies have been linked to HHcy. This particular patient was administered 60% N 2 O on two occasions during surgery, and within 3 weeks after surgery was admitted to hospital suffering from seizures. Less than 2 months post-surgery the patient had died and was found to have severe lesions in the brain, as well as nerve demyelination. At such a young age it is probable that the brain was extremely sensitive to molecular changes and the rapid and extreme increase in homocysteine levels appears to have been involved in the patient's death. This again highlights the need for clinicians to be vigilant in ensuring their patients are not at risk if exposed to N 2 O. It also showcases the range of physiological parameters which are important to be aware of before administering N 2 O, which dentists and paramedics, who routinely use N 2 O as an analgesic and anxiolytic, do not normally have access to.
Reactive Oxygen Species and Mitochondrial Dysfunction
A range of molecules involved in apoptotic mechanisms in neurons have been found associated with increased homocysteine levels. One of homocysteine's main mechanisms of cellular damage is oxidative stress, which involves the formation of reactive oxygen species (ROS). ROS are strongly involved in apoptosis and cell death so any increase in levels will be detrimental. It was discovered that NMDA receptor activation leads to production of O 2 •− free radicals in cerebellar granular cells [83], peroxyinitryte (ONOO − ) in the midbrain [84] and various ROS in the forebrain [85,86]. Since homocysteine can act as an NMDA agonist this may cause increases in ROS. Increased intracellular Ca 2+ following NMDA receptor activation may account for the increased ROS, whilst ROS can themselves cause a rise in intracellular Ca 2+ [87,88]. As seen in Figure 1, this increased Ca 2+ can lead to disturbances in mitochondrial function, resulting in the production of ROS [85,89]. This mitochondrial dysfunction may be a major pathway involved in homocysteine mediated neurotoxicity. It is interesting to note that oxidative stress and mitochondrial ROS formation play a role in Alzheimer's disease (AD) pathogenesis [90], and it has been shown that high plasma homocysteine is a reliable biomarker for AD [91], although there is no clear consensus as to any causal relationship [92]. AD treatments which act as NMDA antagonists (e.g., memantine) have been shown to reduce homocysteine mediated neurodegeneration [93]. This implies a common underlying mechanism between the two and highlights the damage that can result from homocysteine overload in the brain. Increased intra-mitochondrial Ca 2+ also induces formation of mitochondrial permeability transition pores (MPTP), which allows release of cytochrome C from the mitochondria. Cytochrome C can then bind with APAF (Apoptotic Protease Activating Factor) to form an apoptosome, leading to downstream activation of caspase 3, resulting in apoptosis and cell death [94]. It has been shown that the vacuolisation resulting from N 2 O exposure is in fact massive swelling of mitochondria [31]. Drugs increasing mitochondrial membrane stability have been shown to be protective against the neurotoxic effects of N 2 O when combined with midazolam and isoflurane [95]. This membrane stabilisation was associated with improved cognition in rats tested [95].
In Combination with Other Anaesthetics
While N 2 O induced anaesthesia may not show convincing evidence of danger to some, it is also prudent to assess the toxicity of N 2 O in combination with clinically relevant anaesthetic agents to more closely mimic real world scenarios. There are a series of papers which combine N 2 O with isoflurane which consistently show an increase in neuroapoptosis when the two are combined over either agent alone [17,20,96]. These findings have even been replicated in a non-human primate model, the rhesus monkey [23]. It appears that this neurotoxicity is correlated with age; younger animals are susceptible to increased neurodegeneration with isoflurane addition, while adults are less prone to neuronal damage [18,23]. This may be related to the dual function of GABAergic neurons. In young animals, GABAergic neurons are excitatory in nature for a short period postpartum, while in older animals they take on their normal inhibitory function [97]. This may mean that alongside N 2 O induced excitotoxicity, isoflurane, a GABA receptor agonist, can induce extra excitotoxicity, while in adults isoflurane may counteract the excitotoxicity. This excitatory GABAergic action is also found in humans in the few weeks after birth [98], which would suggest that this same enhancement of N 2 O excitotoxicity by isoflurane or any GABA agonist could be present in humans.
Strategies to Minimize Toxicity of N 2 O
The primary concern in medicine is to cause no harm; therefore it would not be possible to perform procedures without anaesthesia. The stress and damage caused by this would be greater than any deleterious side effects from N 2 O anaesthesia. However, although N 2 O has been used for over a century, it should not be excluded from examination, and if similar or better alternatives are available, they should perhaps be utilised.
A number of possible adjuncts have been put forward. Xenon, another gaseous anaesthetic agent, has been found to be neuroprotective in comparison with other anaesthetics, including N 2 O [13] and has already begun clinical trials for neonates at risk for hypoxic brain damage (CoolXenon2-ISRCTN75602528; Toby Xe-ISRCTN08886155). Melatonin also shows neuroprotective promise when combined with anaesthetics [18]. This may be even more relevant for N 2 O due to the proven effect of melatonin in decreasing homocysteine mediated neurotoxicity in animal studies [99,100].
In terms of possible replacements, a few studies have looked at remifentanil, a fast acting opioid analgesic. Due to its speed of recovery, it has been suggested as a replacement for N 2 O in neurosurgery, as it does not adversely affect cerebral blood flow, unlike N 2 O [101]. Remifentanil has also been suggested as a labour analgesic agent if administered intravenously [102].
Conclusions
At the moment, it is premature to suggest that N 2 O should be discontinued as an anaesthetic agent. However, the growing body of evidence does support the theory that N 2 O has some neurotoxic effects and these results should not be taken lightly. Nitrous oxide is regularly used for neonatal surgery and, as shown, this is a high risk period for neurodevelopment. It is difficult to assess the long term cognitive outcomes in humans, but rat studies suggest long term developmental issues such as memory impairment. Nitrous oxide is also often used in elderly or brain damaged patients and it is clear from numerous studies, such as the ENIGMA trial, that N 2 O is not as harmless as some might believe. It is important that further molecular work be carried out to determine the pathways by which N 2 O has its toxic effects, as these pathways may reveal areas for drug development to replace or work alongside N 2 O to mitigate its neurotoxic effects. It would also be advisable to carry out further studies on non-human primates to determine any differences between rodent studies. At the moment, from rodent studies, we can only make educated assumptions on what might occur in humans. Non-human primates can help bridge this gap in knowledge without compromising patient safety in clinical trials. | 7,320.6 | 2014-01-28T00:00:00.000 | [
"Biology"
] |
An Uncertain Alternating Renewal Insurance Risk Model
(e claim process in an insurance risk model with uncertainty is traditionally described by an uncertain renewal reward process. However, the claim process actually includes two processes, which are called the report process and the payment process, respectively. An alternative way is to describe the claim process by an uncertain alternating renewal reward process.(erefore, this paper proposes an insurance risk model under uncertain measure in which the claim process is supposed to be an alternating renewal reward process and the premium process is regarded as a renewal reward process. (en, the paper also gives the inverse uncertainty distribution of the insurance risk process. (e expression of ruin index and the uncertainty distribution of the ruin time are derived which both have explicit expressions based on given uncertainty distributions. Finally, several examples are provided to illustrate the modeling ideas.
Introduction
e classical insurance risk models and their extended models generally assume that claim numbers and claim amounts are random variables. ey also suppose that the time of the accident and the time of payment are consistent; that is, the insurance company immediately pays compensation to the insured when the accident occurs. en, many types of insurance risk models are presented by means of stochastic process based on the probability theory; several scholars, for example, Dickson and Hipp [1], Li and Garrido [2], Dickson and Hipp [3], Chun [4], Gerber and Shiu [5], Sundt and Teugels [6], Paulsen and Gjessing [7], Albrecher and Hipp [8], and Yu et al. [9], extend the insurance risk model by considering inflation, dividend, and tax. Using Lévy process to model insurance risk processes and other insurance product has become popular, see, for example, Griffin [10], Biffis and Kyprianou [11], Zhang et al. [12], and Yu et al. [13].
However, for a new insurance product, it usually lacks historical data to estimate probability distributions. In this situation, we often use the belief degrees of the claim numbers and claim amounts estimated by some experienced domain experts to describe the indeterminacy. Kahneman and Tversky [14] find that humans tend to place too much emphasis on unlikely events. us, probability theory is difficult to model the belief degree unless we get enough historical data, and it is unreasonable to describe an insurance risk process with stochastic process. e research about the insurance risk process that is described by a stochastic process has been gradually challenged by many scholars. De Wit [15] first developed an insurance risk process under fuzzy theory. en, the insurance risk processes with fuzziness are studied by Lemaire [16], Cummins and Derrig [17], Derrig and Ostaszewski [18], Yu [19], Shapiro [20], and Li et al. [21]. Furthermore, Huang et al. [22] and Shapiro [23] regard the claim amounts as fuzzy random variable and propose the fuzzy random risk model.
When the estimated distributions are not close enough to the real frequencies, Liu [24] invented the uncertainty theory, which is used to model human indeterminacy due to belief degrees. Now, the theory has been applied to construct insurance risk models. Considering the human uncertainty in running an insurance company, Li et al. [25] propose a premium principle under uncertain measure via the distortion function. To study the evolution of uncertain phenomenon over time, Liu [26] develops the concept of uncertain process. Meanwhile, Liu also presents the uncertain renewal process as a special and important case. After that, Liu [27] researches the uncertain renewal reward process. Yao and Relescu [28] apply the uncertain renewal process to analyze an age replacement policy. In addition, Yao [29] studies the uncertain calculus of the uncertain renewal process by proposing the integration and differentiation about the renewal process. Yao and Li [30] regard off-times and on-times as uncertain variables and propose an uncertain alternating renewal process. Zhang et al. [31] also show a delayed renewal process for uncertain interarrival times, where the first interarrival time is completely different from other times. Recently, Liu [32] provides an uncertain insurance risk model by applying the renewal reward process and derives the ruin index. In the uncertain insurance risk model, Liu assumes that the premium is a real function proportional to time and the claim amount obeys an uncertain renewal reward process. Based on Liu's insurance risk process, Yao and Zhou [33] further investigate the uncertainty distribution of the ruin time. Yao and Qin [34] point out that the premiums follow the renewal reward process in actual applications rather than a real function, so they propose an uncertain insurance risk process in which both the premiums and claims follow the uncertain renewal reward process. Liu et al. [35] extend Liu's model and discuss an uncertain insurance risk model with a variational lower limit. Liu and Yang [36] establish an uncertain insurance risk process considering an insurance company having multiple claims with uncertainty theory.
In the above studies, the claim process is regarded as a renewal process or a renewal reward process. In fact, the moment of the accident and the moment of the payment are not simultaneous, and in many types of insurance, the time interval between a claim event and the determination of the payment for the claim can be very long [37]. So, the claim process should include two processes: the report process and the payment process. e report process refers to the insured formally notifying the insurance company about an event. e payment process refers to the process whereby the insurer reviews the claim and sees whether the event or situation falls within the risks covered by the policy. at is to say, the insurer will need to determine that the claim meets the terms and conditions of the insurance policy. Obviously, the two processes should follow different uncertainty distributions. erefore, it is more reasonable to view the claim process as an uncertain alternating renewal reward process. At present, few studies have considered the claim process in an insurance risk model as an uncertain alternating renewal process.
Inspired by the ideas we have reviewed, this paper proposes an insurance risk model in which the claims follow an uncertain alternating renewal reward process, while the premiums follow an uncertain renewal reward process. e inverse uncertainty distribution of insurance risk process, ruin index, and ruin time are derived. We also compare our model with Yao and Qin's model through numerical examples and explain the significance of describing the claim process by uncertain alternating renewal reward process. e rest of the paper is organized as follows. Section 2 presents an uncertain alternating renewal insurance risk model and gives the expressions of ruin index and ruin time. In Section 3, several examples are provided to clarify the modeling idea of the insurance risk model. Finally, conclusions will be listed at the end.
An Uncertain Alternating Renewal Insurance Risk Model
Next, we study an insurance risk process in an uncertain environment. e premium process and the claim process are regarded as an uncertain renewal reward process and an uncertain alternating renewal reward process, respectively. In the following discussions, we give an assumption that the new claim event will not occur during the period of reviewing the current claim. Let the premium process be an uncertain renewal reward process: where P 1 , P 2 , . . . are independent uncertain premium amounts, and is an uncertain renewal process with independent uncertain interarrival times ξ 11 , ξ 12 , . . .. e claim process is an uncertain alternating renewal reward process: where C 1 , C 2 , . . . are independent uncertain claim amounts. Consider the claim process as an uncertain alternating renewal reward process, and the claim process can be described as the following process. e first event happens as well as the insured reports claim for an uncertain time ξ 21 . After an uncertain time ξ 21 , the insurer reviews your claim and provides a payout for an uncertain time η 21 and at an uncertain claim amount C 1 . Next, the second event happens and the insured reports claim for an uncertain time ξ 22 . After an uncertain time ξ 22 , the insurer reviews your claim and provides a payout for an uncertain time η 22 and at an uncertain claim amount C 2 . e process continues infinitely (see Figure 1). en, let be an uncertain alternating renewal process, where ξ 21 , ξ 22 , . . . and η 21 , η 22 , . . . are independent uncertain interarrival times. S n � (ξ 21 + η 21 ) + (ξ 22 + η 22 ) + · · · + (ξ 2n + η 2n ) denotes the moment of the payment of the nth claim. Let a be the initial capital, then the capital of an insurance company at time t is e Z t is an insurance risk model with an uncertain alternating renewal reward process. Apparently, once Z t < 0, the insurance company faces the risk of ruin.
For an uncertain insurance risk model Z t , in order to obtain some important theorems, the following notations will be used: Φ: the uncertainty distribution of the premium amount P 1 Ψ: the uncertainty distribution of the claim amount C 1 μ 1 : the uncertainty distribution of the interarrival time ξ 11 μ 2 : the uncertainty distribution of the interarrival time ξ 21 λ: the uncertainty distribution of the interarrival time η 21
Ruin Index.
e ruin index can be defined as the uncertain measure that the capital Z t < 0 at time t. is section derives the explicit forms of the ruin index. Firstly, we get the inverse uncertainty distribution of an uncertain insurance risk process.
Theorem 1. For an uncertain insurance risk process
Proof. Note that, for any we have Since the uncertain variables are independent and according to the property of uncertain measure, we can obtain Furthermore, it follows from the monotonicity of uncertain measure that In addition, since for any we have According to the independence of these uncertain variables again that we get which is equivalent to the following form (see duality of uncertain measure): Above all, where m and n are nonnegative integers and Proof. Obviously, the ruin risk can be calculated by (1) For given nonnegative m and n, en, we have (2) For given nonnegative m and n, Mathematical Problems in Engineering 5 en, we also have Similarly, it follows from definition of uncertain measure that From (1) and (2), we can draw eorem 2 is proved. □ Theorem 3. Let Z t � a + R 1t − R 2t be an uncertain alternating renewal insurance risk process. en, the ruin index can be calculated by the following form: Proof. It is obvious that Because we assume the uncertain interarrival times are independent, we have Hence, eorem 3 is proved.
□
When uncertain variables P 1 , ξ 11 , C 1 , ξ 21 , and η 22 have determinate uncertainty distributions and corresponding inverse uncertainty distributions exist, we can calculate the 6 Mathematical Problems in Engineering crisp expressions of ruin index through eorem 3 and eorem 2, respectively.
Ruin Time.
In addition to the ruin index, ruin time can also be used to measure the risk of an insurance company. Next, the definition of ruin time will be given, and the uncertainty distribution of ruin time can be derived.
j�1 C j be the uncertain alternating renewal insurance risk process of an insurance company. en, the ruin time of the insurance company can be defined as It is easy to know that τ � +∞ means that the insurance company will not ruin. erefore, for any t ≥ 0, the ruin index also can be expressed in the following form: where M τ ≤ t { } denotes the uncertainty distribution of the ruin time.
We assume that the nth claim occurs at the instant S n � n i�1 (ξ 2i + η 2i ) and N 1t � m at this time. us, the capital of the insurance company at the nth claim is en, we have e uncertain event m i�1 ξ 1i ≤ t, n j�1 (ξ 2j + η 2j ) ≤ t, Y m,n < 0} means that the nth claim occurs before the instant t, and the capital of the insurance company is less than 0 at this time.
Theorem 4. Suppose that the inverse uncertainty distributions of all the uncertain variables exist in the insurance risk process. en,
where
Mathematical Problems in Engineering
According to the monotonicity of uncertain measure, it is obtained that Additionally, we have Similarly, we can obtain Above all, we get (40) eorem 4 is proved.
□ Theorem 5. Suppose that the inverse uncertainty distributions of all the uncertain variables exist in the insurance risk process. en, the ruin time τ has an uncertainty distribution: where
Mathematical Problems in Engineering
Proof. Since According to the monotonicity of uncertain measure, it is obtained α m,n (t). (45) Mathematical Problems in Engineering then, we have α m,n (t). (47) index of the insurance company to be ruin � 0.0223. In addition, when the initial capital a varies from 0 to 1200, we calculate the ruin index and present the results in Figure 2.
e results show that the ruin index falls from 0.3556 to 0.0106 when the initial capital a increases from 0 to 1200.
Conclusions
is paper extends an insurance risk model with an alternating renewal process in uncertain environment. We propose an uncertain insurance risk process in which the claim process is regarded as an uncertain alternating renewal reward process, and this process is more in line with the claim process in real life. Moreover, we provide the inverse uncertainty distribution of the uncertain insurance risk process and the ruin index. e explicit form of uncertainty distribution of ruin time is also derived. At last, several examples are provided to illustrate the proven results. In future research, some criteria such as inflation, dividend, and tax can be considered to further extend the uncertain insurance risk process.
Data Availability e data presented in Examples 1-5 and Figures 2-4 in this paper, which are used to support the findings of this study, are included within the article.
Conflicts of Interest
e authors declare that they have no conflicts of interest. | 3,353 | 2020-07-06T00:00:00.000 | [
"Mathematics"
] |
Residual life evaluation of remanufacturing blanks considering the crack closure
The nonlinear continuous fatigue cumulative damage model proposed by Chaboche was modified considering the effect of crack closure on fatigue damage, and the fatigue cumulative damage and residual life evaluation model of remanufactured blanks was obtained. The related parameters of the modified model were obtained from the data of symmetric cyclic tensile and compression fatigue tests. The residual life evaluation model of remanufactured blanks was verified by the two-stage loading (high and low loading and low and high loading) tensile and compressive fatigue test. The results show that the calculated values of the model are in good agreement with the test values, which proves that the modified model can accurately predict the residual life of remanufactured blanks.
Introduction
With rapid economic development and population growth, the lack of various resources is becoming increasingly serious, highlighting the urgent need for energy conservation. Green remanufacturing technology is an effective method for saving resources. Green remanufacturing refers to a series of technical measures or engineering activities that take waste products as the remanufactured work blank and use advanced manufacturing technology as the means to repair and transform these products. 1 Local failure areas on the blank structure in the remanufacturing process can be repaired by advanced technological means. However, fatigue damage is a process of the continuous accumulation of damage; the accumulated fatigue damage generated during its longterm service still exists, and its degree of damage is closely related to the service history of the product, the state of use, the environment, and other factors. The service records of used products cannot fully record this information, and the accumulated fatigue damage analysis plays an important role in predicting the fatigue life of a build or structure. Different parameters have been used to measure the damage of the material after managerial loading from different perspectives. Therefore, how to accurately describe the cumulative fatigue damage in the remanufactured workblank is a key factor restricting the assessment of its remaining life.
In this paper, the nonlinear continuous fatigue cumulative damage model proposed by Chaboche was modified based on the effect of the crack closure on the fatigue cumulative damage, and a new model was obtained to evaluate the remaining life of remanufactured blanks with a view to obtaining higher predictions. This guarantees the normal working capacity of the material, the safety of its use, and the economic efficiency of the enterprise.
Literature review
Since Palmmgren introduced the concept of fatigue damage in 1924, researchers have proposed more than 50 fatigue cumulative damage models. 2 Crack closure is a common phenomenon and can be caused by a variety of potential factors such as debris, oxides, or chemical deposits within the crack. 3 In the 1950s, Zapffe and Worden 4 discovered special features with the aid of electron microscopy and showed that crack growth was cyclical. In the 1960s, Sihet al. 5 used the stress intensity factor to describe the expansion of fatigue cracks. In the 1970s, Wolf formally proposed the concept of crack closure, 6 Johan Singh et al. used the concept of crack closure proposed by Wolf to conduct an experimental study on fatigue crack propagation of AISI 316 (N) weld, and compared the results with the results measured by acoustic emission technology. The results were in good agreement. 7 Subsequently, based on previous studies, several other mechanisms leading to crack closure, such as roughness-induced crack closure, 8 oxidation-induced crack closure, 9 and phase changeinduced crack closure, 10 have been proposed by researchers. Mcclung 11 considered plasticity-induced crack closure when studying the size of the forward and reverse plastic zone at the crack tip and developed new models for predicting the plastic zone size. Pippan and Hohenwarter 12 synthesized three types of crack closure, plasticity, roughness, and oxide, with special attention to the effect of the experimental measurements on the fatigue crack extension. Cuenca and Serna, 13 in order to evaluate the autogenous selfhealing and self-healing ability of early ultra-high performance fiber-reinforced concrete, conducted experiments to compare and analyze the crack closure on the specimen surface with the help of digital microscope measurements. Based on the original method, Xie et al. 14 proposed the axial stress difference method for objectively obtaining the crack closure stress. Enaki and Macovei 15 proposed an RICC model and a new method to evaluate fatigue crack closure, simulate fatigue crack closure, perform crack closure, and evaluate real cracks in use. Khoei and Eghbalian 16 studied the evolution pattern of the damage and destruction of ductile metals in cyclic loading; their case study analyzed the fatigue behavior and life assessment of the alloy and compared the results with the experimental data. Aktaaet al. 17 developed a new fracture mechanics method to determine the mode varying crack loading and to predict the crack expansion capacity.
Commonly used models include the linear fatigue accumulation damage model, 18 the bilinear fatigue accumulation damage model, 19 the energy-based fatigue accumulation damage model, [20][21][22] etc. These models are all designed to consider the effect of crack closure on fatigue damage. However, crack closure leads to the hindrance of fatigue crack expansion, so it can enhance the damage tolerance properties of materials and structures prone to fatigue fracture. Ignoring crack closure effects often leads to overly conservative life estimates, which can cause unnecessary material and economic losses. Subsequently, the model proposed by Chaboche and Lesne 23 has been more widely used because it takes into account the effects of damage caused by stresses below the fatigue limit, the loading sequence, and the average stress, and the model parameters are easily accessible. [24][25][26][27] Therefore, the effect of the crack closure on fatigue damage is considered in the life assessment analysis in this paper to improve the accuracy and economy of the life assessment model, which is of great significance for the life assessment of remanufactured blanks.
Modification of the nonlinear fatigue cumulative damage model
The nonlinear continuous fatigue cumulative damage model proposed by Chaboche and Lesne is as follows 23 : For the uniaxial fatigue problem, Chaboche and Lesne suggests using equation (2) to express the relationship between the damage and the number of fatigue loads 23 : In the above two equations, D is the damage variable, N is the number of fatigue load actions, s max is the maximum stress, s m is the average stress, b, M 0 , and b are the parameters related to the material, and a is the parameter related to both damage and load. Dattoma where if x.0, then x h i = x; and if x ł 0, then x h i = 0; s r is the fatigue limit of the material corresponding to the stress ratio R, s b is the strength limit of the material, and H and a are experimental constants. In this paper, H = 0:0801 and a = 0:434.
Effect of the crack closure on fatigue damage
The crack closure mechanism was first proposed by Elber in 1970, whose experimental results showed that fatigue cracks could be closed. Elber believed that the fatigue crack would not expand at just any moment.
The crack will open and expand only when the load is large enough to overcome the obstruction of the plastic zone at the crack tip. Before that, the crack would still be in a closed state even with the load. Accordingly, Elber proposed the concept of the effective stress intensity factor range: In the formula, DK eff -effective stress intensity factor range; K max -maximum stress intensity factor; K op -splay stress intensity factor. K op is usually determined by the compliance curve measured by the compensation method. 28 In other words, the crack can propagate only when the stress intensity factor is greater than K op . Therefore, the effective factor of crack propagation can be defined as: In the formula, j-effective factor of crack propagation; DK-stress intensity factor range; R-stress ratio. According to the experimental data, the relationship equation between the effective factor of crack expansion j and the loading ratio R was proposed.
Subsequently, many studies on the crack closure phenomenon have been conducted, and different expressions for the effective factor of crack expansion have been derived. Some of these expressions are shown in Table 1.
In this paper, the low carbon steel Q345R produced by Masteel was the research object, so the expression provided in reference 7 was used to express the effect of the effective factor of the crack growth on the cumulative fatigue damage.
When the crack closure effect is taken into consideration, equation (1) can be modified: Equation (2) can be modified: Assuming that the initial damage state of the metal material is D 0 = 0 N = 0 ð Þ, and the material is damaged when D = 1 N = N f À Á , integrating equation (7) with D 2 0, 1 ð Þ, the failure fatigue life of the material can be obtained: If the material does not fail after experiencing N n cyclic load action, the fatigue damage is D n 0\D n \1 ð Þ . Integrating equation (8) with D 2 0, D n ð Þ, the in-service fatigue life of the material can be obtained: Substituting equation (9) into equation (8), the expression for the material damage variable after considering the crack closure effect is obtained as follows:
Determination of the model's material parameters
In this paper, the Q345R steel produced by Masteel was selected to determine the material parameters of the nonlinear fatigue cumulative damage model. The main chemical composition and mechanical properties of Q345R steel are shown in Tables 2 and 3, respectively. The fatigue limit of the material involved in the model was determined by the fatigue test, and the shape and size of the specimen used in the fatigue test are shown in Figure 1. The axial tensile and compressive fatigue test was carried out at room temperature in a standard atmosphere with a stress ratio of R = 21 and a frequency of 15 Hz. The test process was divided into six levels of stress, and three fatigue specimens were tested for each level of stress. The S-N curve of the Q345R steel obtained from the fatigue test is shown in Figure 2, and its fatigue limit was 197 MPa.
Using the parameters of the material tensile strength s b , fatigue limit s 0:6 , and maximum stress s max , the value of parameter a was obtained according to equation (3). Based on equation (8), the fatigue test data were fitted in Origin software, using a custom function. The material parameters in equation (8)
Remaining life assessment of remanufactured blanks
The evaluation of the residual life of the remanufacturing blank tests whether the used parts can be used for remanufacturing. If the residual life reached the lifecycle specified by the product, it could be used for remanufacturing. Otherwise, even if the used parts had no other damage, they could not be used for remanufacturing.
For remanufactured blank material, there is initial damage D 0 6 ¼ 0, as it already has had service experience. Assuming that its initial damage was D n 0\D n \1 ð Þ , integrating equation (7) with D 2 D n , 1 ð Þ, the remaining fatigue life of the remanufactured blank is calculated as: In the formula, N r -residual fatigue life of the remanufactured blank.
In order to simulate the service experience of the used parts, a two-stage stress tensile and compressive fatigue test was carried out on the material. The number of stress actions of the first stage was used to simulate the service life of the specimen before remanufacturing, and the number of stress actions of the second stage when the specimen was fractured simulated its remaining life. During the test, the stress ratio was R = 21, and the frequency was 15 Hz. A high-low (low-high) fatigue load was used to load the fatigue specimen. After loading under the first stress for a certain number of times, the fatigue load was converted to the second stress, which continued until the specimen failed. The remaining life of the specimen was calculated according to formula (11), and the actual action times of the second-stage load were compared to verify the correctness of the model.
The two-stage loads used in the two-stage stress tension and compression fatigue test were 340 and 300 MPa. There were two conditions in the test: first, high load was applied, then, it was converted to low load after 20,000 times, until the specimen broke; second, a low load was applied, then, a high load was applied after 50,000 times, until the specimen broke. The second-stage loading times were recorded and compared with the remaining life calculated by Equation (11). The results showed that the calculated results of the model were in good agreement with the test values with the low-high loading method, and the error between the two was only 13.76%; however, the error between the two was as high as 61.21% under the high-low loading method, and the accuracy of the model was poor. Through analysis, we found that the Chaboche nonlinear continuous fatigue cumulative damage model considered the influence of loading order; however, the effect of the loading order on the crack closure was not considered when the crack closure effect was introduced to modify the model. Hence, the model accuracy was poor.
In order to consider the influence of the load loading sequence on the crack closure effect, assuming a twostage loading fatigue test, the number of acts of the first stage of the load was n 1 , and the number of acts of the second stage of the load was n 2 . The number of acts of the first stage of the load until the fracture of the specimen was n f 1 , and the number of acts of the second stage of the load until the fracture of the specimen was n f 2 . Then, the two stages were loaded until the fracture of the specimen. The actual damage of the specimen was 1, but the calculated damage of the specimen was: In the formula, D c -calculated damage of specimen.
The difference between the actual damage and the calculated damage was caused by the different effects of the different loads on the crack closure. Therefore, the effective crack growth factor was modified by the following equation: In the formula, j m -the effective crack growth factor considering the loading sequence. Thus, equation (11) for the residual fatigue life of the remanufacturing blanks was modified as follows: In the formula, N jm -influencing factors of the crack closure life considering the loading sequence; After introducing the loading sequence on the crack closure effect, the model accuracy was improved greatly. The main reason was the load; when the load shifted from one block to the other, the crack propagation mechanism changed obviously, and when load shifted from high to low or from low to high, the change law of the crack propagation mechanism was not the same. 26 So, considering the influence of the loading sequence, the accuracy of the model was improved by maintaining the actual situation of the crack propagation under load. The specific test and model calculation results are shown in Table 4. The calculation results of the Chaboche model are also listed in the table. The error between the calculated life of the Chaboche model and the test life was 21.25% in the case of low-high loading, while the error between the two was as high as 84.95% in the case of high-low loading. It was higher than that of the first revised model (13.76% and 61.21%) and much higher than that of the second revised model (4.09% and 12.98%).
The used fatigue life of the material was estimated from its service history; then, its initial damage D n was calculated by equation (10), and the remaining life of the remanufactured blank was calculated by equation (14). To determine whether a blank can be used for remanufacturing, we simply calculate the difference between the remaining life of the blank and the design life of the part. If the difference is greater than zero, it means that the remaining life of the blank can meet the needs of the lifecycle of the part and can be used for remanufacturing, and vice versa.
Conclusion
(1) The symmetrical cyclic tension-compression fatigue test of the Q345R steel at room temperature was carried out, and the S-N curve of the tension-compression fatigue was obtained. (2) A two-stage load loading symmetric cyclic tensile fatigue test was introduced; the number of times the first-stage load was applied simulated the service life of the remanufactured blanks, and the number of times the second-stage load was applied simulated the remaining life of the remanufactured blanks. | 3,935.4 | 2022-11-01T00:00:00.000 | [
"Materials Science"
] |
Qubit-compatible substrates with superconducting through-silicon vias
We fabricate and characterize superconducting through-silicon vias and electrodes suitable for superconducting quantum processors. We measure internal quality factors of a million for test resonators excited at single-photon levels, on chips with superconducting vias used to stitch ground planes on the front and back sides of the chips. This resonator performance is on par with the state of the art for silicon-based planar solutions, despite the presence of vias. Via stitching of ground planes is an important enabling technology for increasing the physical size of quantum processor chips, and is a first step toward more complex quantum devices with three-dimensional integration.
I. INTRODUCTION
P ERFORMANCE of superconducting qubits has greatly improved since the first demonstrations of quantum coherence, with dephasing time, in particular, increasing four orders of magnitude from 20 ns demonstrated by Chiorescu et al. in 2003 [1] to hundreds of microseconds measured recently [2]- [5]. To an extent, this astonishing progress in coherence time has been achieved by avoiding complexity in fabrication. State-of-the-art superconducting qubits are typically fabricated using an extremely restricted set of materials, a low thermal budget, and a minimal number of depositions and lithographic steps.
Besides long coherence times required to achieve highfidelity single and two-qubit gates, quantum computers also need to become sufficiently large to solve useful computing tasks. For example, tens or hundreds of millions of physical qubits are likely required for factoring thousandbit numbers using Shor's algorithm [6] and similar estimates have been given for quantum chemistry applications [7]. These estimates assume error correction based on the surface code [8], which is currently the most promising approach to quantum error correction. One attractive feature of the surface code is that it requires only two-dimensional nearestneighbor coupling between qubits, which makes a physical implementation of a large quantum computer more feasible. Nevertheless, separate control and readout lines still need to address essentially all of the qubits. Routing the control and readout lines around coherent qubit couplers necessitates the use of more than a single electrode layer in larger quantum processors. Consequently, moving to more complex fabrication seems unavoidable, either monolithically or by using multichip modules. Flip-chip bonded modules of two chips connected by superconducting bumps increase the layer count to two and air bridges further alleviate routing challenges. These have indeed been used successfully to construct processors of several dozen qubits [9]- [11], although with coherence times and gate fidelities significantly lower than in planar [12]- [16] or flip-chip bonded [17] single-or few-qubit devices.
Superconducting vias compatible with high-coherence qubits are an important next step toward larger processors. In addition to routing purposes, so-called via stitching is likely needed to shunt nominally grounded planes in different layers to control and push up the frequencies of harmful parasitic microwave modes that become problematic in physically large chips [18]. Traditional integrated circuit vias are, however, optimized for different goals, such as high normalstate conductivity and reduction of parasitic capacitance, VOLUME -, -1 arXiv:2201.10425v3 [quant-ph] 8 Nov 2022 instead of superconductivity and extremely low microwave loss required for qubit compatibility. Integrating their fabrication with qubits also poses challenges related to material compatibilities and the low thermal budget of aluminumbased qubits. Superconducting vias have long been used for multilayer wiring in superconducting quantum interference (SQUID) and single-flux quantum (SFQ) devices [19], [20], but the vias are shallow and pass through amorphous dielectric layers with poor microwave performance.
Yost et al. [21], [22] have on the other hand demonstrated through-silicon vias (TSVs) that have a relatively high aspect ratio and high critical currents, and show promise in terms of not destroying qubit coherence, as the demonstrated qubit relaxation time of 12.5 µs [21] and resonator internal quality factors of 10 5 to 2 × 10 5 [22] were identified to be limited by factors unrelated to TSVs. For comparison, widelyreproduced relaxation times for transmon qubits on silicon substrates are near 50 µs [12]- [17]. In addition, Gordon et al. have reported relaxation times of hundreds of µs [4]. Corresponding widely-reproduced resonator quality factors are roughly one million for typical co-planar waveguide (CPW) test resonator geometries [13], [14], [23]- [25], although this can be exceeded with deep trenching or short-lived oxide removal treatments [26]- [28]. Resonator quality factor is often used as a diagnostic predictor of qubit relaxation time for a qubit with electrodes fabricated using the same flow as the resonators. Others have also fabricated superconducting TSVs but the microwave performance of those approaches remains to be measured [29]- [31]. Furthermore, coherence times exceeding 300 µs have recently been demonstrated for transmon qubits on planar sapphire substrates [2], [3]. However, typical methods of etching high aspect ratio TSVs, like the Bosch process [32], [33], are not available for sapphire substrates.
In this article, we report on resonator internal quality factors of roughly a million measured on chips with TSVs stitching the ground planes on the front and back. The TSVs and resonators are fabricated on full 150 mm wafers, with a via-last approach where the first electrode layer is deposited and patterned before via formation. The via-last approach helps create a high-quality interface between the substrate and the critical electrode layer used for the resonators since the electrode layer is deposited on virgin wafers before other potentially harmful processing steps. Furthermore, the tantalum-based electrode layer used here has relatively low kinetic inductance, similar to commonly used niobium films. While high kinetic inductance is useful in superinductors, for example, in the fluxonium shunt inductor [34], kinetic inductance in electrodes of transmon qubits is not typically desirable. Low kinetic inductance tends to also yield better parameter control since geometric inductance is often more accurately reproducible than kinetic inductance, which tends to be very sensitive to variations in chemical composition and crystal structure. Fig. 1 presents our TSV structure consisting of the main electrode layer on the front side of the wafer, a hollow via with metallized walls, a metal membrane covering the via on the front side of the wafer, and a metallized back side (not visible). The fabrication process begins with sputtering the main electrode layer, i.e. a bilayer of 15 nm of titanium nitride and 200 nm of tantalum (orange in Fig. 1), on highresistivity silicon. For brevity, we refer to this as the Tabased electrode layer. We then pattern the electrode layer using photolithography and plasma etching. Next we deposit a sacrificial silicon dioxide layer using plasma-enhanced chemical vapor deposition and pattern holes in it for the membranes, using photolithography and plasma etching. We then sputter a 2-µm-thick titanium nitride layer and pattern it into circular membranes using photolithography and plasma etching (green in Fig. 1). We choose membrane sputtering parameters that yield relatively low compressive stress of approximately 190 MPa, as measured on 250-nm-thick reference films. Next we define the via holes on the back side of the wafer using photolithography and etch them using the Bosch process. Finally, we coat the inner walls of the vias and the back side of the wafer with Ti-N by using plasmaenhanced atomic layer deposition (ALD, purple in Fig. 1), and remove the sacrificial silicon oxide layer from the front side. The ALD film thickness is 260 nm, as measured on the back side of the wafer. As seen in Fig. 1(b), the film is noticeably thinner at the other end of the via (ca. 200 nm), as is typical for plasma-enhanced ALD processes.
II. DEVICE STRUCTURE
The aspect ratio of our TSVs is approximately eight, with a nominal TSV diameter of 60 µm and substrate thickness of 525 µm. The aspect ratio is similar to that of Refs. [21], [22]. It should be possible to increase the aspect ratio in the future since the only coating needed inside the vias is pro-duced by ALD, which is highly conformal compared to most deposition methods. Furthermore, a smaller via diameter is likely achievable with thinner wafers, even without increasing aspect ratio. Smaller-diameter vias enable increased via density and are likely to lead to increased mechanical robustness of the metal membranes covering the vias. Increased via density is likely to be beneficial in the future, when footprint per qubit decreases below current typical values of roughly 0.5 mm 2 . Increased mechanical robustness on the other hand improves post-processability. Currently, the membranes survive typical wafer level handling and processing, but the suspended parts of the membranes are susceptible to being cleaved off when the wafers are diced into chips. The suspended part of the membrane is inconsequential from the point of view of electrical connectivity, so this is not a significant issue for the resonator samples characterized here, but the fragility of the membranes may be an inconvenience in some applications requiring post processing on diced chips. Optimizing the stress of the membrane layer to optimally pretension the membrane could be another future path toward improving mechanical robustness.
Our measurements and results focus on CPW resonators patterned on chips with TSVs, to demonstrate long relaxation time in the presence of TSVs. We compare these to planar reference chips with resonators patterned on Nb, or on the same Ta-based electrode layer as on the TSV chips. The TSV chips have two to eight CPW resonators coupled to a common feedline through which transmission is measured (see Fig. 1). Each resonator acts as a bandstop filter at each of its resonance frequencies, and thus provides a sensitive probe of microwave loss at those frequencies, assuming the internal quality factor Q i and coupling quality factor Q c are of similar order of magnitude. Here, the resonators are open near the feedline and shorted at the opposite end, with geometry chosen such that the fundamental λ/4 resonance frequency varies between 4 GHz and 8 GHz and the coupling quality factor between 2 × 10 4 and 7 × 10 6 . The width of the CPW center trace is 20 µm and the gap between the center and ground is 10 µm, as in Ref. [25]. Overetching past the metal layer is small, less than roughly 50 nm, and no additional trenching is applied. The exact dimensions of the CPW cross-section play a significant role when making direct quantitative comparisons since, in extremely low-loss resonators, losses are generally dominated by material imperfections in thin interface layers between different materials, and the participation factors of different interfaces are somewhat geometry dependent [23], [35]- [37].
In terms of density and role of TSVs, we use four types of layouts: (1) no TSVs and no ground plane on the back, used as reference, (2) sparse TSVs stitching the ground planes on the front and back, (3) dense TSVs stiching the ground planes, and (4) sparse TSVs stiching the grounds and TSVs terminating the resonators to the ground plane on the back. As shown in Fig. 1(d), the sparse TSV design (2) has a spacing of 0.5 to 2 mm between the TSVs in areas near the resonators, with each resonator having one to three stitch vias at a distance of 150 to 300 µm from the center trace. This design aims to minimize the currents and electric fields induced in the vias when the resonators are excited, while still providing sufficiently dense via stitching for the ground planes to increase the frequency of the parasitic chip modes above the measurement band. The dense TSV design (3) TSVs an integral part of the resonator, with the termination consisting of one TSV for the center conductor and three or four TSVs for the ground. On the back side of the chip, the ends of the terminating TSVs are approximately at a voltage node of the resonator, but the non-negligible physical and electrical length of the vias implies that the ends of the vias on the front side are roughly 0.02λ away from the voltage node. Figure 2(a) shows that the best resonator chips with sparse TSVs stitching the front and back ground planes reach internal quality factors exceeding 10 6 at single-photon powers circulating in the resonator. The mean photon number in the resonator is nearly linearly proportional to the input probe power. In panel (a), we show the power dependence for a few exemplary resonators, and panels (b,c) show the lowpower Q i for all resonators on the measured chips. Details of the measurement setup, samples and data analysis are given in Appendices A and B. The resonators with sparse stitch vias perform approximately identically to planar reference resonators fabricated on either the same tantalum-based elec-VOLUME -, - . Furthermore, the resonator performance is similar to other reported results for silicon substrates [13], [14], [23]- [25], and suggests that transmon-type qubits patterned on the same electrode layer can achieve state-of-the-art coherence. This is the main result reported in this article as it demonstrates that none of the processing steps required to form the TSVs is fundamentally detrimental to the coherence. Even though the TSVs are not strongly coupled to the most sensitive long-coherence elements on the chip, the fact that via stiching of the ground planes can be compatible with high-coherence qubits is an important advancement in itself, as it allows physically larger quantum processor chips.
III. QUALITY FACTOR MEASUREMENTS
We draw this conclusion by comparing the best TSV chips to the best reference chips, which are on par. However, certain uniformity and yield issues remain to be solved. This can be seen in the histogram in Fig. 2(b) showing a small fraction of outlier resonators with anomalously low quality factors, which are relatively power independent. We occasionally find such outliers also in both Ta and Nb based planar reference devices. Furthermore, resonators near the edges of the 150 mm wafers show Q i below 10 5 , even for the sparse TSV test design. In this article, we exclude the edge chips, as wafer-level uniformity of the process has not yet been a development priority and we assume that it can be improved in the future.
We observe somewhat decreased Q i in chips with dense stich vias [ Fig. 2(a)]. Furthermore, in resonators terminated with TSVs, the internal quality factors are drastically lower, ranging from Q i less than 10 4 to 2 × 10 5 , as shown in Fig. 3. The line shapes of TSV-terminated resonators also generally become asymmetric at just 10 3 to 10 5 photons, after which the model used to extract Q i from the response [38] no longer fits well. In resonators without TSV terminations, we observe such a threshold only above powers corresponding to over 10 7 photons. Furthermore, TSV-terminated resonators show essentially no power dependence of Q i , until the nonlinearity leading to asymmetric response becomes significant.
These observations suggest that, unlike in the resonators with Q i in the range of a million, two-level systems (TLSs) in thin dielectric interface layers are not a significant loss mechanism for the TSV-terminated resonators, as they would be expected to lead to quality factors increasing with photon number. It is instead possible that the ALD titanium nitride film on the inner walls of the hollow TSVs contains weak spots with suppressed superconductivity. This should lead to rapid decrease of quality factor at powers where the current through the termination becomes comparable to the critical current of the weak spot. We estimate that, at the threshold power for asymmetric response, the current through the TSV terminations is on the order of 10 microamperes (see Appendix B). Such stochastically occurring weak spots could explain the large variation in Q i , as well. It is also possible that resistive losses occur at the interface between the ALD titanium nitride and the sputtered electrode layer. Nevertheless, the best TSV-terminated resonators perform as well as the TSV-interrupted resonators in Ref. [22].
The resonance frequencies f r of the TSV-terminated resonators are consistent with those of other resonators, after accounting for approximately 650 µm of CPW-equivalent length added by the TSV terminations. The added length is qualitatively consistent with a wafer thickness of 525 ± 25 µm and a higher effective dielectric constant within the TSV termination, as compared to the CPW part. To demonstrate this, Fig. 3(b) shows inverse resonance frequency versus resonator length l design for different resonator types, as well as fits to where the speed of light in the CPW 1/ √ µ and extra CPWequivalent length l extra are fit parameters. Neglecting kinetic inductance and film thickness [39], we expect µ = 6.23µ 0 0 for our CPWs, assuming r = 11.45 for the permittivity of silicon [40]. For chips with stich vias only, as well as for planar reference chips, the measured frequencies of individual resonators deviate from the linear fit by less than 0.05%. For TSV-terminated resonators, however, the scatter is relatively large, showing an average deviation of 2% even within a single chip. The variation in the resonance frequencies from the prediction of the linear fit is not explained by variation in the termination design [three vs four grounding TSVs surrounding the via terminating the center conductor, see based electrode layer and therefore high compatibility with typical Nb-based designs for superconducting qubits. Figure 4 shows that a chip with sparse TSV stitching continues to show internal quality factors of 10 6 even after the chip is left at room temperature and atmospheric pressure for two weeks after the intial measurements. This is consistent with our observation (not shown) that planar reference samples with the same tantalum-based electrode layer are also stable in time.
Power dependence of Q i is commonly used to estimate the contribution of TLS losses, as most other loss mechanisms are expected to be independent of power at these powers. All of the resonator types in Fig. 2(a) show relatively weak power dependence and lack clear saturation of Q i at high powers, making accurate estimation a challenge. Here, we use the common simplistic approach of defining the total TLS loss is of the same order of magnitude as best reported results for planar devices [23].
IV. DC CHARACTERIZATION
The superconducting transition temperature of the tantalumbased electrode layer is slightly above 4 K [ Fig. 5(a)], in line with a literature value of 4.46 K for high-purity bulk tantalum [41]. On the planar back side of the wafer, the ALD titanium nitride coating has a transition temperature in excess of approximately 2 K, which is typical for highly disordered titanium nitride deposited by ALD. The critical temperature is significantly below highest values achieved with ALD or other methods (4.5 to 5.4 K) [42] but easily satisfies the basic requirement of effectively suppressing thermal quasiparticles in superconducting qubit applications, which operate around 10 mK to 30 mK. The critical temperatures and currents were measured with standard lock-in techniques in a four-probe configuration (see Appendix B for details).
The transition of titanium nitride to the superconducting state is significantly broadened toward lower temperatures when measured through a TSV, as shown in Fig. 5(a). This is qualitatively similar to the broad transition measured through TSVs lined with titanium nitride by Mallek et al. [22]. Furthermore, Fig. 5(b) shows that the via switches from the superconducting state to the normal state gradually, in multiple steps from tens to hundreds of µA, and the lowest switching current varies from a few microamperes to over 100 µA [ Fig. 5(c)]. Both the broad superconducting transition over temperature and observation of multiple critical currents are consistent with the existence of weak spots in the titanium nitride lining inside the via. The weakest spot determines the lowest switching current and, due to Joule heating, also likely limits the highest observed switching currents to only hundreds of µA, which correspond to only a few kA/cm 2 of nominal current density. The measured critical currents are also of similar magnitude as the current through the TSVs at the threshold power where the lineshapes of TSV-terminated resonators become asymmetric (Fig. 3).
V. CONCLUSIONS
In conclusion, we successfully fabricated and characterized qubit-compatible microwave resonators on silicon wafers with TSVs stitching the front and back ground planes. The measured resonator internal quality factors improve over previous results [22] by nearly an order of magnitude and are on par with fully planar resonator results, despite the added complexity of fabrication. The resonator performance provides strong evidence that state-of-the-art qubit coherence times would likely be reached if the same process were used for transmon-type qubits, with Josephson junctions VOLUME -, -post-processed on the samples using established evaporationbased methods. Stitching the ground planes with TSVs is an important technique for controlling parasitic microwave modes within the silicon chip. Without TSVs or other methods for controlling them, the parasitic modes limit the physical size of transmon-based quantum processor chips to the range of two centimeters. Qubits fabricated on sapphire substrates have shown even better performance but there is no clear path to fabricating qubit-compatible high-aspect-ratio vias on sapphire substrates. Critical currents of the TSVs demonstrated here leave room for future improvement. The low switching currents in dc measurements, the existence of outliers in the microwave measurements, and the dramatically lower performance of TSV-terminated resonators all hint in the direction of weak spots in the titanium nitride film inside the vias. This may be due to roughness of the via walls or due to imperfect conformality of the plasma-enhanced ALD process, which could lead to variation in film quality and weaker superconductivity at the far end of the via. Alternatively, the losses may be due to poor contact between the ALD titanium nitride and the sputtered titanium nitride in the electrode layer. These potential issues can be improved without drastic changes to the TSV structure. Together with additional patterning of the back side metallization, improving critical currents to the mA range would make the TSVs applicable to flux line routing. The low critical currents observed here limit the applications to grounding, charge excitation lines, and readout lines. Other possible future improvements include increasing the aspect ratio of the vias or reducing the thickness of the wafers, which would both lead to smaller diameter vias. Smaller diameter vias increase integration density and would improve the mechanical stability of the membranes covering the TSVs. .
APPENDIX A RESONATOR DESIGNS AND SAMPLES
The results presented in this manuscript are obtained from measurements of over 100 resonators on 15 chips in several cooldowns. The measured resonators are λ/4 coplanar waveguide resonators in a hanger-type configuration [43]. Transmission measurements in this configuration lead to a Lorentzian dip in the response and allow taking cable losses and impedance mismatch into account by normalizations of the measurement data with well established models [23], [38], [44]. Each chip hosts up to 10 resonators with differing lengths. The resonators are either capacitively coupled to a common microwave feedline on one end and shorted to ground at the opposite one, or inductively coupled to the feedline with the opposite end left open. All designs have a 20 µm wide CPW center trace and 10 µm gap between the center pin and ground electrodes, and incorporate a square grid of flux trapping holes in the ground planes. The measured designs differ in the presence and role of TSVs and backside metalization, as well as the precise lengths of the resonators and couplings to the feedline. The design parameters along with the coupling quality factors Q c obtained from fitting the data are summarized in Table 1. The resonator lengths range between 4 mm and 7 mm for all designs.
In Table 2, we list the resonator chips and their measurement configurations. All resonances on all the listed chips are included in the Q i histograms shown in Figs. 2 of the main text. Figure 4 shows measurements of chip S2 in two cooldowns. We have excluded measurements performed without magnetic shielding as well as chips from the edges of the wafers from the manuscript. In Fig. 3, we include only those resonances where the resonator fitting (described below) produced a reliable result.
APPENDIX B MEASUREMENTS AND DATA ANALYSIS 1) DC measurements
For the dc characterization results shown in Fig. 5, test chips were glued to an insulating sapphire chip which was in turn mounted with vacuum grease to a copper sample holder thermally anchored to the mixing chamber stage of a dilution refrigerator. The critical temperatures of the tantalum-based electrode layer and ALD titanium nitride were measured in standard four-probe configurations with a lock-in amplifier. The measurement configuration used in the measurements of individual TSVs is schematically shown in Fig. 6. For the critical current measurements shown in Fig. 5(c), the refrigerator temperature was stabilized to approximately 100 mK with a PID controller, as the dissipation from the TSVs in the normal state is significant compared to the cooling power of the refrigerator.
2) Resonator measurements
For the resonator measurements, we wire bond chips to sample holders machined from copper or gold-plated copper thermally anchored to the mixing chamber plate of a dilution refrigerator. The sample holders are mounted inside magnetic shields consisting of a mu-metal shield and a su-6 VOLUME -, - perconducting aluminum tube. In most of the measurements, the magnetic shields are mounted inside a radiation shield thermalized to the mixing chamber flange, but we have not observed significant differences between samples measured inside or outside the mixing chamber shield. For most of the measurements, the base temperature of the refrigerator was below 15 mK, with exceptions indicated in Table 2.
The measurement setup used is schematically depicted in Fig. 7. The probe signal is generated by a vector network analyser (VNA) at room temperature. The VNA output is connected to one of two attenuated and filtered coaxial lines used for either transmission or reflection measurements. In the transmission configuration, the attenuated signal is connected to the input port of the device under test (DUT), while the reflection measurement line is connected to the other port of the DUT with a circulator. However, in this work we only present transmission measurements. To estimate the power reaching the DUT, we have measured the transmission through two identically attenuated coaxial lines connected in series at the base temperature of the refrigerator. We use this reference data as well as datasheet values for the frequencydependent attenuation of room temperature cables and components to calculate the power reaching the DUT. Slight differences in the attenuation or filtering between cooldowns are indicated in Table 2.
We use a pair of microwave switches at the mixing chamber to allow characterizing up to five DUTs in a single cooldown, as well as a coaxial cable that can be used as a transmission reference. The transmitted signal is amplified with a near quantum limited three-wave mixing travelling wave parametric amplifier (TWPA) at the mixing chamber stage and high-electron-mobility transistor (HEMT) amplifiers at 4 K and at room temperature. The pump tone for the TWPA with frequency typically close to 14 GHz is combined with the signal from the DUT with a diplexer VOLUME -, - and filtered again from the signal after the TWPA stage to avoid saturation of the following amplifiers, while several isolators provide isolation between the DUT and the TWPA and TWPA and HEMT amplifiers, respectively. While the TWPA decreases the measurement time required for accurate characterization at single-photon powers circulating in the resonators, the largest probe signals would saturate it. We thus turn the pump tone off at high probe powers and verify that the internal quality factors extracted at intermediate powers are the same with the pump tone on and off.
3) Extraction of the internal quality factor
In this manuscript, we have used the open source fitting routine of Ref. [38] to extract the quality factors from the measured data. For resonators in the hanger configuration, the model for the transmitted signal S 21 (f ) reads The term in front of the brackets covers contributions from the measurement environment where a describes the baseline level of transmission, α the phase shift and τ the electrical delay across the measurement line. f is the probe and f r the resonance frequency. The model is based on the diameter correction method (DCM) of Ref. [44] where the complex coupling quality factor with magnitude |Q e | and rotation angle −φ accounts for asymmetries in the Lorentzian shape, e.g., due to impedance mismatch between the resonator and feedline, or the feedline and the measurement environment [44], [45]. The loaded quality factor Q is then given by where and Q i is the internal quality factor. Following the circuit analysis of Refs. [46], [47], the root mean square voltage V r of the standing wave inside a hanger type CPW resonator at resonance is given by with the characteristic impedance Z r of the resonator's CPW. λ is the wavelength at resonance, l is the length of the resonator and P dev is the power entering the DUT. This equation is valid for both quarter-and half-wave resonators and their higher frequency modes as well. The average energy E inside such CPW resonator is given by Thus with Eqs. (6) and (7), we calculate the average number of photons circulating in the resonator in accordance with Ref. [48] as where h is Planck's constant. We estimate P dev from the output power at the room-temperature generator as described in section B-2 above. Note that Eq. (8) is valid for any resonator in hanger configuration. However, in a one-port reflection measurement, the right hand side of the equation would be multiplied by a factor of 2. In the Q i histograms of Figs. 2 and 4 of the main text, we show for each resonator the mean Q i from all measurements with n ph < 5.
The TSV terminations of the devices shown in Fig. 3 of the main text represent the shorted end of the quarter-wave resonators and are therefore located at the current maxima of the standing waves. Thus for l = λ/4, the maximum current flowing through the TSV terminations can be estimated from with Z r = 50 Ω.
ACKNOWLEDGMENT
We acknowledge Jan Toivonen, Harri Pohjonen, Ville Selinmaa, Paula Holmlund, and Jaana Marles for technical assistance. This work was financially supported by OpenSuperQ, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 820363. The work at VTT was supported by the Quantum Computer Codevelopment project funded by the Finnish government, and performed as part of the Academy of Finland Centre of Excellence program (projects 336817, 312059 and 312294). We also acknowledge financial support from the European Commission H2020 project EFINED (grant agreement no. 766853). The Chalmers work was in part supported by the Wallenberg Center for Quantum Technology, and performed at Myfab Chalmers. V. Vesterinen acknowledges financial support from the Academy of Finland through Grant No. 321700. | 7,156.8 | 2022-01-25T00:00:00.000 | [
"Physics",
"Engineering"
] |
Modelling producer behaviour in fixed route private transport services
Objectives : While fixed route private transport services have grown in prominence in the recent times, there is a dearth of models that specifically tackle pricing in it. The current study aims to model producer behaviour in this model, keeping in mind its peculiar physical characteristics. Methods : This study develops a rational-actor model of the behaviour of producers operating in this market. There is however, an added assumption of the heuristic of least perceptible difference to add behavioural realism to the model. Results : The predictions derived from the model developed in this study include repeated usage of a single type of round-trip for a non-zero interval of time, the convexity of expected-waiting time with respect to changes in prices and a negative relation between external (exogenous) demand at one-point of a path and the price charged at the other. Conclusion : Pricing in this market, due to physical factors, can exhibit unique features as modelled.
Introduction
Transport economics has developed into a separate sub-field of economics largely due to the peculiar nature of operation in the sector, including unique problems such as congestion and timing issues. Several industries within transportation also warrant special models. In this study, by the 'Private Passenger Road Transport Industry' , the author refers to a wide range of operators of private commercial vehicles, who provide road travel service directly to consumers rather than to firms for shipment. This may include e-rickshaws, cycle-rickshaws, auto-rickshaws etc. The model with little modification can also be used for ferries etc., however.
The empirical importance of the private passenger road transport industry is well documented in the literature. Estimates by Singh (1) claim, that the share of private and para-transport in mobility in India has risen from 16.2% in 1990-1991 to 21.2% in 2000-2001. Malik et al. (2) argue that in the next 3-4 years following 2018, e-rickshaws would generate "entrepreneurial opportunities" for nearly 5 lakh people in Delhi NCR alone. This growth, however, is largely informal. While attributing the rapid expansion of the industry to the fast pace of GDP growth in India and China, Pucher et al. (3) also highlight that the industry is under-regulated in India. In a more recent study by Majumdar and Jash in 2015 (4) , it has been observed that while the number of e-rickshaws is rising, their operation is not properly regularised. From the empirical literature, it can be concluded that immediate policy attention to the industry is of crucial importance because of both its growing size and opportunities and its unmonitored and under-regulated nature and quality, possible lack of safety etc. Studies to gauge the effects of and response to various policies are, however, largely missing.
The drastic effect that such vehicles have had on commuting in central business cities, developed and developing cities was highlighted also in Bagul et. al. in 2018 (5) . It was also suggested that these services should be integrated with public transport. An additional fact worth highlighting at this point is their finding that overall, 40% of the total commuting populations of daily users for fixed routes in their study used autorickshaws.
Other important developments include the increasing popularity of centralised transport-service platforms such as Ola and Uber, especially in large cities, as noted by Rupali and Chincholkar (6) . However, the nature of travel that these services provide, that is of choosing the pick-up and drop locations make them not "fixed-route" services. Furthermore, the pricing considerations are not of the driver, but of a centralised platform, making it deviate from the focus of this study. This is an important distinction to make, which without the mention of these services may have led to confusion. Similarly, other developments such as Intelligent Transportation Systems (ITS) are gaining popularity in the developed countries as noted by Singh and Gupta (7) . Such services however tend to be provided by centralised providers again, and need not necessarily operate in the fixed-route system, while possible. The focus of this study is instead on decentralised and largely independent providers of transport services such as e-rickshaws, auto-rickshaws etc. whose importance has already been outlined above. Recent theoretical and model-based works in transport-modelling include Lin, Chen and Song (8) and Grooves and Gini (9) who studied pricing in the air-transport industry, following the classic work in railways by Meyer and Morton (10) . Effect of delays on fares and the use of data to optimise fares were topics discussed. Public transport, such as buses, railways and metros also find a place in the discussion, such as in Allport (11) and Janson (12) . They discuss optimal timing and routes for the buses and rails and congestion respectively.
Certain models focussing on the interactions between producers and consumers have been made, but they include very longdistance travel. Ivaldi et. al. (13) for instance, developed such a model to study inter-city travel. They also attempted to gauge the effect of infrastructural interventions on the market equilibrium, along with other policy measures. Many other theoretically innovative approaches have been adopted to study structural problems. Garcia and Marin (14) studied the impact of parking space allocation and design on transport markets. Theoretically interesting explorations have had a further impact on research in this field. Banerjee (15) found the remarkable fact, that despite no prior knowledge of Rabin's fairness axioms, New Delhi authorities have often proposed policies that satisfy them for instance.
A theoretical model studying how sellers behave in the private passenger road transport industry as defined in this paper is, however, difficult to find. The papers mentioned above diverge in their discussion from the behaviour of service providers in the said industry or are too generic to be directly applied to the industry. The decision-making by the sellers is also largely un-discussed.
Description of physical features of the problem
The private passenger road transport industry as defined in this paper faces a physical reality peculiar in certain respects. The operators usually operate on a fixed route, say from A to B and B to A. The demand is concentrated in the two end-points A and B. While this can be identified as a segmented market, it is important to note that the seller here cannot provide his or her services simultaneously in both the 'segments' unlike what is assumed in many segmented market models. This is plainly because of the physical condition that the seller can be travelling either A to B or B to A but not both at any particular point in time. This setting thus demands a separate model of its own to be analysed. The paper thus develops a model to analyse the behaviour of the sellers in this industry.
The model
The model to be developed assumes a fixed path of operation AB where the journey is made to and fro. While this path may be thought of, perhaps as the optimal path of operation, the model does not require it to be so and is not concerned with its determination, it assumes it to be given. The demand is concentrated at the two endpoints and is partly exogenously determined and partly affected endogenously. The following are the parameters and variables of the model: t: The time required to travel from A to B or B to A. T: The total time of operation where the exogenous demand is perceived by the seller to be fixed. We also define the following variables: The time required to wait at A, then have a passenger and travel to B. τ 2 = t + t 2 : The time required to wait at B, the have a passenger and travel to A. We assume all the variables and parameters mentioned except p1 and p2, to be strictly positive. p1 and p2 are assumed to be non-negative.
We assume the following relations: Note, that T is strictly positive, which implies that the seller perceives external demand to be stagnant for non-zero intervals of time. While in reality, exogenous demand may be continuously changing, we introduce this postulate to argue that the seller 'aggregates' demand behaviour across non-zero time intervals. Rather than reacting to changes in demand over every moment in time, he or she may divide a period, say the workday into finite sub-periods with fixed demands. For example, the seller may perceive the workday to consist of three sub-periods, the morning period with average demand x 1 , the afternoon period with average demand x 2 and the evening period with demand x 3 . Alternatively, the seller may divide the day into a 'peak period' (with a higher expected demand) and a 'low demand period' or view the entire day as a single period with fixed exogenous demand too. The model does not specify the exact length of a period and thus, does not impose any restrictions on the number and timings of the sub-periods. It does, however, assert that such a division happens. This may deviate from purely rational behaviour, and signify the use of a 'heuristic' to base seller behaviour, making assessments easier for the seller. Also, note that the last inequality relations assumed are regarding demand behaviour. We expect a rise in prices to reduce demand and thus increase the time the seller must wait to get a customer. The relation between exogenous demand and waiting time can be interpreted similarly, a rise in exogenous demand means the seller must wait for a lesser amount of time to get a customer. This can also be thought of as a consequence of the way the exogenous demand variables are defined-such that raising them reduces expected waiting times. Also note, that by 'passenger' , we do not mean one single physical passenger-it means the number of passengers the seller is willing to or allowed to carry. For example, for an e-rickshaw, four physical passengers can count as one passenger in this paper's sense, because the seller would wait until four passengers have come before leaving. In a pandemic, if the mandated number is reduced to, say 2, then a passenger in this paper's sense would mean 2 physical passengers.
T T (pe) = τ 1 + t https: The total time required to complete all round trips is, therefore, The seller earns the following profits for each type of round trip respectively: The total profit in the period is
Equilibrium
We solve the seller's choice problem in a two-step process. First, we find the optimal values of n (pp) , n (pe) , n(ep) and n(ee) given the prices p 1 and p 2 . This will enable us to evaluate the highest possible profit for each price pair. We then choose the prices p 1 and p 2 that yield the largest optimal profit.
Step 1: Note that, once we fix the values of p 1 and p 2 , because the cost c is given, the values of π (pp) , π (pe) , π(ep) and π(ee) also get fixed.
Since these are the coefficients of the choice variables n (pp) , n (pe) , n(ep) and n(ee) in the objective function π, the objective function essentially becomes linear.
Also note that the coefficients of the choice variables n (pp) , n (pe) , n(ep) and n (ee), that are T T (pp) , T T (pe) , T T (pe) and T T (ee) are also constants, because the times required are given. The constraint is also linear then.
The choice problem, when the prices are fixed thus becomes a linear programming problem with the given objective and constraint. The only possible solutions are the 5 corner solutions corresponding to the 4 choice variables and the origin. If (n (pp) , n (pe) , n (pe) , n (ee)) are points in the choice space when prices are fixed, the following are the only possible solutions: https://ijed.in/ Note that in the last corner point, because T, t and c are all strictly positive, the profit T 2t (−2c) is strictly negative, that is strictly less than 0, the profit in the first corner point. Therefore this cannot be the optimum for any values of the parameters. We are left with the first four corner points only. Which of these is the optimal point would depend on the specific values of the prices and times. Thus, the following is the optimal profit for any price pair p 1 , p 2 .
While this can be treated as the objective function with only p 1 and p 2 as the choice variables, we note that because T is a non-zero constant, maximising it is equivalent to maximising the following function with the same choice variables: Here the operator Max refers to 'the maximum of ' . The results so far are intuitively plausible. Given the external demand and prices, it is optimal to repeat only one type of round trip throughout the period T (note that in all the corner points except the origin, only one type of round trip has a positive number associated with it) or to not operate at all (the origin). Also, ee is never gone for in equilibrium. This is because it yields a negative profit of −2c T 2t , while it has a positive time cost. Once the type of round trip is determined, the total profit for the period T is just the profit for one such round trip multiplied by the total possible number of round trips T T T (x) π(x). In step 2, we shall maximise the obtained objective function with the prices as the choice variables.
The seller's choice problem becomes
Maximise by choosing p 1 and p 2
Max ( 0, First note that, because the number of each round trip has already been determined through linear-programming (that is, they are built into the current objective function), the only choice variables remaining are the prices. The only constraint on the prices is that they are non-negative.
Step 2: In this step, we seek to determine the first and second-order conditions for optima. Because, depending on the values of the prices and times, the optimal corner solution in the preceding linear programming problem may differ, with the four possibilities mentioned, we shall find the conditions for optima in all four cases.
Case 1 (The optimal point is (0, 0, 0, 0)): In this case, the profit is identically equal to zero. Therefore, all values of p 1 and p 2 are compatible with the optimum. Empirically, the seller does not offer any services in this case, i.e. does not make any round trips at all.
Case 2 (The optimal point is ( T τ 1 +τ 2 , 0, 0, 0)): In this case, only type pp round trips are made. The value of the objective function here is https://ijed.in/ Case 4 (The optimal point is (0, 0, T τ 2 +t , 0): In this case, the seller would only go for type ep round trips. The value of the objective function here is This is a function only of p 2 because the seller does not carry passengers from B to A, but only from A to B. The first and second-order derivatives are The first and second-order conditions are respectively In cases 3 and 4, only the equilibrium conditions for p 1 and p 2 are determined respectively. This is because the seller does not carry passengers from B to A and A to B respectively in the cases. In all the four cases, the model demands or predicts that if the seller does carry passenger from a point to another, raising the price at the point of initiation must increase the waiting time of the seller at that point at an increasing rate, as otherwise, the seller fails to attain equilibrium. While this is a prediction about demand behaviour, this demand behaviour is seen at equilibrium not because of some demand-side phenomenon, but because the seller actively chooses prices such that this condition is met, as we have derived this by optimising the seller's payoff and not the consumer's payoff.
An application
We have already discussed a prediction of the model on demand behaviour at equilibrium. We here try to derive a prediction about producer behaviour explicitly. Specifically, we try to find out the effect of a marginal change in exogenous demand at point B on the price charged at point A. This exercise makes empirical sense only for cases 2 and 3, because in the other two cases, services are not being offered at point A at all. We examine case 2 here.
For case 2, the first-order conditions are These conditions can be rearranged to yield https://ijed.in/ That is, at equilibrium, a marginal change in prices leads to an equal marginal change in the respective waiting times at equilibrium. The first condition for optima can be differentiated with respect to d 2 , keeping in mind that a change in d 2 does not affect τ 1 (because the demands at both the points are independent) to get: Using the fact that ∂ τ 1 , from equation (37), this result can be simplified to We know, (p 1 + p 2 − 2c) ≥ 0 because otherwise, the case 1 corner point would yield a strictly higher profit, also ∂ 2 τ 1 ∂ p 2 1 > 0 by the second-order condition. Finally, ∂ τ 2 ∂ d 2 < 0 (from inequality (2)).
By all of these, the sign of ∂ p 1 ∂ d 2 is unambiguously negative, and if (p 1 + p 2 − 2c) > 0. That is, a marginal increase in exogenous demand at point B leads to a decline in prices at point A. It can also be shown that a change in exogenous demand at point A has a similar effect on the price charged at point B. Combining these facts, it can be stated that a marginal increase in exogenous demand at a point results in a decline in the price charged at the opposite point.
This is a strong result. The price falls despite a rise in the total demand in the system. It can also be shown that for case 3, there is no effect of change in exogenous demand at point B on the price charged at point A. This is understandable because in this case, the seller does not offer any service at point B and is therefore not affected by marginal changes in exogenous demand there. Similarly, for case 4, there is no effect of a marginal change in demand at A on the price charged at point B.
Conclusion
While the private passenger road transport industry is expanding, the industry is still under-regulated and under-monitored. Despite the abundance of empirical literature ascertaining this fact, there was a lack of theoretical models to understand decision making in this industry to guide policy. The model develop seeks to fill this gap.
The model, by and large, a rational-actor model also leveraged the behavioural reality of heuristics to suit reality better. It has led to interesting predictions. Firstly, despite being a model of producer behaviour, the model yields predictions regarding demand characteristics at equilibrium. Expected waiting times are a convex function of prices. Additionally, the change in waiting times due to a marginal change in price at the respective locations would be equal at both the endpoints of the path of operation. Secondly, the model yields a very interesting and strong prediction regarding the pricing behaviour of a provider of transport services in this industry. A marginal rise in exogenous demand at a point would lead to a decline in price at the opposite end whenever service is provided at both the ends.
Thirdly, owing to the postulate that the seller behaves as if exogenous demand is constant for non-zero length intervals of time, we shall observe only one kind of round trip being repeated over an extended period.
The predictions derived in this study display the empirical meaningfulness of the model. The model restricts both demand and supply behaviour at equilibrium and can thus be tested empirically which can either support or oppose the validity of the model. This yields Popperian scientific content for the model. Empirical specification in future research may prove fruitful too. The model can then be used to anticipate and predict the response of the industry to change in policy.
The model must, however, still be used with caution and be properly tested. The heuristic postulated is untested in this industry. Also, the price making feature may not work due to nuances not included in the model. Future empirical research may augment the model to fill this gap.
The model, however, definitely serves as a strong tool to both guide policy and understand producer behaviour in this structurally and physically peculiar industry. This provides both empirical and theoretical relevance and importance to the model developed in this paper. https://ijed.in/ | 4,898.6 | 2020-12-09T00:00:00.000 | [
"Economics"
] |
Board Structure, CEO Equity-Based Compensation, and Financial Performance: Evidence from MENA Countries
: This paper investigates the association between board of director (BOD) structures and CEO equity-based compensation (long-term incentive) for commercial banks (conventional and Islamic banks) in MENA countries. Specifically, we take board size and board independence to measure the board structure. Furthermore, we investigate the influence of board structure on the association between CEO equity-based compensation and financial performance. Moreover, we compare conventional and Islamic banks in testing these relationships. Using a sample of 65 banks in MENA countries for the period between 2009 and 2020, we show a significant positive association between board size and CEO compensation. However, we find the same association between these variables for IBs, but the effect of board size on CEO compensation is less. We also show that board independence is negatively correlated with CEO compensation. Nevertheless, the relationship between board independence and CEO ownership is positive for IBs. For the moderating test, we find that effective board structure provides more incentives to the CEO, leading them to achieve higher financial performance. The Islamic bank’s business model (based on Shari’ah principles) contributes to the different influences of board structure on CEO compensation. Our results provide the insight that a strong and effective board is important for managing the executive’s compensation system. The findings of this study have implications for financial firms, policymakers, and regulators. Specifically, the study may help in understanding the benefits of different compensation structures relative to different types of financial firms.
Introduction
Over the last several decades, corporate governance has become increasingly important around the world.More and more countries have adopted corporate governance codes and principles for achieving best practices.Maher and Andersson (2000) stated that effective corporate governance improves the efficiency, competitive advantage, and effectiveness of companies.The significance of corporate governance mechanisms lies in the fact that they help to ensure that management acts in the best interests of all stakeholders, and they give investors greater confidence by encouraging both transparency and accountability (Mallin 2007).In addition, it has been shown that effective corporate governance can prevent the occurrence of undesired events that derail the implementation of imperative programmes, and it can inculcate a culture of integrity and mitigate an organisation's risks (John et al. 2008) and enhance financial performance (Al-Matari 2022).
One of the most vital mechanisms that corporate governance manages is a CEO's compensation.According to Conyon and He (2012), the agency theory assumes that CEO compensation is related to performance in order to resolve the moral hazard problems linked to the asymmetric information between managers and owners.CEOs' compensation has been a growing and important area of research in recent years, especially in emerging markets such as Saudi Arabia, Egypt, and Jordan.In the most modern corporations, especially those in the United States, CEO compensation is a very complex and contentious subject, and it is determined by a board of directors via the compensation committee (Frydman and Jenter 2010).The recent attraction to executive compensation topics is a result of the universal economic recession and the growing interests in corporate governance over the recent decade (Alfawareh et al. 2023;Deysel and Kruger 2015).CEOs' compensation refers to the economic reward given to CEOs, and it is generally measured by basic pay, bonuses, and stocks (Shah et al. 2009).Solomon (2007) states that the board of directors is similar to a heart that needs to be correctly fitted in order to carry out the critical duties of advising and monitoring top management (Coles et al. 2008).Jamali and Mirshak (2007) stated that the corporate governance mechanism depends on the board of directors, since a board's effectiveness has been the essential focus of recent attention.The main role of the board of directors is to oversee management decisions and control and lead their companies so that they are successful (Mallin 2007).A high-performance board must achieve these objectives: introduce strategic themes to assure the firm's growth, assure accountability for the firm, assist to bring about prosperity, and assure that a highly qualified top management team is managing the firm (Andoh et al. 2023;Epstein and Roy 2006).Nguyen and Vo (2020) report that effective corporate governance can enhance a bank's efficiency.However, prior studies argue for two types of compensation; these comprise non-based equity compensation (Ozdemir and Upneja 2012) and equity-based compensation (Li and Kuo 2017).Nevertheless, the previous literature argues that CEO equity-based compensation provides managers with high-powered incentive (e.g., Conyon and He 2012;Li and Kuo 2017).As a result, this study focuses on such a compensation structure.
However, most of the previous literature has examined the association among the board of directors and CEO compensation for only non-financial firms in developed countries.Banks have been ignored in the investigations of this issue.Due to the important role that financial firms play in the economy and their complex business models, it is important to investigate this relationship using a sample of financial firms.Furthermore, emerging countries have different corporate governance characteristics than developed countries.For example, corporate governance codes in the Middle East and North Africa (MENA hereafter) countries rely on the "comply or explain" principle, and the percentage of independent members on boards in these countries is higher than in developed countries.Moreover, most of the previous literature only examines either total compensation (nonequity and equity compensations) or only non-equity compensation when investigating the relationship between the board of directors and CEO compensation.
This paper aims to explore the impact of board structure on CEO equity compensation as well as the influence of board structure on the association between CEO equity compensation and financial performance.By employing a sample of 65 banks in 11 MENA countries for the period between 2009 and 2020, we find a positive and significate significant relationship between a board's size and CEO equity compensation.However, this association is weaker for Islamic banks.Furthermore, we find a negative and significant association between board independence and CEO equity compensation.In contrast, board independence is positively correlated to CEO structure for Islamic banks.This could be a reason for the IBs business model's differences.Furthermore, we show that an effective board of director could provide appropriate incentive for CEOs, leading to increases in the bank's financial performance for both bank types.
We contribute to the previous literature (e.g., Core et al. 1999;Reddy et al. 2015;Sheikh et al. 2018;Vafeas 1999) by investigating the relationship between board structure and CEO equity-based compensation using a sample of commercial banks in MENA countries.Furthermore, prior studies have not investigated the influence of board structure on the association between CEO equity incentives and financial performance, which we provide in this study.In addition, contrary to previous studies, we contribute to the previous literature by comparing Islamic and conventional banks in terms of the relationship between board structure and CEO equity compensation.It is apparent that this relationship has not been explored in the financial industry.Furthermore, board structure and CEO equity compensation research in MENA countries is also limited.
The remaining sections of this paper are organised as follows: Section 2 shows the literature review and hypotheses.Section 3 presents the data and methodology.The results and the empirical analysis are shown in Section 4, and the final section presents the conclusion of this paper.
Corporate Governance
Corporate governance focuses on two significant dimensions.The first dimension concentrates on the stewardship and the accountability of corporate governance, i.e., controlling and monitoring the managers' actions and ensuring that their responsibilities are in the shareholders' interests.The second dimension concentrates on providing managers with appropriate incentive schemes in order to avoid managerial opportunism (Keasey and Wright 1993).Previous studies have argued that providing firm managers with incentive contracts helps align their interests with shareholders' interests (Alfawareh et al. 2023).Incentive contracts can be in the form of share ownership, stock options, or the threat of dismissal (Jensen and Meckling 1976;Fama 1980;Shleifer and Vishny 1997).
Board of Directors' Structure and CEO Compensation
According to the agency theory, CEOs might make decisions that serve their own interests.Ross (1973) stated that agency problems between agent and principles might be raised when the agent acts for their own interests.However, an effective board can mitigate this problem by managing the executive's compensation.Jensen and Meckling (1976) showed that executive compensation packages can mitigate the agency problem and reduce agency costs.CEOs are self-interested and might act resourcefully at the cost of shareholders' interests.Therefore, the board of directors is expected to confine and mitigate executive opportunism and align the CEOs' interests with those of shareholders using effective corporate governance mechanisms and by constructing efficient pay contracts that normally link top management executive compensation with firm performance (Sheikh et al. 2018). Nevertheless, prior studies (e.g., Holmström 1979;Shleifer and Vishny 1997;Matolcsy and Wright 2011) report that CEO behaviour and incentives towards maximising the shareholders' wealth are significantly improved if the compensation includes some longterm equity-based compensation.Specifically, shareholder wealth is increased by achieving high financial performance, which might be a major goal for CEOs if their compensation structure relies on equity-based compensation.
Given the above, researchers have discussed the significance of a board's delegation mechanism and how it influences CEO compensation.For instance, Fama (1980) and Fama and Jensen (1983) argued that board characteristics play an essential role in determining CEO compensation.These studies claimed that outside directors should make compensation decisions, as these directors do not have affiliations with the managers of the firm.That is, such directors are more able to make unbiased decisions regarding CEO quality and their efficient compensation, firing, and hiring.On the other hand, some studies argued that outside directors may be less informed or that their monitoring can be excessive (Adams and Ferreira 2007).Jensen (1993) claimed that, in US firms, CEOs may participate in nominating new directors.Such directors may feel obligated to these CEOs.
Moreover, the influence of board structure on CEO compensation has been empirically examined.For instance, Ozkan (2011) found that larger boards with higher independent proportions pay higher compensation to their CEOs.Alfawareh et al. (2023) discusses that corporate governance mechanisms have influence on CEO pay, which supports the agency theory arguments.Hallock (1997) found that, when the CEO of firm A is a director on the board of firm B, and the CEO of firm B is a director on the board of firm A (interlocking relations), both CEOs obtain high compensation.Core et al. (1999) examined the level of compensation of large US firms.They found that the level of CEO compensation was higher in the following cases: the CEO participated in nominating new directors; directors had little stake in the firm; the CEO was a board chair; the board's size was large.Along the same lines, Cyert et al. (2002) found that, when CEOs held dual roles in firms, they received higher compensation.Grinstein and Hribar (2004) examined the association between the size of the bonuses received by CEOs and their board power.They found that, when the CEO was also the board chair and was involved in the process of nominating new directors, they received a larger bonus.Cahan et al. (2005) used a sample of 80 public sector firms in New Zealand.They found a positive association between board size and CEO compensation but a negative association between board independence and CEO compensation.In addition, they found that CEO duality positively affects CEO compensation.Chhaochharia and Grinstein (2009) found that CEOs' pay was reduced by around 17% in firms with a minority of independent directors.Ozkan (2011) investigated the association between CEO pay and performance using a sample of 390 non-financial UK firms for the 1999-2005 period.The researcher found that firms with a large board size and a high proportion of independent directors pay higher compensation levels for CEOs.Similar to Ozkan (2011), Kohli (2018) emphasized that there is a significant positive relationship between board size and CEO compensation.Guthrie et al. (2012) found that board independence has no relationship with the CEO's level of pay.However, the compensation committee independence increases the CEO's pay level, but the increase only occurs when the concentration of institutional ownership is high.Reddy et al. (2015) investigated the relationship between board structure and CEO compensation in New Zealand for the 2005-2010 period.They found that board size was positively related to CEO compensation, showing that larger board size led to higher CEO remuneration.However, independent directors had no significant relationship with CEO compensation.
Utilizing a sample of Australian companies for the 2001-2011 period, Nguyen et al. (2016) found that firms with a large board size pay higher CEO compensation.Benkraiem et al. (2017) investigated the role of gender on boards and board independence in determining CEO compensation.They found that both women sitting on as board members and independent directors positively affect CEO compensation.Al-Najjar (2017) investigated the impact of board characteristics on the CEO compensation of firms listed in the Travel and Leisure sector on the FTSE 350.The researcher found that large boards pay lower CEO compensation.This could be justified, as CEOs may not be able to monitor large boards, leading to lower CEO compensation.Another study by Patnaik and Suar (2020) found that a higher number of independent directors on the board of directors who possess the necessary skills and qualifications can have positive effects with respect to CEO compensation.Nevertheless, independent directors have a positive relationship with respect to CEO compensation.Using a sample of non-financial firms listed on the Karachi Stock Exchange over the 2005-2012 period, Sheikh et al. (2018) found that neither board size nor board independence had a relationship with CEO compensation.Furthermore, Jatana (2023) found that the association between a larger proportion of independent directors and CEO compensations is positive.
Interestingly, after reviewing studies on the relationship between board structure and CEO compensation, we observe that there are conflicting results describing this relationship.Based on the arguments above, we develop the following two hypotheses: H1.There is a significant association between board size and CEO compensation for banks in MENA countries.
H2.
There is a significant association between board independence and CEO compensation for banks in MENA countries.
Islamic Governance and CEO Compensation
Unlike conventional banks (CBS), which are based on the profit-maximisation principle (Olson and Zoubi 2008), the business model of IBs relies on Shari'ah principles.Specifically, IBs must comply with Shari'ah law.Aljughaiman and Salama (2019) and Trinh et al. (2020a) argue that IBs must share profits and risks.They are not allowed to provide or receive debts with interest (riba) or engage in excessive risks, and they must prevent uncertainty (gharar) and speculation (Abadi and Silva 2020;Kettell 2011).Within this law, IBs design Shari'ah-compliant financial services and products.The existence of these principles adds to the corporate governance in IBs, as there are more norms and duties that have to be achieved and maintained.Specifically, the characteristics of IBs are therefore different from those of CBs, which might also have different roles relative to corporate governance compared to IBs.Both the International Financial Standards Board (IFSB) and prior studies have argued that IBs are subject to considerable restrictions with respect to their business models (Iqbal 2013;Safiullah and Shamsuddin 2018).
Based on the arguments above, the boards of IBs may encounter additional restrictions relative to the options they have in managing bank activities, thus reducing their ability to achieve high performance.(Aljughaiman and Salama 2019;Aljughaiman et al. 2023) argue that the boards of directors in IBs have additional responsibility in assuring banks' activities to be compliant with Shari'ah law.This responsibility may add further restrictions to the board's ability to manage risks, which in turn might lead to different risk-taking behaviours.The board of directors' decisions regarding compensation might differ from those of CBs.Chhaochharia and Grinstein (2009) argue that the board of directors might reduce the CEO's compensation when the firm encounters additional requirements.Shari'ah principles could be considered as an additional requirement that could influence the compensation policy of IBs.On the other hand, Alnasser and Muhammed (2012) and Trinh et al. (2020b) argue that the existence of IB restrictions may add constraints to managers (e.g., CEOs), which might influence their decisions.However, effective corporate governance could reduce this negative influence on the banks' decisions with respect to CEOs.In detail, good corporate governance enhances the CEO's decision making, as it provides guidance (advisory role) and a monitoring role that can improve the bank's financial performance.This in turn increases the CEO's compensation as they achieve good financial performance for the bank.Based on the arguments above, we suggest the following hypotheses: H3.There is a significant difference in the influence of board size on CEO compensation among Islamic banks and conventional banks.
H4.
There is a significant difference in the influence of board independence on CEO compensation among Islamic banks and conventional banks.
The CG, CEO Compensation, and Financial Performance
The association between the board of directors, CEO compensation, and bank performance has drawn significant attention in the field of corporate governance.The board of directors plays a crucial role in determining the compensation of the CEO.The agency theory posits that the board, as representatives of the shareholders, should design compensation packages that align the interests of the CEO with those of the shareholders (Jensen and Meckling 1976).In the context of bank performance, the board has the responsibility to determine the appropriate combination of fixed and variable pay as well as the use of long-term incentives (such as stock options) in order to align the CEO's interests with longterm financial performance and risk management (Fahlenbrach and Stulz 2011).A study by Adams and Mehran (2012) found that bank boards with more independent directors were more likely to use performance-based CEO compensation.Another study by Zoghlami (2021) investigated the effect of CEO compensation on financial performance on the French stock exchange.The author found that CEO compensation is positively associated with financial performance.In contrast, other studies have shown that excessive CEO compensation can lead to increased risk-taking and reduced bank performance (Cheng et al. 2015;Fahlenbrach and Stulz 2011).
H5.The board of directors has a significant influence on the association between CEO ownership and financial performance.
Sample
The initial sample of this study comprised 360 banks that were listed in 22 MENA countries during the 2009-2020 period.Our sample period avoids the potential effect of the recent 2007 financial crisis.The sample has been filtered based on similar criteria employed in the banking literature (see Aljughaiman and Salama 2019;Abdelsalam et al. 2016).These criteria are as follows: (a) banks' full annual reports had to be available; (b) CBs with an Islamic window and investment banks were dropped. 1 We ended up with an unbalanced panel data sample containing 65 listed banks (760 bank year observations) located in 11 MENA countries.We obtained the financial data from Bloomberg and BankScope databases.We manually collected the corporate governance-level data from the banks' annual reports, which are available on official websites.Country-level variables were obtained from the World Bank's World Development Indicators database.
Appendix A Table A1 presents the sample distributions by bank type and country with 432 observations for CBs and 328 observations for IBs.Kuwait and Bahrain have the highest number of IBs, while the highest number of CBs is concentrated in Jordan.Panel B in the same table shows the key variables and characteristics classified by country.The findings show that banks in Qatar achieve the highest financial performance compared to other banks in the sample, which achieve 2.29% on average, while the lowest financial performance is achieved by banks operating in Bahrain.Banks in Lebanon pay higher long-term compensation to their CEOs compared to banks in other countries in the sample.On the macroeconomic level, we find that Qatar's economic situation outperforms other countries in the sample, since their GDPG is 8.7 on average compared to the lowest (1.4) achieved by performance, which is exhibited by Kuwait.
Measures of Variables
Following Matolcsy and Wright (2011), we measure CEO compensation by taking the percentage of stock ownership held by a CEO.According to Kim and Lu (2011), stock ownership is a reliable proxy to measure managerial compensation.The corporate governance factor was captured through the board of directors' structure.Specifically, we take two proxies for board effectiveness, which are board size, measured by the number of directors on the board, and board independence, measured by the percentage of independent members on the board (Almulhim 2023).
We also control for a number of firm-specific and country-specific variables.At the firm characteristic level, we control for CEO tenure, which is measured by the number of years the CEO has served in this position.Hou et al. (2013) argue that long-tenured CEOs are very likely to take low equity ownership because they become less engaged in extensive information processing.We also control for institutional ownership, as a higher proportion of institutional ownership might lead to low CEO ownership.Khan et al. (2005) found that a larger percentage of owner concentration is related to a lower level of compensation.Firm size is expected to influence CEO ownership, as a larger firm provides a higher percentage of ownership to the CEO.Thus, we control for the firm size.
A firm's financial performance might affect managerial ownership, since firms tend to provide ownership to executives as an incentive to increase the returns.However, most firms set up an incentive plan for managers when they achieve bad returns.Thus, the CEO ownership could be affected (Fahlenbrach and Stulz 2011).We measure the firm's financial performance by taking the return on average assets.Furthermore, we control for investment opportunities, measured by Tobin's Q, and leverage, measured by equity to total assets.As our sample includes conventional and Islamic banks, we control for IBs using a dummy variable that takes the value of 1 if the bank is Islamic and zero otherwise.We control for country-specific variables by considering GDP growth.Also, we control for the years fixed effect.
Estimation Methods
Pooled Ordinary Least Squares (OLS) with robust standard errors is used to control for heteroscedasticity.To test for the sensitivities of the results, we use different classifications of control variables.Besides, we employ both the GMM system and a lag model to control for any potential endogeneity issues.We test our hypotheses H1 and H2 by running the following empirical model, as shown in Equation ( 1): where CEOOWNER is the CEO stock ownership of bank, BODS is the board size, BODI is the percentage of independent members, X is the matrix of the bank-level control variables, GDPGrowth is the matrix of country-level macroeconomic variables, and ε is the error term.
Regarding the hypotheses H3 and H4, we run the following empirical model, as shown in Equation ( 2): where (BODS * IB ) is the interaction term between board size and Islamic banks, and (BODI * IB ) is the interaction term between the percentage of independent members and Islamic banks.The rest of the variables are described in Equation (1).
For hypothesis H5, we run the following empirical model, as shown in Equation (3): where Performance is the bank return on average asset, (BOD * CEOOW NER) is the interaction term between board of directors index and CEO ownership, and (BOD * CEOOW NER * IB) is the interaction term between board of directors index, CEO ownership, and Islamic banks.The rest of the variables are described in Equation (1).
Descriptive Statistics
We present our descriptive statistics in Table 1.Table 1 shows the mean and the distributional characteristics of all the variables used in our regression.The mean value of CEO stock ownership is 0.42%.Moving to the financial performance of banks in our sample, the mean value of ROAA is 1.19%, where the max return that banks achieve is 4.46%.The mean values of board size (BODS) and board independence (BODI) are 9.9 and 0.36, respectively.This means that the average board size of banks in our sample is 10 members on the board, and 36% of them are independent members.Interestingly, banks in our sample do not appoint new CEOs until they have served for approximately 6 years in this position.For bank characteristics, we find that the average size of the banks in our sample is 15.63, whereas the smallest bank size has a value of 11.17.The mean value of bank growth opportunity is 1.43%, where the average value of the equity to total assets (ETA) is equal to 14.28%.Importantly, 43.3% of our sample is classified as Islamic banks.The t-test in Table 1 presents a comparison between IBs and CBs across all main variables.The results show that the mean value of CEO ownership in CBs is significantly higher compared to CEO ownership in IBs.This indicates that Islamic banks provide less compensation and long-term incentives (CEO ownership) to their CEOs.Interestingly, the mean value of the board size of CBs is higher than the average board size of IBs, while IBs appoint a higher number of independent members on their board of directors compared to CBs.Furthermore, CEOs in CBs serve longer in their position compared to IBs, which is 7 years compared to 5 years, respectively.Institutional shareholders own more shares in Islamic banks than the institutional shareholders of conventional banks.In contrast, IBs maintain higher equity as reserves, and CBs achieve higher performance.
Table 2 presents the correlation matrix using the Pearson pairwise correlation for all the variables.This allows us to check for any significant intervariable correlations.The results of this table show that there is no high degree of cross-correlation between the key variables.This confirms that there is no problem of multicollinearity among the regressors.Furthermore, the correlation between board size (BODS) and the CEO stock ownership (CEOOWNER) is positively significant, whereas the relationship between BODI and CEO compensation is negatively significant.
BOD Characteristics and CEO Ownership
Table 3 provides the results for CEOOWNER, where we regress CEO stock ownership on the board structure variables.For sensitivity purposes, column 1 only shows the results that were obtained after regressing the main variables.Columns 2 and 3 show the results after controlling for firm and government variables and years fixed effects, respectively.The BODS has a significant positive association with CEOOWNER at the 1% level across all columns.This suggests that the larger the board of directors is, the higher the percentage of CEO stock ownership is.This finding is in line with prior studies in the literature (see, Cahan et al. 2005;Reddy et al. 2015;Nguyen et al. 2016).These studies argue that the board of directors is expected to restrain and soften executive opportunism and associate the CEOs' interests with shareholders' interests by constructing an effective pay contracts policy that links top executive compensation with firm financial performance (Sheikh et al. 2018).According to agency theory, larger boards can be less effective in disciplining and monitoring CEOs, leading to less oversight and potentially higher compensation demands from CEOs (Fama and Jensen 1983).
In addition, the BODI is negatively and significantly associated with the CEOOWNER at the 1% level across all columns.This shows that lower proportions of independent members on the board are related to higher CEO ownership.Our result is consistent with Cahan et al. (2005) who found a negative association between board independence and CEO compensation.According to Fama (1980) and Fama and Jensen (1983), outside directors are more able to make unbiased decisions regarding CEO quality and their efficient compensation, firing, and hiring.Moreover, independent members, with their lack of personal ties to the corporation, are traditionally considered more objective in monitoring CEO compensation and performance.Having fewer independent directors might weaken this monitoring mechanism, potentially creating room for CEOs to negotiate higher pay packages (Fama and Jensen 1983).
For Islamic banks results, our second independent variable (the interaction between board size and IBs) has a negative and significant relationship with CEOOWNER at the 1% level.However, since our main board size variable is strongly positive at a 0.15 coefficient, and the BODS*Islamic is −0.09, this indicates that the board size in IBs increases the CEO ownership as well, although the influence is weaker than in CBs.Regarding the interaction between board independence (BODI) and IB, there is a positive and significant association between the two variables at the 1% level.That is, larger percentages of independent members on the board are related to higher CEO ownership in IBs across all the columns.This indicates that, unlike CBs, independent members in IBs seem to increase the CEO ownership.As we discussed previously, Islamic banks have different business models that could lead to different influences of BOD composition on CEO compensation.Although the CBs model is based on the risk-shifting concept, the Islamic banks model is based on profit and risk sharing (Olson and Zoubi 2008;Aljughaiman and Salama 2019).
In terms of control variables, we find that bank size is significantly and positively associated with CEO compensation.This means that larger banks provide more compensation to their CEOs.In contrast, return on assets, capital ratio, and growth opportunities are negatively associated with CEO compensation.That is, banks with higher returns on assets, capital ratio, and growth opportunities tend to pay less for CEOs.Furthermore, GDP growth (GDPG) has a positive association with CEO compensation, which indicates that banks in countries with higher GDP growth pay more compensation to their CEOs.
BOD Characteristics and CEO Ownership (Robustness Check)
Prior studies debate that the research on corporate governance and financial performance may be influenced by endogeneity problems and therefore may employ traditional techniques; for example, OLS may not be sufficient (Wintoki et al. 2012).In this section, Table 4 re-examines the relationship between board structure and CEO compensation after controlling for endogeneity using the lag approach and GMM.Previous studies argue that these methods can solve three types of endogeneity, namely, unobserved heterogeneity, simultaneity, and dynamic endogeneity (Wintoki et al. 2012;Almulhim 2022).Table 4 reexamines the relationship between board structure and CEO compensation after controlling for endogeneity using the lag approach and GMM.Column 1 shows the results using the lag approach, while column 2 presents the findings using the GMM method.AR1, AR2, and Hansan assure that our GMM model is valid.The results are consistent with the main estimation results.In this section, we provide additional analysis by investigating the relationship between BOD structure and CEO compensation for firms in the sample that did not change their CEO (see Table 5).The results are also in line with the main estimation results of Table 3. 6 presents the results of testing our H5, which investigates the impact of BOD characteristics on the association between CEO ownership and financial performance.We specifically create interaction variables by multiplying board structure and CEO ownership.We utilize principal component analysis using board size and independence to create a board structure index.Principal component analysis allows us to effectively obtain a decomposition value of the correlation matrix of director structures (following Aljughaiman and Salama 2019;Ellul and Yerramilli 2013).This allows us to use the eigenvector in the decomposition as a single main factor in our study.Using principal component analysis provides key benefits for measuring the board of directors mechanisms, which allows us to avoid the subjective elimination of any characteristic or the subjective judgment of the influence of these categories (Tetlock 2007).
We also take the interactions between board structure, CEO ownership, and an Islamic bank dummy variable to capture the influence in the Islamic banks sample.However, the results show that the interaction coefficient variable of BOD structure and CEO ownership has a significant and positive association with a bank's financial performance.This indicates that board structure influences the CEO compensation in a way that makes the financial performance increase.Furthermore, the interaction variable that captures the Islamic sample shows no significant association, which indicates that the results are not different for the Islamic sample.
Conclusions
Board structure is one of the most important mechanisms for controlling the agency problem in firms.However, many researchers have excluded financial firms from their sample due to their different characteristics (e.g., high leverage and complex business models).More importantly, there is scant research on CEO equity compensation in financial firms.Thus, this paper contributes to the extant literature by examining this issue using a financial firm sample.Specifically, we investigated the influence of board structure on CEO equity compensation.Furthermore, we examined the impact of board structure on the association between CEO equity compensation and financial performance.The study's sample comprised 65 listed banks in MENA countries over a period of 12 years from 2009 to 2020.
The findings of this paper are that board size is positively correlated to CEO equity compensation and that board independence is negatively correlated to CEO compensation.However, we found that board size has a weaker positive influence on CEO compensation for IBs.In addition, the relationship between board independence and CEO compensation is positive.The Shari'ah principles add more restrictions on the board member activity, leading to different influences on CEO compensation.In addition, we found that an effective board of directors provides more incentive in regard to CEO compensation, wisely leading to increases in financial performance.This is consistent with the agency theory and the idea that board structure could operate as a controlling mechanism to manage executives' compensation.
Overall, this study has implications for financial firms, policymakers, and regulators.The findings shown in this study can provide direction and guidance to regulators who are responsible for managing financial systems in MENA countries and the top management of financial companies.Our findings are relevant in the following manner: these firms need to have an effective board of directors that can mitigate agency costs and enhance corporate performance by implementing an appreciative reward scheme for the CEO.
Our study has some limitations; for example, we only focused on CEO equity compensation.Future studies can therefore employ more aspects of CEO compensation, such as bonuses and salaries, in order to explore its impact on financial performance.Moreover, our sample covered the period from 2009 to 2020, and we observed that a board's size has a positive association with CEO compensation, whereas board independence is negatively correlated with CEO compensation.Future studies may examine this association before and during COVID-19.Conventional banks with Islamic windows are banks that provide products that are compliant with Shari'ah (Beck et al. 2013).This type of bank does not provide a separate financial report for the Islamic products window ( Čihák and Hesse 2010), thus we excluded them from our sample.
Table 1 .
Descriptive statistics.The table presents descriptive statistics of all variables used in the regression models.It also presents the t-test for the mean value for both samples (CBs and IBs banks).* p < 0.10; ** p < 0.05; *** p < 0.01 (two-tailed test).CEOOWNER: CEO stock ownership, BODS: board size, BODI: board independence, CEOT: CEO tenure, INSTITO: institutional ownership, SIZE: bank size, ETA: equity to total assets, ROAA: return on average assets, GROWO: growth opportunity, IB: Islamic banks, GDPG: GDP growth. Notes:
Notes:The table shows the Pearson pairwise correlation matrix for all variables used in the analysis.* p < 0.10 (two-tailed test).CEOOWNER: CEO stock ownership, BODS: board size, BODI: board independence, CEOT: CEO tenure, INSTITO: institutional ownership, SIZE: bank size, ETA: equity to total assets, ROAA: return on average assets, GROWO: growth opportunity, IB: Islamic banks, GDPG: GDP growth.
Table 4 .
Robustness check: controlling for endogeneity using lag approaches and GMM.
Table 5 .
Additional analysis: regression between BOD structure and CEO compensation for sample that did not change CEO.
Note:The table presents regression results for board structure and CEO compensation for the period 2009-2020 after we sub-sample the firms that did not change CEO over our period of the study.Heteroscedasticity-robust standard errors are in parentheses.* p < 0.10; ** p < 0.05; *** p < 0.01.CEOOWNER: CEO stock ownership, BODS: board size, BODI: board independence, CEOT: CEO tenure, INSTITO: institutional ownership, SIZE: bank size, ETA: equity to total assets, ROAA: return on average assets, GROWO: growth opportunity, IB: Islamic banks, GDPG: GDP growth.4.2.4.BOD Characteristics, CEO Ownership, and Financial Performance Table
Table 6 .
Regression results for BOD, CEO, and performance.
Note:The table presents regression results for the effect of board structure on the association between CEO compensation and financial performance for the period 2009-2020 using OLS approach.Heteroscedasticity-robust standard errors are in parentheses.* p < 0.10; ** p < 0.05; *** p < 0.01.CEOOWNER: CEO stock ownership, BODS: board size, BODI: board independence, CEOT: CEO tenure, INSTITO: institutional ownership, SIZE: bank size, ETA: equity to total Assets, ROAA: return on average assets, GROWO: growth opportunity, IB: Islamic banks, GDPG: GDP growth.
Table A1 .
Cont.The final sample employs unbalanced panel data of 65 listed banks (760 bank year observations) operating in 11 MENA countries.Panel B shows the key variables of the study classified by country. Notes: | 8,335.6 | 2024-01-31T00:00:00.000 | [
"Economics",
"Business"
] |
Charting the Research Course for Sustainable Aquaculture in Sabah, Malaysia
. Due to arising needs and demands, aquaculture is currently the fastest growing food production sector. In order to increase yield and yet to remain sustainable, the challenges would be to minimise impact on the environment and ecosystem services. Aquaculture activity contributes significantly to Malaysia and also the state of Sabah’s economy and food security. Hence, the future changes in the environment as a result of rapid population growth and development would pose as threats to this industry in terms of quality, quantity and sustainability. Unforeseen environmental changes such as environmental pollution from other sources, climate change and the changes in policies would jeopardize the sustainability of this industry. In order to anticipate such impacts to the aquaculture activities, this paper set to chart a sustainable course for its development. Four important research courses were proposed: establishment of a sustainable framework, assessment of impacts of climate change, viability and vulnerability assessment due to future environmental changes and food security. Such findings would eventually allow the stakeholders to plan and manage the resources and aquaculture activities in such a way that foster sustainable food security and resilient aquatic ecosystems.
Introduction
Aquaculture is currently the fastest growing food production sector and it is expected to supply over half of the world's seafood [1].With the arising needs and demands, Wilfart et al. [2] foresees that aquaculture will face four inevitable challenges: increasing cultivable areas without decreasing biodiversity or increasing water demand, improving food quality, producing ecosystem services and adapting to climate change.Thus, aquaculture faces the pressure to increase yield yet be sustainable.
In Sabah, aquaculture has great potential to contribute to the state's economy and food security, although it is not as significant as the palm oil industry, it is promoted as a promising commercial venture to meet consumer demand for seafood and support community livelihoods in general [3].It is inevitable that there will be changes to the structure of the activity (from small scales to corporate), the coverage (from meeting domestic needs to meeting regional and global needs), the intensity, the approach (from conventional to innovative) and also the stakeholders (from individual to local to national to multiplayers).
Hence, aquaculture is vulnerable to the impact of climate change, future variability and changes in the environment as a result of rapid population growth.This would mean that aquaculture production in the state will be exposed to the threats of future environmental changes to its quality, quantity and sustainability.Continued growth in aquaculture production is likely to set off intensification of production and this would bring about a range of resource and environmental problems.Yet at the same time, this sector is at risk due to possible future environmental changes, particularly as aftereffects of climate change and increased anthropogenic wrong doings.Climate change is only one among many environmental and anthropogenic stresses faced by aquaculture but is likely to complicate the process of achieving sustainability [4].
The need for a sustainable course
The first question that we pose in this scenario is how to ensure that this growing sector will incorporate sustainable practices and avoid some of the possible resource and environmental problems that have plagued the agricultural and livestock sectors in the past decades.The second question we ask is how the aquaculture sector can anticipate the impacts of future environmental change.
The answer to the first question have been aptly addressed by Klinger and Naylor [5] in a recent review, they have managed to identify some of the most promising pathways toward sustainable growth in the future of aquaculture sector.They raised three main issues: the technological innovations related to energy efficiency, the waste management matter and the genetic technologies for the enhancement of terrestrial aquafeed ingredients.They acknowledged that most innovative, productive, profitable, and environmentally sound In view of this, we attempted to propose measures to address the second question on charting a course for Sabah, so that the future of development in aquaculture in the state will be on a sustainable course.
Anticipating future environmental changes such as climate change and anthropogenic stresses will enable policy-makers and the stakeholders to be more aware and alert of the future challenges, and they could plan and manage the resources and activities in such a way that foster sustainable and resilient aquatic ecosystems.As this will benefit the aquaculture industries, and also provide goods and services at the national and even global levels, for example, through sustained food security and the conservation of biodiversity.
Charting a sustainable course
When attempting to meet sustainability, I firmly believe that sustainable development comes after taking into due account the future sustainable opportunities for the future generations.The simple idea would be as follows: Resources -Future Sustainability and Opportunities = Sustainable Development As clearly started by Serageldin [6] and in his own words: sustainability requires leaving to the next generation exactly the same amount and composition of natural capital as we found ourselves, and to substitute a more promising concept of giving future generations the same, if not more, opportunities than we found ourselves.
Thus, we attempt to address the future aspect first by developing a sustainable framework for aquaculture activities, and then follow by preparing the industry in anticipating future environmental changes.
Smil [7] in his book on Global Catastrophe and Trends: The Next Fifty Years remarked that any prediction of the future environmental changes could potentially be hampered by two incessant processes: the ever changing existing trends and also the shifting significance and concerns due to interactions of current and underestimated trends.However, Smil also mentioned that catastrophes and endings are also opportunities and beginnings.Taking these into consideration, we believe that a framework would serve as a basis for us to begin with (in moving towards sustainability) and also a platform for us to deal with any imminent negative environmental changes.
Sustainable framework for aquaculture
Developing a sustainable framework for aquaculture activity in Sabah will provide valuable understanding of the limit and provide the standards for the industry.The identified and developed suits of indicators will enable all stakeholders to know the rhythm of the aquaculture activities, symptoms and signs are good indicators that foretell possible problems and concerns.
Sustainability indicators also enables monitoring and reporting on the state of the aquaculture activities and relevant ecosystems locally and regionally.By taking into account a broad spectrum of sustainability perspectives and concerns, a comprehensive framework for the assessment can therefore be proposed, and results communicated to the decision making process.Although this does not necessarily offer a definitive judgement on sustainability, but present a holistic view, allowing recognition of trade-offs or compromises involved between conflicting sustainability objectives if any.
Impact of climate change on aquaculture
Understanding the impact of climate change on aquaculture is necessary on the research agenda as climate change is inevitable.Review of local and regional adaption and mitigation strategies related to aquaculture sector in view of climate change will allow stakeholders to anticipate such impacts [8].
This issue is also highly relevant to the formation of Malaysia's national climate policies and strategies.With the close of the recent COP21 (2015 Paris Climate Conference), the commitment to tackle climate change will again be renewed and new targets will be pledged.Thus, it is important to identify the impact of this on aquaculture as it posed as one of the most vulnerable economic sector, especially in developing countries like Malaysia.Of course the actual impact will be hard to predict, but due consideration can be given to the different impacts of climate change over a period of time, whether they are positive or negative.
Effects of environmental changes on aquaculture
Open-intensive aquaculture activities would have an impact on the water environment, nonetheless, this activity is susceptible to the changing environment, and such as the decline in water quality will affect the long term sustainability of the aquaculture activity.Thus, it is important to carry out long term and periodical monitoring to look at the potential sources of environmental pollutants that might threaten this activity, so that the management of this issues could be aptly addressed.Understand the risks of aquaculture activities in relations to nutrients salts, medication and also organic pollutants within the vicinity.This would enable the establishment of standards for continuous monitoring and food safety.
Once we understand the pollutants in the environmental, we would be able to implement useful measures, such as the 4S strategy (Stop, Slow, Simplify and Share) in dealing with environmental problems [9].
Aquaculture and food security
No matter how much we consider the issue of sustainability, ultimately it is the food security for the present and future generations that must be addressed.
The impact of aquaculture on the environment and local communities: issues on food security, food nexus, access to and management of common pool resources, could result in conflicts with exiting users and potentially acute social, political, and economic problems [10].So, partnership is a must for this to be addressed.
Aquaculture and ecosystem health
Ecosystem health is closely related to aquaculture and this depends greatly on how the farmers, aquabusiness, aquaculture scientists, local communities and policy makers to understand the interrelationships and work towards tapping into what aquaculture activities can offer towards ecosystem health.Ecosystem health has been a neglected aspect for most development as the approach tends to drain certain services and sometimes even cause disservice.Aquaculture activities could well be a double edged sword as it could also provide beneficial environmental services if governed and practised in an ecosystem approach and an integrated manner.However, such ideal or sustainable use of goods and services and impacts on ecosystem integrity depend greatly on our understanding of the system dynamic.Further research is required to answer this because to achieve such a balance is very challenging.
The sustainable aquaculture framework
Thus, to achieve sustainability in this activity, having a sustainable framework would be a good start.As this would provide the minimal framework for all the stakeholders to begin their work with.Furthermore, an existing framework would serve as a precautionary step to move forward.Oversight in development would jeopardise the industry.And finally, a framework would help the stakeholders to take action on any future environmental changes that might have impacts on the industry.
Figure 1 depicts the framework proposed in this paper towards sustainable aquaculture.The box in the middle shows that farming activities and processing have an effect on the water quality, that is why technical options are important as they play a role in identifying the most effective and efficient strategy to reduce such impact.Dotted boxes show the important factors and players that will influence aquaculture activities, minimal environmental changes and healthy ecosystem services would provide the desired environment for the industry to thrive in.
Environmental policies changes (e.g.climate change policy) will also define the direction of the industry.The same goes for governance and best management practices as they would also shape the aquaculture activities.One last but not least is the partnership from the local communities.Taking all these into account would make a good alliance in bringing the industry towards sustainable aquaculture.
The viability of a "sustainable aquaculture" industry would depend on a collaborative and malleable process that allow all the stakeholders to adapt and communicate in order to establish, shift and deploy any policy goals [11].
From the business front, there is a need to experiment with new ways of conducting the business affairs, according to Williams [12], businesses that fail to take responsibility, embrace transparency and open up to new collaboration will increasingly seem out of touch in this green era.Thus, Williams has proposed a New Behavioural Contract that allows the businesses to try-out innovative and never-thought-possible ways of working together towards sustainability.A new sustainable paradigm shift for the businesses.
Conclusion and suggestion
By anticipating future environmental changes such as climate change and anthropogenic stresses will empower policy-makers and all stakeholders to be more alert of the future challenges, and they could plan and manage the resources and activities in such a way that foster sustainable and resilient aquatic ecosystems.This will not only benefit the aquaculture industries, but it will also provide goods and services at the national and even global levels, for example, through sustained food security and the conservation of biodiversity.
The proposed framework for sustainable aquaculture will help to organize and guide the planning to move forward while leveraging and incorporating all stakeholders' input as well.With the presence of a framework, the organization would help to put everything into proper perspectives, remove any potential assumptions and sort out what is the best step(s) to go forward based on the best available knowledge and information about the industry and also the environmental dynamics.
The framework will also incorporate others' thinking and ideas.In this process, not only that the precautious principles are applied, it also could be an eye opener to realise that there are better ideas that could solve potential problems, such as local knowledge that could address the same issue without carrying out any time and cost consuming researches for that matter.With such inputs, they will also stimulate the thinking in others to propose ideas that are even better.Thus, this framework can also be viewed as a platform or starting point for the stakeholders to interact with one another and decide on the best solution.
Critical gaps in planning and application, policies and enforcement could be identified and filled, hence the implementation of adaptation and mitigation measures would be possible.Raised awareness and well managed aquaculture activities play important roles also in maintaining healthy and productive ecosystems and vice versa.Indeed, it is the economic returns that will drive and influence and farmers' production decision.However, through awareness, farmers could learn that sound environmental responsibilities actually make good business and create wealth and that pollution will lead to the outbreak of diseases and this is not business smart.
The two main purposes of sustainable aquaculture are for the sustained business for the farmers and also for the sustained and desirable healthy generations to come because the food source is meant for all but it is not just about feeding but staying healthy.Thus, the research course helps to move the industry towards that end.
In principle, the 4S approach would help: Stop, Slow, Simplify and Share [9].Collectively, if all the stakeholders share the same approach, this would help to regulate the course for sustainable aquaculture to be always on the right track.For example, stakeholders united to stop the use of drugs, slow down the intensification that might cause the misuse of unhealthy seeds, simplify certain processes and share or advocate water recycling or treatment before discharge would promote a shared partnership towards environmental protection.Only in protecting the viability, vitality and diversity of the environment that we can ensure sustainability.
In propagating industrial and agricultural activities in the past, mistakes have been made because of ignorance, carelessness and the lack of restrain.As one moves towards a surging and challenging industry, we should embrace prudence and seek to create a desirable environment for us and also the future generations.
Charting the research course for sustainable aquaculture in Sabah, Malaysia is to be ahead of the game.We especially need committed proactive and aggressive management of resources for sustainability.
Figure 1 .
Figure 1.Framework towards sustainable aquaculture for Malaysia. | 3,521 | 2016-01-01T00:00:00.000 | [
"Engineering"
] |
High-Throughput Absolute Quantification Sequencing Revealed Osteoporosis-Related Gut Microbiota Alterations in Han Chinese Elderly
Objective Accumulative evidence suggests that gut microbiota play an important role in bone remodeling and hence bone health maintenance. This study aimed to explore the association of gut microbiota with the risk of osteoporosis and to identify potential disease-related taxa, which may be promising targets in osteoporosis prevention and treatment in the future. Methods Absolute quantification 16S ribosomal RNA gene sequencing was used to detect absolute and relative abundances of gut microbiota in 44 patients with osteoporosis and 64 controls. In combination with one of our previous studies, a total of 175 samples were involved in the relative abundance analysis. Results Compared with the controls, the patients with osteoporosis had higher absolute and relative abundances of Bacteroidetes phylum, and Bacteroides and Eisenbergiella genera. The absolute abundances of Clostridium_XlVa, Coprococcus, Lactobacillus, and Eggerthella genera increased, and that of the Veillonella genus decreased in the osteoporosis group. As for relative abundance, that of the Parabacteroides and Flavonifractor genera increased, whereas that of the Raoultella genus decreased in the osteoporosis group. Controlling for potential confounders, the associations of Clostridium_XlVa, Coprococcus, and Veillonella genera with the risk of osteoporosis did not maintain significance. Ridge regression analysis suggested that Bacteroides is associated with reduced bone mineral density (BMD) and T-score at lumbar spines, and Anaerovorax is associated with increased BMD at the femoral neck. Functional predictions revealed that 10 Kyoto Encyclopedia of Genes and Genomes pathways were enriched in the osteoporosis group. Conclusions Gut microbiota compositions may contribute to the risk of osteoporosis. Several specific taxa and functional pathways are identified to associate with reduced bone density, thus providing epidemiologic evidence for the potential role of aberrant gut microbiota in osteoporosis pathogenesis.
INTRODUCTION
The human body is populated by trillions of microorganisms, and a vast majority of which consists of more than 1000 species of microbes that inhabit the gastrointestinal tract (Falony et al., 2016). Gut microbiota is crucial in maintaining human health, promoting the defensive responses to pathogen invasion and regulating immunity in the host (Carding et al., 2015). Alternated gut microbiota compositions have been linked with a range of chronic clinical conditions including obesity, diabetes, heart disease, and Alzheimer's disease (Bordalo Tonucci et al., 2017;Jiang et al., 2017;Tang et al., 2017;Torres-Fuentes et al., 2017).
Emerging evidence also suggests an association between gut microbiota and bone health. Despite inconsistencies, previous studies have reported that germ-free mice show changed bone mass compared with conventionally raised ones (Sjögren et al., 2012;Schwarzer et al., 2016). Moreover, oral antibiotics capable of regulating gut microbiota compositions affect bone mass (Cox et al., 2014;Guss et al., 2017). In addition, animal and human studies have demonstrated the benefit of probiotics, i.e., reducing bone loss (Britton et al., 2014;Nilsson et al., 2018). Gut microbiota may affect bone remodeling by regulating nutrient (e.g., calcium) absorption in the intestinal tract, thereby regulating host immune system and functions indirectly on bones mediated by systematic circulation-translocating microbes and molecular products (e.g., serotonin and short chain fatty acids) of microbiota (Yan et al., 2016;D'Amelio and Sassi, 2018).
Osteoporosis is a common bone disorder characterized by reduced bone mineral density (BMD), altered bone microstructure, and increased fracture risk (Locantore et al., 2020). One important complication of osteoporosis is fragility fracture, which easily occurs after minor injuries and possibly results in enormous distressful events (body pains, physical function impairments, mental depressions, and even mortality) in patients (Giangregorio et al., 2014). At present, osteoporosis therapeutics mainly depends on medications of reducing bone resorption, and/or enhancing bone formation, with potential safety and tolerance problems during long-term treatment (Khosla and Hofbauer, 2017;Locantore et al., 2020). Studies on gut microbiota composition identify disease-related microbial biomarkers and may provide new directions for osteoporosis screening, diagnosis, and treatment in the future.
The presence of high-throughput sequencing technology has dramatically accelerated association studies of gut microbiota and human well-being, enabling researchers to profile microbial community compositions and functions in a high-resolution and culture-independent pattern (Franzosa et al., 2015). So far, several studies have been conducted using 16S ribosomal RNA (rRNA) gene sequencing and linked microbial alterations to varied bone mass in human beings. In one of our previous studies, the relative abundance of gut microbiota differed at several levels (phylum, genus, etc.) among Chinese elderly individuals with different bone densities . Similarly, Das et. Al. and Wang et al. observed osteoporosisrelated taxa-specific changes in gut microbiota profiles (Wang et al., 2017;Das et al., 2019). In those studies, microbial relative abundance was detected and compared between groups of different BMDs. However, the relative measurements are inadequate to reveal exact disease-related microbial alterations in case of substantial variations in microbial loads among samples (Vandeputte et al., 2017). Thus, associations of absolute compositions of gut microbiota and osteoporosis risk must be further investigated.
In this study, we adopted absolute quantification 16S rRNA gene sequencing to determine the absolute and relative abundance measurements of gut microbiota simultaneously, identified the key disease-related microbiota taking both relative and absolute profiling into account, and hence explored potential roles of gut microbiota in osteoporosis pathogenesis.
Participant Enrollment and Data Collection
This study was approved by the Institutional Review Board of Tongji Medical College, Huazhong University of Science and Technology. Written informed consents were obtained before the study. All participants were recruited at Union Hospital of Tongji Medical College in Wuhan City from 2018 to 2019. Adults older than 60 years or postmenopausal women with natural menopause were included in this study. Individuals taking antibiotics or hormones within the past month before stool collection were excluded. Participants with disease history of hyperthyroidism or hypothyroidism, and prevalent gastrointestinal, renal, or osteoarthritis diseases were also excluded. Women with hysterectomy and ovariectomies were excluded. Dual-energy X-ray absorptiometry (Lunar Prodigy, GE, USA) was applied to measure BMD at skeleton sites of the lumbar spine (L1-L4) and femoral neck of each participant. A Tscore of ≤ −2.5 at any skeleton site was designated as prevalent osteoporosis. Finally, 44 patients with osteoporosis and 64 controls were involved in our analysis. Demographics data (sex, age, body weight, and height); cigarette smoking, alcohol drinking, and dietary habits; and disease and medication histories were collected before BMD examinations by trained investigators. Body mass index (BMI) was calculated as weight (kg) divided by the square of height (m).
Stool Sample Collection and Microbiota Sequencing
Fresh stools were collected in sterile tubes transported with ice packs and stored at −80°C until laboratory detection within 3 months. The absolute quantification of 16S rRNA gene sequencing was conducted by Shanghai Genesky Biotechnologies Inc., Shanghai, China. Genomic DNA was extracted with FastDNA ™ SPIN Kit (mpbio, California, USA) according to the manufacturer's instructions. The integrity of genomic DNA was detected through agarose gel electrophoresis. NanoDrop2000 (Thermo Fisher Scientific, Massachusetts, USA) and Qubit3.0 spectrophotometers (Thermo Fisher Scientific, Massachusetts, USA) were used to examine the concentration and purity of DNA extracts. The spike-in sequences with identical conserved regions to natural 16S rRNA genes and variable regions replaced by random sequences with approximately 40% GC content were artificially synthesized. The spike-in sequences with known gradient copy numbers were added to the sampled DNA pools, functioning as internal standard, and allowed the absolution quantification across samples. The V3-V4 hypervariable regions of microbial 16S rRNA gene and spike-in sequences were amplified with a forward primer (Illumina adapter sequence 1 + CCTACGGG NGGCWGCAG) and reverse primer (Illumina adapter sequence 2 + GACTACHVGGGTATCTAATCC). PCR amplification was achieved on an ABI 2720 thermal cycler (Thermo Fisher Scientific, Massachusetts, USA) with a TopTaq DNA polymerase kit (Transgen, BeiJing, China). After library quantification, pooling and quality check, all samples were sequenced on the Illumina NovaSeq 6000 platform (Illumina, California, USA) with the NovaSeq 6000 SP Reagent Kit (500 cycles) (Illumina, California, USA) using the 2×250 bp pairedend method.
Raw data from the Illumina platform were then processed as described in our previous study . Only sequences of >100 bp and those with an average score of >20 were included for further analysis. Operational taxonomic units (OTUs) were generated by clustering the clean sequences at a similarity level of 97%, and chimeras were removed by USEARCH (v10). The spike-in sequences were filtered out for read counting. The standard curve of spike-in sequences was generated for each sample, and the sequenced microbial DNA was quantified and estimated in reference to the representative standard curve (Jiang et al., 2019). Taxonomic annotation was performed at a confidence threshold of 80% by Mothur (v1.41.1) with the command classify.seqs based on the RDP (v11.5) database.
Statistical Analysis
All statistical analyses were performed using R version 3.4.3, SPSS version 22 and GraphPad Prism version 5.01. Microbial alpha diversity was assessed with Chao1 for community richness, and with Shannon and Simpson for community diversity. Principal coordinates analysis (PCoA) was performed using weighted-UniFrac distance matrix to assess microbial beta diversity at the OTU level. Permutational multivariate analysis of variance (PERMANOVA) was performed using the Adonis () function in the R package vegan with 9999 permutations to evaluate the between-group differences of microbial communities.
The Wilcoxon rank sum test was used for between-group comparisons of absolute and relative taxon abundances at the phylum and genus levels; only shared taxa in over 20% of samples with a relative abundance of >0.01% were included. As to the microbial relative abundance analysis, data from our previous study using a same laboratory sequencing platform were combined. Multi-covariate adjusted generalized linear model was then performed to analyze the dependency of the risk of osteoporosis on the specific taxa of significantly different abundance between the case and control groups with a presumed negative binomial distribution: Taxon~osteoporosis-status + confounders.
Spearman correlation analysis was performed to explore the correlations between microbial absolute abundance and BMD measurements, including BMDs, and T-scores. A penalized regression approach (ridge regression analysis) was performed to detect the effect of microbial taxa on BMD measurements adjusted for age, BMI, sex, smoking, alcohol drinking, coffee drinking, and dietary habits.
Biological functions of osteoporosis-related gut microbiota were explored on the basis of Kyoto Encyclopedia of Genes and Genomes (KEGG) using Phylogenetic Investigation of Communities by Reconstruction of Unobserved States (PICRUSt). Between-group differences in functional pathways indicated by taxa variations were assessed using Welch's t-test. A P value of <0.05 was considered statistically significant.
General Characteristics of the Participants
As shown in Supplementary Table 1, data of 108 individuals (44 osteoporosis cases and 64 controls) were involved in the absolute abundance analysis, and data of 175 individuals (74 osteoporosis cases and 101 controls) were included in the relative abundance analysis. The patients with osteoporosis had lower BMI than the controls. The proportions of women and fracture history were comparatively higher in the osteoporosis group than the control. Between-group differences of age, smoking, alcohol drinking, coffee drinking and dietary habits did not reach the significance level.
Between-Group Microbial Diversity Comparisons
After low-quality, short, ambiguous and singleton reads were excluded, a total of 26,087,887 clean reads (constituting 1538 OTUs) sequenced from 108 samples were left for further analysis. The proportion of spike-in reads to total reads of each sample ranged from 12.32% to 37.00%. After spike-in reads clustered OTUs were removed, 1529 remained and the number of generated OTUs ranged from 221 to 537 per sample. The Venn diagram showed that 1252 OTUs were shared across the two groups ( Figure 1A). The rarefaction curves for all samples were all near to saturation, indicating that the sequencing work was adequate with a few missed genes ( Figure 1B). Figure 1C shows the alpha-diversity indicators for the gut microbiota in osteoporosis cases and controls. Significant between-group differences were not observed among the alpha-diversity indices (Chao1, Shannon, and Simpson). PCoA results based on weighted Unifrac matrix clustering data of microbial beta diversity among the 108 samples are shown in Figure 1D. PERMANOVA test results revealed achieving-significance between-group differences of microbial beta diversity (R 2 = 0.033, P value = 0.022).
Associations Between Phylum-and Genus-Level Microbial Compositions and Risk of Osteoporosis
Results of relative abundance and absolute copy numbers of the phyla and genera in the gut microbiota among all participants are shown in Figure 2. Phyla Fimicutes, Bacteroidetes, Proteobacteria, and Actinobacteria dominated among all participants ( Figure 2A). Compared with the relative and absolute abundances of the Bacteroidetes phylum in the controls, that in the osteoporosis patients was higher ( Figures 3A, B). A total of 10 genera differed in relative and/or absolute abundances between the two groups. Specifically, the Clostridium_XlVa, Coprococcus, Lactobacillus and Eggerthella genera were enriched and Veillonella decreased in patients with osteoporosis referring to absolute abundance quantifications ( Figure 4A). Compared with the controls, the patients with osteoporosis had reduced Raoultella and elevated Parabacteroides and Flavonifractor referring to relative abundance quantifications ( Figure 4B). The relative and absolute abundances of Bacteroides and Eisenbergiella were higher in the osteoporosis group than the control ( Figures 3A, B).
A B
FIGURE 3 | Reaching-significance of different taxa between osteoporosis and control groups with respect to absolute and relative abundances. p, phylum; g, genus; (A) indicates the differential analysis of absolute profiling; (B) indicates the differential analysis of relative profiling. *0.01 ≤ P value < 0.05. As shown in Table 1, after controlling for potential confounders, all the above-mentioned associations remained significant, except that of the absolute abundances of Clostridium_XlVa, Coprococcus, and Veillonella and risk of osteoporosis.
Correlations Between Gut Microbiota Compositions and BMD Measurements
The results of Spearman's correlation analysis for correlations between the absolute quantification indices for gut microbiota composition and BMD measurements are presented in Table 2. The Fusobacteria phylum showed negative correlation with T-score at femoral neck. The Anaerovorax and Lachnospira genera are related to BMD positively, whereas Coprobacillus, Erysipelotrichaceae_incertae_sedis, Intestinibacter, Lachnospiracea_incertae_sedis, and Terrisporobacter are negatively correlated with BMD at femoral neck. Weissella are positively linked with BMD and T-score, whereas Bacteroides, Cetobacterium, Eggerthella, Fusobacterium, and Megasphaera are negatively related to bone density at lumbar spines. We also demonstrated negative associations of Clostridium_XlVa, and Veillonella with the BMD and T-score at lumbar spines and femoral neck.
With age, BMI, sex, smoking, alcohol drinking, coffee drinking, and dietary habits controlled, ridge regression A B FIGURE 4 | Reaching-significance of different taxa between osteoporosis and control groups with respect to either absolute or relative abundance. p, phylum; g, genus; (A) indicates the differential analysis of absolute profiling; (B) indicates the differential analysis of relative profiling. *0.01 ≤ P value < 0.05; **P value < 0.01. analysis showed that the BMD and T-score at lumbar spines decreased in response to the increase in the absolute abundance of the Bacteroides genus, whereas the BMD at the femoral neck increased with the increase in the Anaerovorax genus (Table 3).
Functional Pathway Predictions for the Identified Osteoporosis-Related Gut Microbiota
The KEGG functional pathways were predicted with PICRUSt to elucidate the potential roles of gut microbiota identified in this study. As shown in Figure 5, 10 KEGG pathways were predicted to show differences between osteoporosis and control groups. Specifically, pathways relevant to steroid hormone biosynthesis, protein digestion and absorption, lysosome, glycosphingolipid biosynthesis, glycosaminoglycan degradation, and flavone and flavonol biosynthesis were functionally enhanced in patients with osteoporosis in comparison with the controls (P value < 0.05).
DISCUSSION
In this study, the 16S rRNA gene sequencing technique was used to quantify gut microbiota compositions from the absolute and relative views. Representative indices for microbial abundances were analyzed to investigate associations of microbial compositions and osteoporosis risk among the Han Chinese elderly. To the best of our knowledge, this is the first osteoporosis-related gut microbiota association study that considers absolute quantifications. Several phylum-and genuslevel taxonomic differences were discovered between osteoporosis patients and the controls. With potential confounders controlled, the Bacteroidetes phylum and the Bacteroides, Lactobacillus, Eisenbergiella, and Eggerthella genera are associated with risk of osteoporosis. In addition, the Bacteroides genus is associated with BMD at the lumbar spine, and the Anaerovorax genus is associated with BMD at the femoral neck in adjustment of multiple covariates. Moreover, 10 pathways relevant to steroid hormone biosynthesis, protein digestion and absorption, lysosome, glycosphingolipid biosynthesis, glycosaminoglycan degradation, and flavone and flavonol biosynthesis were predicted on the basis of the KEGG to be functionally enhanced in patients with osteoporosis compared with the controls.
The relative abundance of the Parabacteroides, Flavonifractor, and Raoultella genera had between-group difference, whereas the absolute abundance did not. The enrichment of taxa in relative abundance does not necessarily indicate the alternations in absolute abundance (Props et al., 2017). Different microbial loads in samples may contribute to the discordant enrichment of taxa in relative and absolute abundances. Vandeputte et al. utilized relative and absolute microbiome profiles to assess Crohn's disease-related microbiome signals, concluding that the microbial load is a key driver of the observed disease-related microbiota alterations (Vandeputte et al., 2017). Several studies focused on the association between gut microbiota composition, expressed in proportional abundances of taxa, and bone health and showed inconsistent results (Wang et al., 2017;Das et al., 2019;Li et al., 2019). Consistent with our findings, one cohort study including 181 participants revealed that the relative abundance of the Eggerthella genus was increased in osteoporosis cases (Das et al., 2019). By contrast, Wang et al. reported a reduction in Bacteroidetes phylum in osteoporosis cases (Wang et al., 2017). Caution must be taken in interpreting these results, which were solely observed from relative abundance comparisons, because relative abundance alternations cannot always reflect precise absolute abundance changes (Smets et al., 2016).
The patients with osteoporosis were found to have increased absolute abundances of the Bacteroidetes phylum and Bacteroides genus; moreover, Bacteroides showed a negative correlation with the BMD and T-score at the lumbar spine. The Bacteroidetes phylum consists of various gram-negative bacteria in the gastrointestinal Estimates were expressed as correlation coefficient, and statistical significance is indicated by *0.01 ≤ P value < 0.05, and **P value < 0.01. tract, including Bacteroides genus (Eckburg et al., 2005). Lipopolysaccharide (LPS), a component of the gram-negative bacterial outer membrane, can stimulate the production of pro-inflammatory cytokines, resulting in systemic inflammations (Maldonado et al., 2016;Shen et al., 2018). In vivo and in vitro studies have suggested that the LPS-induced pro-inflammatory cytokines are involved in the processes of osteoclast formation and bone destruction (Abu-Amer et al., 1997;Zou and Bar-Shavit, 2002;Mörmann et al., 2008). Several observational studies have also reported the inflammation mechanism underlying osteoporosis and/or osteopenia pathogenesis (Scheidt-Nave et al., 2001;Sponholtz et al., 2014). Increasing gram-negative bacteria, such as Bacteroides, may trigger a cascade of inflammatory responses that contribute to the initiation of bone loss, ultimately disrupting bone health. Three genera (Anaerovorax, Lactobacillus, and Eisenbergiella) belonging to the Firmicutes phylum were found to be associated with BMD and osteoporosis risk in this study. The Anaerovorax positively correlated with the BMD at the femoral neck. However, high amounts of Lactobacillus and Eisenbergiella were observed in the osteoporosis group. Lactobacillus is commonly used as a probiotic; increased abundance of some Lactobacillus species, such as L.reuteri, have been reported to prevent bone loss (Nilsson et al., 2018). Our finding of the increased absolute abundance of Lactobacillus in patients with osteoporosis suggested that the effect of Lactobacillus on bone metabolism may be species and strain specific. A previous study focusing on weight gain and Lactobacillus also indicated that the effect on metabolism varied among different species (Million et al., 2012). A vaginal microbial community research indicated L.iners contains features of probiotic Lactobacillus as well as of vaginal pathogens (Petrova et al., 2017). Further studies are needed to clarify the role of Lactobacillus species and strain on bone health. To the best of our knowledge, no previous studies have reported the association between Anaerovorax, and Eisenbergiella with bone health whether in humans or animal models. Therefore, further studies are warranted to elucidate the roles of these bacteria in the development of osteoporosis.
In addition, compared with the controls, the patients with osteoporosis exhibited increased abundance of Eggerthella genus, which is consistent with the findings of a previous study (Das et al., 2019). Many studies have reported the contributable effect of Eggerthella on inflammatory diseases, including rheumatoid arthritis, ankylosing spondylitis and systemic lupus erythematosus (Scher et al., 2013;Chen et al., 2016;He et al., 2016). Specifically, Eggerthella was also found to enrich in the vitamin D receptor (VDR) knockout (Vdr -/-) mice compared with wide-type mice (Jin et al., 2015). VDR has been previously shown to increase the formation and decrease the resorption of bone, and VDRmediated activity in osteoblasts and osteocytes can prevent bone loss caused by vitamin D deficiency (Gardiner et al., 2000;Lam et al., 2014). In an epidemiology study, VDR gene polymorphisms were found to be significantly associated with the decrease in BMD and increase in osteoporosis risk (He et al., 2015;Kow et al., 2019). The elevated abundance of Eggerthella was probably relevant to the inefficient function of vitamin D receptors in the osteoporosis cases.
Functional predictions for gut microbiota revealed that several KEGG pathways may contribute to osteoporosis pathogenesis. We observed that glycosaminoglycan (GAGs) degradation increased in the osteoporosis group. As important extracellular matrix components in bone, GAGs play important roles in regulating biological processes in bone (Salbach et al., 2012). A pilot study in rats presented a positive effect of GAGs on bone formation in a critical bone size defect. In vitro studies demonstrated that GAGs may contribute to bone homeostasis by direct interactions with bone-regulating proteins and cytokines, such as RANKL, OPG, and cathepsin K (Li et al., 2002;Theóleyre et al., 2006). The roles of steroid hormones (estrogen, corticosteroids, androgen, and progesterone) in bone cell development and in the maintenance FIGURE 5 | Predicted functional differences between osteoporosis and control groups. A total of 10 metabolic pathways varied between the two groups. Tests were conducted at Kyoto Encyclopedia of Genes and Genomes (KEGG) hierarchical level 3. Difference in mean frequency = mean abundance in osteoporosis group minus mean abundance in control group.
of normal bone architecture have been well established (Bland, 2000). A recent study revealed that sex steroid deficiency-related bone loss is microbiota dependent . Moreover, great intakes of flavonols and flavones were associated with increased bone density in humans . Mechanistic studies indicated that flavonoid may regulate bone metabolism though the inhibition of RANKL-induced osteoclast differentiation (Lee et al., 2009).
This study is the first to investigate the composition alternations of osteoporosis-related gut microbiota considering the absolute and relative abundance of microbiomes. In the relative abundance comparisons, data obtained from our previous work applying a same laboratory sequencing platform were combined to increase the statistical power. Spiked exogenous sequences of known concentrations were applied to quantify the absolute abundance of gut microbiota in the present study. Another group of methods, such as flow cytometry, total DNA, quantitative PCR, or digital PCR, can also be used for microbial taxa quantification (Kleyer et al., 2017;Vandeputte et al., 2017;Contijoch et al., 2019;Barlow et al., 2020). The relative abundances of microbial taxa were transformed to absolute data by measuring the total concentration of cells, DNA, or amplicons for flow cytometry, total DNA, and quantitative PCR, respectively. The digital PCR quantitative microbial analysis is appropriate for biogeographically diverse sample types and enables the mapping of the microbial biogeography of the gastrointestinal tract.
STUDY LIMITATIONS
Several limitations should be acknowledged. First, the present study is of a case control design, which has a limitation in causality inference from altered microbiota composition to osteoporosis prevalence risk. Second, a limitation in our method is that 16S rRNA gene sequencing does not have sufficient resolution to identify the microbiota at the species or strain level, and thus can lead to the omission of some microbial taxa (Ravi et al., 2018). Third, dietary factors are crucial in driving microbial community structure, as well as lifestyle (e.g., physical exercise and stress) and genetic factors (Conlon and Bird, 2014;Wang et al., 2016). Although a few covariates were adjusted, residual confounding by these unmeasured factors may influence the results to some extent. Fourth, ethnicity and residence location of subjects were also associated with variations of microbial abundance (Gupta et al., 2017;He et al., 2018). In this study, the participants were all Han Chinese from the same region, possibly affecting the generalizability of our findings to other ethnic and regional populations. Therefore, future studies in this filed should consider the dietary, lifestyle, and genetic factors and recruit participants from various regions.
CONCLUSION
Through absolute quantification 16S rRNA gene sequencing, this study suggests that osteoporosis and bone density at the lumbar spine and femoral neck of Han Chinese elderly are potentially associated with the altered composition of gut microbiota at the phylum and genus levels. In particular, our findings indicate a link between Bacteroidetes-dominated microbiome and osteoporosis risk. The findings of this study may provide new clues to understand the microbiota-related mechanism in osteoporosis pathogenesis and provide potential biomarkers and therapeutic targets for disease prevention and treatment in the future.
DATA AVAILABILITY STATEMENT
The data presented in the study are deposited in the National Center for Biotechnology Information (NCBI) Bioproject database with accession number PRJNA724901, https://www. ncbi.nlm.nih.gov/sra/PRJNA724901.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Institutional Review Board of Tongji Medical College, Huazhong University of Science and Technology. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
QW and QH designed and managed the research. MW, QW, and CL contributed to the statistical analysis and manuscript writing. CL, MW, YD, HZ, YC, and YZ contributed to the data collection. All authors contributed to the article and approved the submitted version.
ACKNOWLEDGMENTS
We are grateful to all the participants for their contributions to this work. We would also like to thank the staff in the Department of Rehabilitation Medicine and Nuclear Medicine of Union Hospital of Tongji Medical College for their administrative and technology assistance. | 5,975.2 | 2021-04-30T00:00:00.000 | [
"Biology",
"Medicine"
] |
Testing Short-term Variability and Sampling of Primary Volatiles in Comet 46P/Wirtanen
The exceptionally favorable close approach of Jupiter-family comet 46P/Wirtanen in 2018 December enabled characterization of its primary volatile composition with exceptionally high spatial resolution and sensitivities using the iSHELL spectrograph at the NASA Infrared Telescope Facility on Maunakea, HI. We sampled emissions from H2O, HCN, C2H2, NH3, C2H6, and CH3OH on UT 2018 December 21 using two instrumental settings that spanned the 2.9–3.6 μm spectral region. We also obtained a sensitive 3σ upper limit for H2CO and for the rarely studied molecule HC3N. We report rotational temperatures, production rates, and mixing ratios (relative to H2O as well as to C2H6). We place our results in context by comparing them with other comets observed at near-IR wavelengths. We also compare our results with those obtained using the NIRSPEC-2 spectrograph on Keck II on UT December 17 and 18 and with results obtained from iSHELL on other dates during the same apparition. Within 1–2σ uncertainty, production rates obtained for all molecules in this work were consistent with those obtained using NIRSPEC-2 except H2O, indicating low-level variability on a timescale of days. Mixing ratios with respect to H2O in 46P/Wirtanen were consistent with corresponding values from NIRSPEC-2 within the uncertainty with the exception of CH3OH, which yielded a higher ratio on December 21. Our measurements afforded a high temporal resolution that spanned ∼2/3 of the rotational period of 46P/Wirtanen, enabling us to test short-term variability in the production rates of H2O and HCN due to rotational effects. Both H2O and HCN production rates showed similar temporal variability, resulting in nearly constant HCN/H2O.
Introduction
Comets are relatively unprocessed remnants of the early solar system. As some of the first objects to have accreted in the cold regions (>5 au) of the solar nebula, comets may retain the compositional record of icy materials present in the solar nebula. Processes that can affect the properties of cometary nuclei generally alter a very thin layer near the surface, which is thought to be lost during a typical perihelion passage (Stern 2003;Gronoff et al. 2020), preserving the primitive nature of comets. Furthermore, because comets lack a known mechanism for internal heating owing to their small sizes, their present-day composition may reflect the chemistry and prevailing conditions in the early solar system where they formed ∼4.5 billion years ago (Bockelée- Morvan et al. 2004;Mumma & Charnley 2011).
Initially, it was widely accepted that Oort Cloud comets (OCCs) formed at a heliocentric distance (R h ) of ∼5-30 au, whereas Jupiter-family comets (JFCs) formed even farther out in the early solar nebula. However, the presence of crystalline silicates in comets from both dynamical classes, such as C/ 2001 Q4 (NEAT; Wooden et al. 2004), 1P/Halley (Bregman et al. 1987), 9P/Tempel 1 (Harker et al. 2005), and 81P/Wild 2 (Zolensky et al. 2006), and improved dynamical models (Levison & Duncan 1997;Gomes et al. 2005;Morbidelli et al. 2005;Levison et al. 2011;Nesvorný et al. 2017) suggest that comets may have formed in large but spatially overlapping regions in the solar nebula (A'Hearn et al. 2012). Considering these distinct versus overlapping formation region scenarios, an important goal in cometary science is to ascertain whether systematic differences exist between the chemical compositions of these two dynamical classes of comets. If comets were formed in overlapping regions, their present-day composition may reflect the composition in those regions provided that evolutionary effects do not dominate. On the other hand, if post-formation thermal processing effects dominate in comets, these effects will be more pronounced in JFCs (owing to their frequent and repeated passages close to the Sun) as compared to OCCs (Combi et al. 2019).
Near-IR spectroscopy is a powerful tool to characterize the primary volatiles (i.e., gases subliming directly from the nucleus and thus indicative of its native composition) in comets by sampling the rovibrational transitions of a suite of molecules between ∼2.9-5 μm. To date, roughly 40 comets have had their volatile composition characterized in the near-IR, but only ∼15 of those have been JFCs. Emerging trends suggest that on average JFCs are depleted in some parent volatiles (especially in hypervolatiles) as compared to OCCs (Dello Russo et al. 2016). In contrast, more than 200 comets have been sampled at optical wavelengths, (A'Hearn et al. 1995;Fink 2009;Cochran et al. 2012;Schleicher & Bair 2014) where nearly one-third exhibit depletion in carbon-chain species. Of these, about half are JFCs whereas only 10% are OCCs, suggesting that the compositional differences between the two dynamical classes may be natal rather than due to postformation processing (Schleicher 2007;Dello Russo et al. 2009;Fink 2009).
The highly favorable apparition of 46P/Wirtanen (hereafter 46P) in 2018 was the focus of a worldwide observing campaign (http://wirtanen.astro.umd.edu). 46P reached its perihelion on UT 2018 December 12, with a heliocentric distance (R h ) of ∼1.05 au. Shortly after its perihelion, it reached a minimum geocentric distance of 0.077 au (∼30 lunar distances) and a visual magnitude of ∼3, resulting in exceptional observing circumstances for a JFC. It remained within a distance of 0.1 au from Earth for ∼20 consecutive days, allowing for detailed observations by both professional and amateur astronomers.
46P was the original target of the Rosetta spacecraft mission; however, 67P/Churyumov-Gerasimenko was selected for the mission due to a delay in launch. We emphasize that knowledge of the overall activity and composition is often an important parameter in assessing the suitability of a comet as a mission target. It is also useful in placing mission data in context with the much larger database of remote-sensing observations of comets. The historic 2018 apparition of 46P thus provided a timely opportunity for sampling the primary volatile composition of a JFC that remains a favorable candidate for a future mission.
The importance of 46P as a JFC, coupled with the exceptional and rare observing conditions during its 2018 apparition, lends great significance to these observations, as well as to the scientific knowledge that will be extracted from them. Moreover, distinguishing between natal versus post-formation processing effects in OCCs and JFCs requires comparison of a sufficiently large sample of comets from each dynamical class. Sampling of the chemical composition of 46P using near-IR spectroscopy is a useful addition to the overall comet inventory as well as to the generally underrepresented JFCs.
An increasing number of comets measured to date have displayed variability in their coma composition within an apparition as well as across perihelion passages. This variability has been attributed to numerous effects, including seasonal effects on the nucleus, diurnal illumination effects, and chemically heterogeneous nuclei (Feaga et al. 2014;Hässig et al. 2015;Luspay-Kuti et al. 2015;McKay et al. 2015;Bockelée-Morvan et al. 2016;Fink et al. 2016;Combi et al. 2020b). Surface regions of some comets were observed to have been covered by thermally processed fall-back material. For example, the north hemisphere of 67P/Churyumov-Gerasimenko (Keller et al. 2017) and the waist of 103P/Hartley 2 (A' Hearn et al. 2011;Kelley et al. 2013) are covered by processed material. However, the coma activity was dominated by the fresh material emitted by the southern hemisphere of 67P/Churyumov-Gerasimenko and ends of the small lobe of 103P/Hartley 2. This mass transfer on comet nuclei due to fall-back material may affect their surface evolution and is an example of post-formation processing.
Remote-sensing observations do not resolve the nucleus, and time-resolved compositional measurements through a complete nucleus rotation are lacking at near-IR wavelengths. Jehin et al. (2018) reported a ∼9 hr rotational period for 46P using a CN lightcurve measured from photometry obtained at the TRAP-PIST telescopes on UT 2018 December 9-10. The relatively short rotational period of 46P provided us with an opportunity to sample ∼2/3 of its period during a single observing night on UT 2018 December 21, and to test for rotational variability in HCN and H 2 O on a timescale of a few hours.
In this work, we report production rates and mixing ratios (i.e., abundance ratios in percent) of H 2 O, HCN, CH 3 OH, C 2 H 6 , C 2 H 2 , and NH 3 with respect to H 2 O and C 2 H 6 and report stringent 3σ upper limits for H 2 CO and HC 3 N. We also discuss possible variability in the production rates of H 2 O and HCN in comet 46P post-perihelion. In Section 2, we review our observations and data reduction. In Section 3, we present our results. In Section 4, we discuss our results and place them into context with comets observed to date.
Observations and Data Reduction
iSHELL at the NASA Infrared Telescope Facility (IRTF) became available for cometary observations in 2016 (Rayner et al. 2012(Rayner et al. , 2016. This instrument is capable of both highresolution long-slit spectroscopy and imaging in the 1.1-5.3 μm range, with a spectral resolving power (λ/Δλ) of up to 7.5×10 4 using its narrowest slit (0 375). Extra slit widths are available for minimizing slit losses and accurate flux calibration. Owing to its cross-dispersed capability, iSHELL can measure a signal in more than ten consecutive echelle orders simultaneously, whereas its daytime observing capability allows for observations of objects best observed during daylight hours, namely comets close to the Sun. These features make iSHELL unique among contemporary spectrographs operating in the near-IR wavelength regime.
We observed 46P post-perihelion on UT 2018 December 21 at R h ∼1.06 au and geocentric distance (Δ) ∼0.082 au (see Table 1 for observing details). We used the iSHELL Lp1 and a custom L setting (hereafter L-custom) to sample emissions from the primary volatiles H 2 O, HCN, CH 3 OH, C 2 H 6 , C 2 H 2 , NH 3 , H 2 CO, and HC 3 N. Flux calibration was achieved using a bright nearby IR flux standard star (BS8781) using the 4″ wide slit. We acquired comet data with the 0 75 (6 pixel) wide slit using an ABBA nod sequence, with the A and B beams placed symmetrically about the midpoint along the 15″ long slit and separated by half its length. To cancel continuum emissions from the thermal background, instrumental biases, and sky emission (lines and continuum), the spectra were combined as A-B-B+A. The dark subtracted flats were applied to the data, which were subsequently cleaned of cosmic-ray hits and hot pixels. We alternated between two slit orientations while using the L-Custom setting: one along the Sun-comet line (position angle, PA 134°) and the other orthogonal to the Sun-comet line (PA 44°), with each slit orientation sampling a unique projection of the coma into the sky plane. Our observations spanned ∼2/3 of a complete rotation for 46P. In this way we obtained a total of four separate observations, with two sets each corresponding to the mutually perpendicular slit orientations (see Table 1).
The data-reduction procedures have been rigorously tested and are well-documented in the literature (see Bonev 2005;DiSanti et al. 2006DiSanti et al. , 2014Villanueva et al. 2009;Radeva et al. 2010). For their applications to unique aspects of iSHELL spectra, see Section 3.2 of DiSanti et al. (2017).
Contributions from continuum and gaseous emissions in our spectra were determined as previously described by DiSanti et al. (2016). We illustrate the procedure in Figure 1, which shows a sample spectrum of H 2 O fluorescent emissions in order 179 of the third PA set of the L-Custom setting spanning ∼3437.8-3465.8 cm −1 . We used the Planetary Spectrum Generator (https://psg.gsfc.nasa.gov/; Villanueva et al. 2018) to generate telluric transmittance models, to perform wavelength calibration of the spectra, and to determine column burdens of the absorbing molecules in the terrestrial atmosphere. The fully resolved transmittance function was convolved to the resolving power (∼4.5 × 10 4 ) of the instrument and scaled to the continuum level of the comet. The telluric model was then subtracted from the observed spectrum to isolate cometary emission lines. Intensities of these emission lines were compared to fluorescent emission models after correcting each modeled line intensity for the monochromatic atmospheric transmittance at its Doppler-shifted wavelength based on the geocentric velocity (∼3.4-3.9 km s −1 ) of the comet at the time of the observation.
Spatial Profiles
A comparison between spatial profiles of co-measured coma volatiles can indicate whether these species sublimated directly from the comet nucleus, from one or more extended outgassing sources within the coma, or a combination of both. In general, ices sublimating directly from the nucleus exhibit a spatial profile that peaks in intensity at or near the nucleus and then falls off as r −1 , where r is the projected nucleocentric distance. On the other hand, spatial profiles of molecules produced by photolysis or extended sources in the coma fall off more slowly with a flatter distribution. Figure 2 shows spatial profiles of comeasured HCN and H 2 O along with the dust continuum in each PA set of the L-Custom setting. In each of these sets, the gas profiles track each other somewhat closely whereas the dust profile is narrower. The gas profiles are also asymmetric and are extended in the projected anti-sunward direction. These asymmetries are more pronounced in the first two PA sets (Figures 2(A) and (B)). Figure 3 shows spatial profiles of co-measured CH 3 OH and C 2 H 6 overplotted with the dust profile. The relatively noisy CH 3 OH spatial profile is broader than both the co-measured C 2 H 6 as well as dust profile, whereas both of the gas profiles are broader than the dust profile. An anti-sunward extension in gas profiles is also evident. We note that spatial profiles of all of these gases appear to be consistent with their growth factors (GFs; defined as Q/Q NC where Q and Q NC are the global and nucleuscentered production rates, respectively); i.e., H 2 O and HCN GFs are relatively similar whereas the CH 3 OH GF is significantly larger than that of C 2 H 6 (see Table 2).
Molecular Fluorescence Analysis
The g-factors used in this work were generated with quantum mechanical models developed for H 2 O (Villanueva Note. Slit PA, R h , Δ, Δ-dot, and T int are the slit position angle, heliocentric distance, geocentric distance, geocentric velocity, and total on-source integration time, respectively. linear molecule, and its g-factors were obtained using a rotational constant of 0.151 74 cm −1 (Creswell et al. 1977). To fit fluorescent emissions from all molecules simultaneously in each echelle order, a Levenberg-Marquardt nonlinear minimization technique (Villanueva et al. 2008) was used. This technique allows for results with high precision even in spectrally crowded regions having many lines within a single resolution element of the instrument. Production rates for each targeted primary volatile were then determined from the corresponding synthetic model at a well-constrained rotational temperature (T rot ).
Determination of Rotational Temperature (T rot )
Calculating a robust rotational temperature (T rot ) is crucial for an accurate calculation of molecular production rates and, hence, mixing ratios. We determined T rot using correlation and excitation analyses (Bonev 2005;Bonev et al. 2008;DiSanti et al. 2006). In general, a well-constrained T rot can be derived for molecules with strong lines that span a broad range of excitation energies. For this work, these conditions were satisfied by combining lines from different orders to obtain T rot for H 2 O in each PA set of the L-Custom setting (see Table 2). These values were similar to the H 2 O T rot obtained on UT December 14 (84 ± 3) (Saki et al. 2020), UT December 17 (89 ± 2) and UT December 18 (87 ± 1) with NIRSPEC-2 , and (94 ± 5) on December 18 with iSHELL (Roth et al. 2021). We also obtained a relatively well-constrained T rot for HCN in each PA set by combining lines from different orders. In general, rotational temperatures calculated for different molecules using IR observations are consistent, and small variations in T rot result in only minor differences in production rates (Gibb et al. 2012). For this reason, we assumed a T rot of 80 K (consistent with the H 2 O and HCN T rot across all PA sets) for molecules where a rotational temperature could not be derived (i.e., C 2 H 6 , CH 3 OH, H 2 CO, C 2 H 2 , NH 3 , and HC 3 N).
Determination of Molecular Production Rates
Nucleus-centered production rates (Q NC ) and global production rates (Qs) were determined using the well-established Qcurve method described in Xie & Mumma (1996), Dello Russo et al. (1998), DiSanti et al. (2001, Bonev (2005), Bonev et al. (2006Bonev et al. ( , 2017, Villanueva et al. (2011a), and Gibb et al. (2012). This method provides a GF that corrects for atmospheric seeing, which suppresses the signal along lines of sight passing close to the nucleus due to the use of a narrow slit, as well as potential perpendicular drift of the comet during an exposure sequence. We assumed a canonical spherically symmetric outflow velocity ( ) = -v R 800 m s h gas 0.5 1 in determining production rates. This velocity is based on velocityresolved observations of several moderately bright comets at radio wavelengths (Biver et al. 2006;Cordiner et al. 2014; also see Bonev 2005 supporting this assumption). We note that our assumed outflow velocity (∼780 m s −1 ) is in good agreement with the sunward hemisphere and with mean expansion speeds (∼800 and ∼700 m s −1 , respectively) measured through velocity-resolved Atacama Large Millimeter/submillimeter Array (M. A. Cordiner et al. 2021, in preparation) and the Institut de Radioastronomie Millimetrique (N. Biver 2020, private communication) observations. Reasonably contemporaneous to our observations, Wang et al. (2020) and Coulson et al. (2020) reported outflow velocities of 500 and 600 m s −1 , respectively. Assuming these lower outflow velocities will decrease the overall production rates by ∼20%, but does not significantly change the mixing ratios. We obtained GFs for H 2 O and HCN in each PA set of the L-Custom setting and used them to calculate co-measured H 2 O and HCN production rates in each PA set. We could not obtain well-constrained GFs for the weaker species C 2 H 2 , NH 3 , and HC 3 N; therefore, we used a GF of 2.2 (consistent with the H 2 O GFs across all PA sets) for these species. Similarly, we obtained GFs for CH 3 OH and C 2 H 6 and used the CH 3 OH GF to get an upper limit on comeasured H 2 CO. GFs, Qs, and mixing ratios with respect to H 2 O (and to C 2 H 6 ) corresponding to all primary volatiles targeted in this work are listed in Table 2. We note that Qs for all molecules were obtained by adding lines from multiple orders. For deriving HCN mixing ratios in each PA set, we used the co-measured H 2 O production rate. For mixing ratios of all other molecules, we used the H 2 O production rate obtained by adding lines from multiple orders within a PA set and then coadding all of those L-Custom sets. Figure 4 shows variation in the production rates of H 2 O and HCN during observations spanning ∼6 hr. We obtained four sets using the L-Custom setting by varying the slit PA by 90°a fter each individual set (see Section 2). Both species are color coded and the corresponding error bars are also shown, along with range of UT time corresponding to each set. For clarity, HCN production rates have been vertically offset. H 2 O and HCN production rates obtained in each PA set of the L-Custom setting are shown in Table 2.
Coma Volatile Composition of 46P
Figures 5(A)-(C) shows spectra of HCN, C 2 H 2 , NH 3 , CH 3 OH, and C 2 H 6 . Fully leveraging the large spectral grasp of iSHELL, we combined lines for weaker species that were sampled in multiple orders simultaneously in each individual slit orientation set, followed by coadding all of the L-custom sets. In this way, we were able to increase the signal-to-noise and detect the generally weaker species C 2 H 2 and NH 3 , which are offset vertically in the figure.
Being the dominant volatile in most comet nuclei, H 2 O is used as a baseline for calculating mixing ratios of primary volatiles in comets (the exception being C/2016 R2, Pan-STARRS; McKay et al. 2019). In addition to its dominance of the volatile content of most comets, strong lines of H 2 O (or its proxy, OH * , prompt emission; Bonev et al. 2006) are available throughout the 2-5 μm region and can be sampled simultaneously with the lines of trace species, minimizing the effects of potential production rate variability when calculating mixing ratios.
Alternate compositional baselines utilizing other species satisfying these conditions can provide complementary insights into the volatile content of comets. In addition to H 2 O, we therefore calculated mixing ratios of primary volatiles with respect to C 2 H 6 . These measurements will help motivate the development of taxonomies based on alternative compositional baselines in future work. C 2 H 6 generally tends to exhibit a distinct outgassing behavior compared to H 2 O, its sublimation temperature is among the lowest, and it is relatively easy to detect in the near-IR wavelength range. These characteristics make C 2 H 6 one of the suitable molecules that can be used as an alternative compositional baseline (for a comprehensive discussion on the value of alternative baseline compositional studies and the case for C 2 H 6 , see Sections 4.4 and 5.4.2 of Bonev et al. (2021)).
Testing Possible Variability in Production Rates
Ground-based remote-sensing observations do not resolve the nucleus of a comet (and thus do not permit identifying individual active regions on the surface), and the limited observation time available generally inhibits sampling of the full surface of a comet during a rotation cycle. There are only a few comets for which the nucleus has been resolved-those visited by spacecraft. As a comet rotates on its axis, different regions of its surface are exposed to solar irradiation, resulting in the activation of distinct sublimation regions on the nucleus. If the nucleus is heterogeneous, this may lead to variability in the composition of coma primary volatiles (e.g., in comet 67P/ Churyumov-Gerasimenko; Hässig et al. 2015;Luspay-Kuti et al. 2015).
During our observations, the nucleus of 46P rotated by ∼225°, which is about two-thirds of its rotational period. During this time, the sub-solar point changed its position on the surface by ∼195° (Knight et al. 2020), and the illumination switched from one hemisphere to the other, emphasizing the significance of our time series of measurements. Production rates of H 2 O and HCN during this period showed a similar trend across the four PA sets: the second PA set showed relatively higher production rates for both species indicating setting. c Global production rate. For C 2 H 2 , NH 3 , and HC 3 N, we added lines from all PA sets to get production rates. For all other molecules, Qs were obtained by adding lines from multiple orders within a setting. Uncertainty in production rates includes line-by-line deviation between observed and modeled intensities and photon noise (see Dello Russo et al. 2004;Bonev 2005;Bonev et al. 2007). They also include uncertainties in GFs and flux calibration, which were determined by calculating the standard deviation of flux calibration from eight exposures. d Mixing ratio of global production rates with respect to H 2 O (in percent) and C 2 H 6 . For deriving the HCN mixing ratio in each PA set, we used the corresponding H 2 O production rate. For all other molecules, we used an H 2 O production rate of 584 × 10 25 mol s −1 obtained by adding lines from all PA sets of the L-Custom setting.
low-level variability, whereas the production rates corresponding to all other PA sets were consistent with each other within the uncertainty. The mixing ratio of HCN with respect to H 2 O, however, remained nearly constant across all of the PA sets. (see Table 2 and Figure 4). We note that the spatial profiles ( Figure 2) are similar in each PA set, and the enhancement in production rates is only in one, suggesting that the differences are due to time variability rather than due to nonuniform spatial distributions, for example. Wang et al. (2020) reported moderate time variability in the HCN production rate in 46P on December 14 and 15 using radio observations of the HCN (J=1-0) transition. They also reported asymmetric HCN outgassing, although they found that the asymmetric enhancement was in the sunward direction. Due to time constraints, we were not able to sample a full rotation period (∼9 hr) of 46P; therefore, searches for variability on timescales greater than or equal to a single rotation period must be addressed in future perihelion passages of 46P.
Composition of 46P in the Context of JFCs, OCCs, and the Comet Population Measured at Near-IR Wavelengths
The classification of comets based on their volatile composition (both primary and product species) is an important but complex task in cometary science. Over the past few decades, extensive work at optical wavelengths has resulted in a taxonomic classification of comets based on abundances of their product species. According to this scheme, comets can be classified as "typical" or "carbon-chain depleted" (A'Hearn et al. 1995;Cochran et al. 2012 and references therein); however, tying product species abundances directly to those of their parents is a complex endeavor given their potentially complicated lineage (i.e., multiple potential volatile parents as well as dust for a given product species). More recent work based on the composition of product species in comets (Schleicher & Bair 2014;Cochran et al. 2015) suggested that there can be as many as seven taxonomic groups owing to the complex chemical diversity in comets. Observations of comets using radio techniques have shown no evidence of clear taxonomic groupings (Crovisier et al. 2009; , CH 3 OH (B), and C 2 H 6 (C). In each of these panels, the upper portion shows the telluric absorption model (yellow; convolved to the instrumental resolution) overplotted on the observed cometary spectrum. Directly below, the total of individual fluorescent models (red) is overplotted on the cometary emission spectrum (after subtracting the telluric absorption model). Individual fluorescent models for each molecule (color coded by species) are also shown. The residual spectrum (after subtracting the telluric model and all fluorescent models) is shown at the bottom along with the 1σ uncertainty envelope. Models for species with weaker emission lines (C 2 H 2 and NH 3 ) have been vertically offset for clarity. Mumma & Charnley 2011 and references therein). Measurements obtained at near-IR wavelengths have resulted in a continually evolving compositional taxonomy based on primary volatiles (Mumma & Charnley 2011;Dello Russo et al. 2016). Table 3 shows the primary volatile mixing ratios of 46P on UT 2018 December 21, along with mean mixing ratios for each species among JFCs, OCCs, and the overall comet population measured at near-IR wavelengths (Dello Russo et al. 2016).
This comparison provides the following insights into the primary volatile composition of 46P in the context of dynamical classes of comets and the overall comet population: 1. Within the uncertainty, mixing ratios of HCN (0.23 ± 0.02%) and NH 3 (0.50 ± 0.06%) were consistent with their respective mean values among JFCs and OCCs. 2. The mixing ratio of C 2 H 2 (0.08 ± 0.01%) was consistent with the mean value among JFCs, but depleted compared to the mean value among OCCs. 3. CH 3 OH (4.26 ± 0.34%) was enriched compared to the mean abundance among JFCs and OCCs, whereas C 2 H 6 (0.71 ± 0.09%) was enriched compared to the mean abundance among JFCs and consistent with the mean value among OCCs. Our measurement of CH 3 OH represents one of the highest values reported in comets sampled at near-IR wavelengths to date. Comets 8P/ Tuttle, C/2007 N3 (Lulin), and 2P/Encke (Bonev et al. 2008;Gibb et al. 2012;Radeva et al. 2013) showed similarly overabundant CH 3 OH compared to other species. Very high abundances of CH 3 OH were reported in comets 252P/LINEAR (5.56 ± 0.66%, 4.62 ± 0.48% and 4.61 ± 0.68%; Paganini et al. 2019) and 45P/Honda-Mrkos-Pajdušáková (4.60 ± 0.76% and 4.41 ± 0.77%; Dello Russo et al. 2020). Bonev et al. (2021) also reported a high mixing ratio for CH 3 OH (3.03 ± 0.23%) in 46P on December 18, and suggested the possibility of an extended coma outgassing source, which is consistent with the broader spatial profile and higher GF (Figure 3, population, 46P showed enrichment in the mixing ratios of CH 3 OH and depletion in C 2 H 2 and H 2 CO (based on the 3σ upper limit), whereas C 2 H 6 , HCN, and NH 3 were consistent within the uncertainty with the mean value. We note that the CH 3 OH/H 2 CO ratio of >33 obtained for 46P is among the highest in comets (Dello Russo et al. 2016).
This comparison suggests that 46P is not adequately described as being enriched, depleted, or average in its volatile content, reinforcing the need for a greater sampling of comets in order to develop a more complete taxonomy of comets. Abundances from the 2018 apparition of 46P are an important addition to the ever-evolving repository of comets sampled at near-IR wavelengths and to the continually evolving compositional taxonomy based on these measurements.
Sensitive Upper Limit for Cyanoacetylene (HC 3 N)
Obtaining a stringent upper limit for molecules such as HC 3 N is challenging because of its low abundance in comets, the presence of emission lines from many other species that can potentially cause blending, and atmospheric extinction of some lines. The continuous spectral grasp of iSHELL in the L-Custom setting allows for the simultaneous sampling of many lines of HC 3 N. Combining multiple orders (spanning a frequency range of ∼3316-3338 cm −1 ) and all L-Custom observations on that night (a total of 4.715 hr of on-source integration time), coupled with the improved sensitivity of iSHELL, enabled us to derive a sensitive 3σ upper limit of HC 3 N/H 2 O of <0.007%. Obtaining a sensitive upper limit for an underrepresented molecule such as HC 3 N in a JFC (which are generally fainter and less productive than their Oort cloud counterparts) is important for discerning the lineage of the CN radical in comets. In some comets, the production rates, scale lengths, and spatial distributions of HCN and CN are not consistent (Fray et al. 2005, and references therein), implying that CN might be produced as a result of HC 3 N photolysis (Bockelée-Morvan & Crovisier 1985;Krasnopolsky et al. 1991). Similar 3σ upper limits (of the order of 10 −3 ) for HC 3 N were reported by Bockelée-Morvan et al. (1987) and Swade et al. (1987) in comet P/Halley observed at radio wavelengths. Crovisier et al. (1993) reported a 3σ upper limit of <0.000 19% in radio observations of comet Levy 1990 XX.
HC 3 N was first identified with radio observations of comet C/1995 O1 (Hale-Bopp; Bockelée- Morvan et al. 2000) with an abundance ratio (relative to H 2 O) of 0.02%. HC 3 N has since been detected in multiple comets at millimeter/submillimeter wavelengths through its pure rotational transitions, with abundances ranging from 0.002% to 0.068% (Bockelée-Morvan & Biver 2017). At near-IR wavelengths, HC 3 N has been sampled in comets that include C/2009 P1 (Garradd; Villanueva et al. 2012c) with a 3σ upper limit of <0.03% and comet 103P/Hartley 2 (Dello Russo et al. 2011) with a 3σ upper limit of <0.024%. Our 3σ upper limit (<0.007%) is Table 4 summarizes H 2 O production rates in 46P on different post-perihelion dates in 2018 December. These results suggest that H 2 O production rates obtained from iSHELL across other post-perihelion dates are consistent with our December 21 results, whereas those measured with NIRSPEC-2 on December 17 and 18 ) and with iSHELL on December 18 (Roth et al. 2021) are higher, despite only a marginal change in the geocentric and heliocentric distances of 46P between December 17 and 21 (R h varied from 1.057 to 1.061 au between these dates; Δ varied from 0.078 to 0.082 au). This might be an indication of variability in H 2 O production rates on a timescale of days (addressing possible variability in 46P during its 2018 apparition is the subject of future work). Using measurements from SOHO/SWAN, Combi et al. (2020a) reported that the overall H 2 O production rate in 46P decreased significantly throughout its 1997, 2002, 2008, and 2018 apparitions along with a large steepening of the change in H 2 O production rate with R h . While these measurements do not cover the dates listed in Table 4, the H 2 O production rate measured on December 22.976 (∼1.6 × 10 28 mol s −1 ) was significantly higher than our December 21 values of ∼6×10 27 mol s −1 with the caveat that the SOHO/ SWAN measurements may be more sensitive to an extended source of water in the coma than our near-IR measurements. With the exception of H 2 O, the production rates of all molecules we measured agree within 1-2σ uncertainty with values obtained from NIRSPEC-2 on UT December 17 and 18, whereas the mixing ratios we obtained agree within 1σ with values from NIRSPEC-2 with the exception that our CH 3 OH mixing ratio is higher.
Summary and Future Outlook
We utilized the exceptional observing conditions offered by 46P/Wirtanen's historical 2018 apparition to characterize its primary volatile content and to determine the spatial associations of species in the coma. Through our measurements we obtained the following results: (1) We obtained mixing ratios with respect to H 2 O (and C 2 H 6 ) of the primary volatiles H 2 O, HCN, C 2 H 2 , NH 3 , C 2 H 6 , and CH 3 OH on UT 2018 December 21 using the iSHELL spectrograph at the NASA IRTF. We obtained stringent 3σ upper limits for H 2 CO and also HC 3 N, a molecule that has been rarely studied in comets to date.
(2) We placed the chemical composition of 46P/Wirtanen in the context of comets observed at near-IR wavelengths and found that the comet does not follow the simple three-tiered taxonomic scheme, i.e., it is not systematically enriched, depleted, or averaged in the mixing ratios of its volatiles. (3) We were able to extract spatial profiles for co-measured HCN and H 2 O in multiple, independent slit orientations of the L-Custom setting. We also obtained spatial profiles for co-measured CH 3 OH and C 2 H 6 . Both gases exhibited broader profiles as compared to the profile from dust, whereas the CH 3 OH profile was broader as compared to co-measured C 2 H 6 . Spatial profiles of all of the gases were asymmetric and extended in the projected antisunward direction. (4) We compared production rates of HCN and H 2 O obtained from mutually perpendicular slit orientations to search for potential short-term variability and found that both of these species follow a similar trend and exhibit a lowlevel, short-term variability. (5) Our H 2 O production rates generally agreed with those obtained on other dates post-perihelion except for measurements with iSHELL on December 18 and those obtained with NIRSPEC-2 on December 17 and 18, which yielded higher Q(H 2 O), suggesting variability in the H 2 O production rate over a time span of a few days.
Additional observations obtained using iSHELL and NIR-SPEC during the exceptional 2018 apparition of 46P will enable future work testing for long-term variability in the comet, thus addressing the "snapshot" bias associated with cometary observations taken over a limited range of dates and/ or heliocentric distances. Comparisons between dates preperihelion, near-perihelion, and post-perihelion will test for potential seasonal effects in 46P (e.g., Hässig et al. 2015;McKay et al. 2015;Roth et al. 2018). Observations taken with sufficient geocentric velocity will enable the study of hypervolatiles CO and CH 4 by shifting their lines away from their telluric counterparts, increasing the sample size of these underrepresented molecules in studies of JFCs (e.g., DiSanti et al. 2017;McKay et al. 2021).
Data for this study were obtained at the NASA Infrared Telescope Facility (IRTF), operated by the University of Hawai'i under contract NNH14CK55B with the National | 8,541.4 | 2021-02-01T00:00:00.000 | [
"Physics",
"Environmental Science"
] |
Energy, Memory, and Runtime Tradeoffs for Implementing Collective Communication Operations
Collective operations are among the most important communication operations in shared and distributed-memory parallel applications. In this paper, we analyze the tradeoffs between energy, memory, and runtime of different algorithms to implement such operations. We show that each existing algorithms have varying behavior and no algorithm exists that is optimal in all three regards. We also show examples where of three different algorithms solving the same problem, each algorithm is best in a different metric. We conclude by posing the challenge to explore the resulting tradeoffs in a more structured manner.
Introduction
Collective operations are among the most important communication operations in shared-and distributed-memory parallel applications.In this paper, we analyze the tradeoffs between energy, memory, and runtime of different algorithms that implement such operations.We show that existing algorithms have varying behavior and that no known algorithm is optimal in all three regards.We also demonstrate examples where of three different algorithms solving the same problem, each algorithm is best in a different metric.We conclude by posing the challenge to explore the resulting tradeoffs in a more structured manner.The performance of collective operations often directly affects the performance of parallel applications significantly.Thus, many researchers designed fast algorithms and optimized implementations for various collective communication operations.The newest version of the Message Passing Interface (MPI) standard [30], the de-facto standard for distributed-memory parallel programming, offers a set of commonly-used collective communications.These operations cover most use-cases discovered in the last two centuries and we thus use them as a representative sample for our analyses.In general, collective patterns reflect key characteristics of parallel algorithms at large numbers of processing elements, for example, parallel reductions are used to implement parallel summation and alltoall is a key part of many parallel sorting algorithms and linear transformations.Recent hardware developments in large-scale computing increase the relative importance of other features besides pure execution time: energy and memory consumption may soon be key characteristics.Minimizing energy consumption is especially important in the context of large-scale systems or small battery-powered devices.Memory consumption is important in systems that offer hardware to support the execution of collective operations.Here, we assume state-of-the-art offloaded execution models (e.g., [23]) where communication schedules are downloaded into the network device that operates with severely limited resources.The increasing availability of such offload architectures motivates us to model the memory consumption of offloaded collective communications.In this work, we provide an overview and a classification of state-of-the-art algorithms for various collective operations.Our report is not meant to cover all possible algorithms for implementing collective operations of which there are far too many to fit in the space limitations of this short article.Instead, our classification and analysis shall establish a discussion basis for the fundamental tradeoffs between runtime, energy, and memory consumption.For each algorithm, we derive analytic models for all three key metrics.Our theoretical study shows, for example, that reducing the number of messages sent may reduce the performance but, at the same time, decrease energy and memory consumption.Furthermore, our analysis of existing algorithms allows us to point out gaps and define future research topics.In general, we argue for a more general design mechanism that considers the multi-objective optimization problem for time, energy, and memory.algorithms and algorithm design and simplifies the optimization problems in the context of real applications.However, models need to capture the main parameters that determine the performance of the implementation on the target architecture.Several such models for the performance of communication algorithms have been designed.The most prominent ones belong to the LogP family while many other models can either be expressed as subsets of LogP (e.g., alpha-beta) or have a similar character but increase the complexity of the parameters, (e.g., PlogP [25]).For the purpose of this paper, we use LogGP [1] as a model for the execution time because we believe that it expresses the most relevant architecture parameters while still allowing elegant formulations of optimization problems.We now proceed to discuss several communication technologies and mechanisms in the context of collective algorithms and the LogGP model.
Message Passing Message Passing is the basis of the design of LogGP.Here, L denotes the maximum communication latency between two endpoints.The parameter o represents the constant CPU overhead for sending or receiving a single message, e.g., the call to the message passing library.The parameter g is the equivalent overhead for sending or receiving a message caused by the network interface.The maximum of o and g limits the small-message injection rate, an important parameter of current interconnection networks.The model also implies that only L/g messages can be in flight between two processes at any time.The parameter G models the cost per injected Byte at the network interface, this is the reciprocal bandwidth.Finally, the number of processes is represented by P.
Noncoherent Shared Memory Noncoherent shared memory systems as used in remote direct memory access (RDMA) communications or for the data transfer between CPUs and GPUs are similar to message passing systems.The typical programming interface to such systems are put and get operations that store into or load from remote memory.The main difference to message passing is that the receiver is not explicitly involved and thus o is not charged at the destination.However, all other parameters remain.For the purpose of this article, we ignore this discrepancy with the traditional LogGP model.
Coherent Shared Memory Coherent memory systems are slightly more complex.Coherence between multiple caches is often guaranteed by a cache coherence protocol operating on blocks of memory (e.g., cache lines).The protocol ensures that each block always holds exactly one value in the whole system.Such protocols often allow for multiple readers (i.e., multiple identical copies of the block) but each write access requires exclusive ownership.Since all communication is implicitly performed during standard load/store accesses, performance characteristics are more complex and LogGP is only an approximate model for such transfers in the general case.Yet, if the amount of sharing is low (i.e., data is transferred from each writer to a single reader), then LogGP can model the performance characteristics accurately.Ramos and Hoefler [35] provide a detailed explanation of the intricacies of modeling for cache-coherent systems and related work.
Network Offload Architectures Some newer network architectures such as Portals IV [7] or CORE-Direct [14] allow to offload collective operations to the network device.This enables faster execution (messages do not need to travel to the CPU) and isolation (computations on the CPU and collective communications do not interfere and can progress independently).This reduces the impact of small delays on the CPU, often called system noise [19,47] and allows asynchronous execution of nonblocking collective operations [17].Communications are performed using messages and can thus be modeled using the LogGP model.Offload devices have limited resources to store communication schedules and we model the memory consumption of each algorithm in such devices.
Runtime Models We will use LogGP to model the approximate runtime of the algorithms on all target systems.Furthermore, in order to keep the models interpretable, we set o > g and assume that the LogGP CPU overhead o is also charged in offloading devices so that we never need to charge g (o for offloading devices is most likely much smaller than o on a general-purpose CPU).We also assume that the cost to transmit a message of size s is T msg = L + 2o + sG.We report the maximum finishing time that any process needs.
Energy Models Energy consumption can generally be split into two components: dynamic and static energy [28,29].The static energy is the leakage energy during the operation of an electronic device, regardless of the device's activity.Dynamic energy represents the energy that is consumed by activities such as computation, sending and receiving messages, or memory accesses.For the purpose of our analysis, we assume that computation and local memory operations (e.g., shuffling data) are free.These assumptions are similar to the LogGP model which also only considers network transactions.To model the energy for communication, we assume that each message consumes a fixed energy e.This represents the setup cost to send a zero-byte message and is similar to o and g in the LogP model, we do not separate CPU and network costs because energy consumption is additive and can thus be captured by a single parameter.Furthermore, we denote the energy required to transport each byte from the source's memory to the destination's memory as E, similar to LogGP's G parameter.This model assumes a fully connected network such that the energy consumption does not depend on the location of the source and destination.Thus, ignoring local computations, the total energy consumption of a collective operation is L = T • P + D where T is the runtime (e.g., modeled by LogGP), P is the leakage power, and D is the dynamic energy model.In our analysis, we derive dynamic energy models for the overall operation (the sum of all dynamic energies consumed at each process).
Memory Models Similarly, we derive a simple model for capturing memory overheads for offloading devices.To offload a collective operation to a network device, one copies some state (e.g., a set of triggers [7] or a set of management queue entries [14]) that models the execution schedule to the device.The device then generates messages based on arriving messages from other processes and the local state without CPU involvement.Here, we assume that each sent message has to be represented explicitly as a descriptor in the offloaded operation.We assume that these descriptors have the constant size d.This descriptor size does not depend on the size of the actual message to be sent or received.We report the maximum memory needed by any process.
Implementation Strategies for Collective Operations
Instead of describing algorithms for specific collectives, we discuss common algorithms to implement collective operations.For each of these algorithms, we develop runtime, energy, and memory overhead models.We then proceed to briefly describe each of MPI's collective operations and discuss how the algorithms can be used to implement it.This method reflects the state-of-the-art in which collective libraries often implement a set of algorithm skeletons and match them to particular collective implementations [12].
Existing Collective Algorithms
Each collective algorithm exploits a particular virtual topology, i.e., a directed graph representing message propagation between processes.We distinguish between three classes of collective algorithms: (1) trees in various shapes and forms, (2) distribution algorithms, and (3) specialized algorithms.
Trees can be used to implement any collective communication.In these algorithms, processes are arranged in a tree shape and messages are flowing from parents to children or vice versa, depending on the collective operation.Some collectives require personalized data (e.g., scatter/gather) such that the messages grow or shrink as they are sent along the tree while other operations either replicate or reduce the data (e.g., reduce, broadcast) leading to constant-size messages.Trees are often used for communicating small messages because in most cases, leave processes only receive messages and are thus not able to use their own send bandwidth.Simple pipelines (i.e., degenerated regular trees) that minimize the number of leaves often provide excellent and simple solutions for very large message sizes.We will also discuss double-tree algorithms that improve the latency over such simple pipelines.While trees can be used to implement any collective, they may incur a higher cost if they need to be combined.For example, unrooted collectives where all processes receive the result (e.g., allreduce) require communication up and down a tree.These communications can be efficiently implemented using distribution patterns that can also be seen as intertwined trees rooted at each process.A third class of specialized algorithms takes advantage of either specific hardware properties such as topology or multicast semantics or specific semantics of the collective problem.We now proceed to describe existing tree algorithms followed by distribution patterns.We conclude this subsection by referencing several specialized algorithms.A simple lower bound for the runtime of all algorithms is Ω(o log P ) + sG because data needs to reach all processes and data must be sent at least once.Similarly, a lower bound to the energy consumption is (P − 1)(e + sE) and a lower bound for the memory consumption is d because each process must receive the data once.We will provide exact and simplified models for each algorithm; the simplified models use mixed asymptotic notation for s → ∞ and P → ∞ to facilitate highest intuition.
Flat Tree Algorithms
We start with the simplest algorithm for collective operations-a flat tree (FT) [25] in which a single processor sends messages to all destinations directly.Figure 1a provides an example of such a tree for a non-personalized or personalized operation.The gray squares at communication edges denote the communicated data of size s.The annotations in this and the following figures denote the finishing times of the processes in the example.In all figures, we assume that data is sent to the children of a process in the order drawn, beginning with the leftmost.Though simplicity of the algorithm is a clear advantage, b) Binary tree (non-personal).
Regular Trees
A widely used topology for rooted collective operations is based on regular trees.In such trees, processes perform communications concurrently and thus achieve better performance than flat trees.Trees are called regular when each inner process has the same number of child nodes.We call trees with k such children per process k-ary trees; in this sense, flat trees can be seen as regular trees with k = (P − 1).
To illustrate the concept, Figures 1b and 1c show non-personalized and personalized communications along a binary tree, respectively.General k-ary trees (KT) require log k (P ) total parallel communication steps.In particular, the time of a k-ary tree algorithm for a non-personalized operation is . The dynamic energy model for the same algorithm is D KT = (P − 1)(e + sE) = P (e + sE) − O(s).The storage requirements for k-ary trees are M KT = kd because each process sends to at most k children.
For personalized communications on full trees (which we mark with a tilde above the virtual topology type, e.g., KT), the communication time can be modeled with T KT = log k P (L + o(k + 1)) + sG Here, one can simply count the packet along the rightmost path assuming that messages are sent to each left child first.The dynamic energy consumption is D KT = e(P − 1) + sE • k log k P log k P −1 i=0 ( log k P − i) 1 k i ≈ P (e + sE log k P ) + O(sP ) (for large k) and the memory consumption is M KT = kd as in the nonpersonalized case.Pjesivac-Grbovic et al. [33] use splitted binary trees (SB) to accelerate non-personalized communications.They use a normal binary tree but instead of distributing the whole message along each tree edge, the message is divided into two parts.The first part is sent to the nodes of the left subtree of the root, while the second part is distributed among nodes of the right subtree of the root.Once a node received the data and sent it on to its children, it also sends it to its own counterpart in the other subtree.The approximate time of the splitted binary tree algorithm is a combination of the normal binary tree non-personalized algorithm with s 2 data and a full exchange: . The estimated dynamic energy for this algorithm is D SB = 2(e+ s 2 E)(P −1) = P (2e+sE)−O(s) while the memory model is M SB = 3d.
Irregular Trees
While simplicity of regular tree algorithms is a strong advantage and they are asymptotically optimal for small messages, they are generally not strictly optimal.For example, Karp et al. [24] demonstrate that Fibonacci trees are optimal for single-item broadcasts and thus non-personalized tree communication in the LogP model.Figure 2a shows the optimal tree construction, each node is labeled with its arrival time and the best broadcast tree for P processes is constructed from the P nodes with the smallest labels.[24] (assuming g = 1, o = 0, G = 0).For personalized tree communication, Alexandrov et al. [1] as well as Iannello [22] show that in the LogGP model the usage of irregular trees for virtual topologies allows to achieve better performance.Both algorithms are hard to derive and have not been used in practice to the best of the authors knowledge.A much simpler class of irregular trees that improves over regular trees are k-nomial trees.Here, we discuss the most-used binomial tree (BT) (k = 2) as example and we assume that P is a power of two.The runtime of non-personalized binomial trees is T BT = (L + 2o + sG) log 2 P , their dynamic energy consumption is D BT = (P −1)(e+sE) = P (e+sE)−O(s), and their memory use is M BT = d log 2 P at the root process.The runtime of personalized binomial trees is T BT = (2o+L) log 2 P +sG(P −1) = (2o+L) log 2 P +sGP −O(s), their dynamic energy consumption is D BT = e(P −1)+sE P 2 log 2 P = P e + sE P 2 log 2 P − O(1), and their memory consumption is M BT = d log 2 P .Figures 2b and 2c show examples for personalized and non-personalized binomial trees.Binomial tree algorithms are commonly used for small messages; for larger messages, more complex algorithms provide better results (see, for example, various algorithms proposed by Van de Geijn et al. [5,41,44]).We will now discuss pipelined trees that have a similar goal to improve bandwidth.
Pipelined Tree Algorithms
Pipeline algorithms are based on the idea to divide a large message into multiple small pieces and to distribute these pieces among processors in a pipeline fashion [33,38].Here, different virtual topologies can be utilized for transmitting the data.Linear pipelines as illustrated in Figure 3a are simplest while tree pipelines as illustrated in Figure 3b allow to reduce latencies.As before, our models assume that data is sent down the left pipe first and then alternating.We also assume in this case that the send and receive overheads (o) can be charged simultaneously (e.g., in a multicore environment).Pipelines are often used as building blocks for more complex algorithms [40].For example, in a non-personalized setting, the runtime of a pipelined binary tree (PBT) algorithm can be estimated as b) Butterfly (non-personal).
Double Trees
While pipelined trees improve the overall bandwidth utilization, they are still not optimal.The reason for this is that the leaves in the tree never transmit messages and thus do not contribute their bandwidths.
To use the leaves' bandwidth, one can employ two trees with different structure (leave nodes) such that each node sends eventually.Sanders and Träff [39,40] demonstrate such a two-tree virtual topology that achieves full bandwidth, extending and simplifying an earlier algorithm [45].The authors utilize two trees so that the interior nodes of the first tree correspond to the leaf nodes of the second tree and vice versa (see Figure 3c).They also describe a scheduling algorithm to define from which parent node the data should be received at the current step and to which child node the data should be forwarded.The approach only applies to non-personal communications.The runtime of this double tree (DT) algorithm is and the memory consumption for this approach is M DT = 2dN .This algorithm concludes our treatment of successively more complex algorithms for rooted collective communications.We now proceed to discuss distribution patterns such as direct send, dissemination, and butterfly algorithms for unrooted collective communications.
Direct Sends
In unrooted collectives, typically all processes receive some data from every other process, either personalized or reduced.This can be achieved by a direct send (DS) topology among all processes.This is similar to a flat tree rooted at each process.The runtime for the personalized as well as the nonpersonalized variant is , the energy consumption is D DS = P (P − 1)(e + sE) = P 2 (e + sE) − O(P s), and the memory consumption at each process is M DS = (P − 1)d. Figure 4a illustrates the DS scheme.
Dissemination and Butterfly Algorithms
The well-known Butterfly (BF) graph [8] implements a binary scheme to quickly exchange data among all processes which can be applied if P is a power of two.The dissemination approach [15] generalizes this scheme to arbitrary numbers of processes.Here, we limit ourselves to the simpler case where P is a power of two.In the Butterfly pattern, data is communicated between processes with exponentially growing distances, i.e., in the k-th step, nodes at distance 2 k from each other exchange data.Thus, log 2 P steps are required to complete the communication.
The non-personalized version of butterfly executes in time T BF = (2o + sG + L) log 2 P , with a dynamic energy consumption of D BF = (e+sE)P log 2 P , and with a memory consumption of M BF = d log 2 P .The well-known recursive doubling algorithm [44] as well as the Bruck algorithm [9] implement a personalized variant of the Butterfly pattern.If we ignore local data shuffles, then the runtime of this personalized algorithm is T BF = (2o + L) log 2 P + Gs(P − 1) = (2o + L) log 2 P + sGP − O(s).Its energy consumption can be modeled as D BF = eP log 2 P +sE(P −1)P = P (e log 2 P +sEP )−O(sP ) and its memory requirement is M BF = d log 2 P .Each model increases with a multiplicative constant if the number of processes is not equal to a power of two [44].Figures 4b and 4c illustrate the Butterfly pattern with eight processes in non-personalized and personalized configurations, respectively.
More Specific Algorithms
Several researchers developed algorithms that are tuned to particular properties of the machine.For example, several algorithms that specialize to the network topology exist.Some others utilize special hardware features.We provide some examples here but this list is not meant to be complete.
Hardware-specific algorithms Ali et al. [2] provide algorithms for collective communications on the Cell B.E. chip, Panda et al. demonstrate a series of algorithms tuned to InfiniBand networks and RDMA systems [27,42], and Almasi et al. [3] show optimization techniques for the BlueGene/L Torus network.
Topology-aware algorithms There is a class of algorithms that take the network topology and congestion into account.For example, Sack and Gropp [36,37] introduce a congestion-aware model for network communication.In the same articles they propose a recursive-doubling distance-halving algorithms for the allgather and reduce scatter collectives for Clos and Torus networks.Payne et al. [32] describe several algorithms on how to implement some reduction operations on a 2-dimensional mesh and Barnett et al. [6] develop a broadcasting algorithm for the mesh topology.Watts and Van de Geijn [48] show a pipelined broadcast for mesh architectures and Chan et al. [10] show how to utilize all available links in Torus networks.
Using Unreliable Multicast Hardware Other algorithms base on special hardware features such as multicast [11].Multicast packets can be lost and in order to guarantee reliable transmission, recovery algorithms are necessary.One such recovery protocol is presented by Hoefler et al. [20].Their protocol combines InfiniBand (or Ethernet) unreliable multicast with reliable point-to-point messages to achieve a with high probability constant-time (O(1) complexity) broadcast operation.Using these special hardware features allows us to circumvent the logarithmic lower bound.
Implementing Collective Operations
We now briefly discuss how the modeled algorithms can be combined to implement collective operations.We follow our previous categorization into rooted collectives implemented by personalized or non-personalized trees and unrooted collectives implemented by personalized or non-personalized distribution algorithms.
Rooted Collectives
Table 1 shows an overview of the tradeoffs in various personalized and non-personalized tree algorithms.We use the previously introduced subscripts as abbreviation: FT for flat trees, KT for k-ary regular trees, BT for binomial trees, PBT for pipelined binary trees, and DT for double trees.Abbreviations with a tilde on top, e.g., FT, denote personalized versions of the algorithms.Broadcast/Reduce Broadcast and reduce are structurally similar but very different in their semantics.
In a broadcast, a single message of size s is distributed (copied) from a designated root process to all other P − 1 processes.In a reduction, each process contributes a message of size s.The associative (and often commutative) operator ⊕ combines all P messages into a single result of size s at a designated root process: Both collectives can be implemented with non-personalized tree algorithms.Binomial and binary trees are commonly used for implementations of small-message broadcast and reduction [43,44].Largemessage operations can be implemented with double trees.Our models in Table 1 show that, for nonpersonalized communications, double-trees are the best contenders in terms of runtime (for all s and P ).However, they require more dynamic energy and memory due to the pipelining of messages.The exact number of additional messages sent depends on the number of pipeline segments N , which in turn is chosen based on the LogGP parameters and s.If the memory is constrained, then pipelining would be limited, possibly leading to suboptimal performance.All non-pipelined algorithms are work-optimal and thus consume the minimal energy.Regular k-ary trees have only constant memory overhead and are thus best for execution in very limited offload settings.
Scatter/Gather In a scatter, a designated process (root) sends personalized messages, each of size s, to P − 1 other processes.In a gather, the root process receives different messages, each of size s, from P − 1 processes and stores them locally.Both collectives can be implemented using personalized tree algorithms.For example, Binomial trees have been used to perform both, scatter and gather [4].
Our models in Table 1 show that, for personalized communications with small P , flat trees are best.Other regular and irregular trees reduce the latency to a logarithmic term and thus benefit large P but they are not work-optimal and send multiple messages multiple times and thus harm large s.For large s and small P one can use linear pipelines to utilize the bandwidth of all processes as discussed before.Alexandrov et al. [1] formulate the condition for an optimal gather tree in LogGP but to the best of the authors' knowledge, no practical algorithm is known that achieves this bound.In terms of energy, we remark that all tree algorithms increase dynamic energy consumption significantly in comparison to a flat tree.Memory consumption is similar to the non-personalized algorithms where the pipelining versions may dominate and k-ary regular trees are minimal for small k.
Unrooted Collectives
Table 2 shows an overview of various distribution algorithms and trees that can be used for unrooted collectives.We use the previously defined abbreviations for distribution algorithms: DS for direct send and BF for Butterfly.We compare these to implementations with two combined trees, such as a k-ary tree to reduce data towards a root followed by a second k-ary tree to broadcast data to all processes, which we denote as 2xKT.We only combine trees of similar nature and show some select examples even though combinations of any two trees can be used in practice.
Allreduce/Barrier Allreduce is similar to reduce in that all processes contribute a message of size s and r = m 1 ⊕ m 2 ⊕ m 3 ⊕ • • • ⊕ m P is computed.However, as opposed to reduce, the final r will be distributed to all processes.The Barrier collective guarantees that no process completes the operation before all processes called it.It is similar to allreduce with a zero-sized message and is commonly implemented using the same algorithms.Both collectives can be implemented using two trees, a reduction to a root followed by a broadcast to all processes as in [21].However, a more time-efficient implementation would be non-personalized distribution such as the Butterfly pattern [31,34,49].The models in Table 2 suggest that, for non-personalized communication, Butterfly patterns are fastest for all s and P .However, their dynamic energy consumption is asymptotically higher than the combination of two trees.Combining two pipelined trees can improve tree performance for large messages.Butterfly consumes logarithmically growing memory at each node, two k-ary trees could reduce this memory consumption to a constant.
Allgather/Alltoall Allgather is similar to a gather but the result is distributed to all processes.A simple but slow implementation would be a gather followed by a broadcast.In alltoall, each process has P messages of size s.Each of these messages is sent to another target process, so that each process sends and receives P −1 messages (and an implicit message to itself).Direct send or Bruck's algorithm (using a personalized Butterfly communication) can be used to implement such collective operations.In addition, these operations can be implemented using personalized trees that gather the result to a single node and broadcast it to all nodes.The models in Table 2 suggest that, for personalized communication, Butterfly patterns are fastest for all small s and large P but quickly become inefficient with growing s.Direct sends are most efficient for large s and small P .Tree patterns are always more expensive in terms of runtime and energy consumption than distribution patterns.However, tree patterns can provide a constant memory consumption while other patterns have linear or logarithmic memory requirements in P .
Other Collectives
Scans/Reduce Scatter In prefix scan operations, each process specifies a message of size s and received the partial sum of all messages specified by processes with a lower id than itself.I.e., the process with id k receives A reduce scatter performs a reduction of a message of size P s specified at each process.Then, messages of size s are scattered to each P process.Both steps are performed together so that algorithms can optimize them as a single step.Reduce scatter can be implemented by a simple reduce followed by a scatter and scans can be implemented by rooting a different reduction tree at each process.However, merging the trees can lead to substantial performance improvements for reduce scatter [22] as well as scans.
Neighborhood Collectives MPI-3 introduces neighborhood collective operations [18] where the programmer can specify any communication pattern and in this way build his own collective communication operation.For example, one can express all non-reduction collective operations as neighborhood collectives.However, the expressiveness of this operation comes at the cost of optimizability.Thus, there are no generic optimization algorithms for these operations yet.
For the purpose of the analyses in this paper, we ignore irregular/vector collective operations.
Discussion and Open Problems
We now conclude our theoretical analyses with a brief summary of the lessons learned followed by an outlook to important open problems and future research directions in the area of optimizing collective communications.
Approaching the Optimal
Some systems combine existing algorithms using an auto-tuning approach for algorithm selection [46].Pjesivac-Grbovic et al. [33] for example utilize decision trees to select the best algorithm at runtime while Faraj and Yuan [13] use collective building blocks to tune them to a particular network topology.Yet, all these approaches are not strictly optimal.Selecting different algorithms and parameters for them automatically may yield significant speedups over any single algorithm.However, the problem of attaining the best bounds in terms of latency and bandwidth in the full spectrum of possible datasizes s and process numbers P remains open for many personalized communication algorithms.
Problem 1: Runtime-optimal collective algorithms We identified four essential classes of algorithms that need to be developed to attack this problem: trees with personalized and non-personalized data and dissemination mechanisms with personalized and non-personalized data.While several discrete algorithms exist for both, we expect that a general latency-and bandwidth-optimal solution will significantly improve upon the state-of-the-art.
Energy, Memory, and Runtime Tradeoffs
In our analysis, we identified several problems where algorithms with a smaller runtime consume more energy than algorithms with a larger runtime and vice-versa.In addition, we found that the best algorithms are generally not space optimal.This means that offloading devices with strictly limited resources may not be able to use the best known algorithms.To illustrate the tradeoff, we plot our models for a set of parameters chosen to represent an InfiniBand network architecture.These parameters are approximate and vary across installations, however, they provide insight into the tradeoffs between energy consumption and runtime.
As LogGP parameters, we use previously reported values measured for InfiniBand using MPI: L = 6 µs, o = 4.7 µs, G = 0.73 ns/B [16].Kim et al. [26] model the memory read and write power consumption per MTU packet (2048 B) per switch as 8.1 pJ.We use this data to approximate the NIC power consumption assuming that each Byte in a packet is read and written once and a single packet is needed to send a 0-Byte messages.Thus, we assume e = 16.5 pJ, E = 8.1 nJ/B, and a static NIC chip power of P = 0.5 W for our model.For the memory overhead, we assume that each descriptor stores a pointer, an offset, a trigger counter, and a target address.We assume that each of these fields is represented by a 64-Bit number, thus d = 32 B. Figure 5 shows one particular example for a non-personal distribution communication that could be used to implement allreduce.We compare only three different options: two 2-ary trees, two binary trees, and Butterfly to instantiate the intuition from Table 2 with real-world parameters.The runtime model shows that the Butterfly algorithm is by far the best option followed by the binomial tree and the binary tree.However, in the energy model, Butterfly is far worse than both, binomial and binary trees for large numbers of processes.In fact, its dynamic energy consumption is always higher than the trees but for small process counts, the performance advantage reduces the static energy consumption in comparison to the trees.The memory model shows that the regular binary tree has the lowest, even constant memory consumption per process followed by Butterfly and binary tree.We observe that depending on the target metric, each of the three algorithms can perform best: Butterfly has the best performance, binomial trees use the least energy, and binary trees require the least memory in the network interface.
Problem 2: Energy-optimal collective algorithms Finding the energy-optimal algorithm for a given set of parameters (the dynamic energy consumption with e and E and the static power consumption P ) for each collective operation remains an open and challenging topic as it requires to optimize time to minimize static energy in combination with the dynamic energy consumption.The optimal algorithm in terms of dynamic energy is often the simple linear algorithm that would result in excessive static energy consumption.The exact tradeoff between these algorithms is determined by the energy and runtime models as well as the energy and runtime parameters.
Problem 3: Pareto-optimum for energy and runtime If both previous problems are attained, one could phrase the Pareto-optimal region for the energy consumption versus the runtime.This allows to optimize the runtime in energy-constrained systems as well as the energy consumption in real-time systems.In power-constrained settings, one could also limit the dynamic energy consumption to stay within certain limits.
Problem 4: Optimal neighborhood collective operations The problem of optimizing neighborhood collectives is not well understood.Since they can represent any arbitrary collective operation, an optimal solution (in terms of energy consumption or runtime) would also yield optimal solutions for all MPI collectives.
Tradeoffs for Offload Architectures
Collective offload architectures often offer limited space on the device.The optimization problem (in terms of power and energy) can now be formulated under the restriction of limited space on the device.
Our models show that each algorithm can be implemented with constant space per device.However, we also show that the necessary algorithms are slower than the best known algorithms.Interestingly, the slowdown of the constant-space algorithms seems to be limited to a factor of two compared to the best known practical algorithm.The difference may be higher when compared to close-to-optimal solutions such as Fibonacci trees and optimal personalized schedules.We also found that many best known algorithms utilize pipelining, a technique where the memory consumption grows with the size of the sent data.Designers of offload architectures may consider to support pipelining of N messages with a constant-size operation.In addition, one could allow to offload simple programs to the network card that generate sends on the fly without pre-programming everything at initialization time.
Problem 5: Optimal memory-constrained collectives The problem to determine the runtime-or energy-optimal schedule under the constraint of space on the offloading device may be important to support future collective offload architectures.
Conclusions
This study provides an overview of existing collective algorithms and implementations.We describe the most common algorithms for implementing collective operations in practice.However, our list is not meant to be exhaustive.We classify these algorithms into three groups: tree-shaped algorithms, distribution algorithms, and optimized schedules.The first two groups base on virtual topologies which can be used in a personalized and non-personalized setting.The last group includes optimized and specialized messaging schedules for particular cases.We derive runtime, energy, and memory consumption models for each algorithm and compare the algorithms within each group.Our models and comparisons provide fundamental insights into the nature of these algorithms and various tradeoffs involved.For example, we show that runtime-optimal algorithms always exhibit non-optimal dynamic energy consumption.In the case of non-personalized distribution, the energy consumption of the fastest algorithm is asymptotically higher than the consumption of an algorithm that is only a slower by a constant.We also show that optimal algorithms always require more memory in offload devices than other algorithms that are only slower by a constant.This provides interesting optimization problems to find the best tradeoffs between runtime, energy, and memory consumption in offload devices.In our theoretical study, we identified several research problems and open questions.We believe that it is most important to understand the tradeoff between energy and runtime and possibly memory consumption in offload devices.It is also interesting to design offloading protocols and devices that require minimal storage in the network architecture.In addition, a generic framework to design close-to-optimal schedules for predefined as well as neighborhood collective operations would be a valuable contribution to the state of the art.
Figure 1 .
Figure 1.Flat and binary trees (k = 2) with seven processes (P = 7) in personal and non-personal configurations.
Figure 2 .
Figure 2. Optimal Fibonacci trees and binomial trees with eight processes (P = 8) in personal and non-personal configurations.
Figure 3 .
Figure 3. Non-personalized pipelined trees and double trees with seven or eight processes.
Figure 4 .
Figure 4. Different distribution algorithms for unrooted collectives.Only one data packet is shown at each stage for readability.
Karp et al. also state that, if f n and f n+1 are the consecutive members of the generalized Fibonacci sequence s.t.f n < P −1 < f n+1 , the lower bound for broadcasting s items is n+1+L+(s−1)−
Table 1 .
Overview of tree algorithms for rooted collectives (minor terms are dropped, lg stands for log 2 ).
Table 2 .
Overview of algorithms for unrooted collectives (minor terms are dropped, α | 8,968.4 | 2014-07-09T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Application of the Variety-Generator Approach to Searches of Personal Names in Bibliographic Data Bases-Part 2. Optimization of Key-Sets, and Evaluation of Their Retrieval Efficiency
Keys consisting of variable-length chamcter strings from the front and rear of surnames, derived by analysis of author names in a particular data base, am used to provide approximate representations of author names. When combined in appropriate mtios, and used together with keys for each of the first two initials of personal names, they provide a high degme of discrimination in search. Methods for optimization of key-sets are desc1·ibed, and the perform ance of key-sets varying in size between 150 and 300 is determined at file sizes of up to 50,000 name entries. The effects of varying the proportions of the queries present in the file are also examined. The results obtained with fixed-length keys are compared with those f01' variable-length keys, showing the latter to be greatly superior.
INTRODUCTION
In Part I of this series the development of variety generators, or sets of variable-length keys with high relative entropies of occurrence, from the initial and terminal character strings of authors' surnames was described. 1heir purpose, used singly or in combination, is to provide a high and constant degree of discrimination among personal names so as to facilitate searches for them.In this paper the selection of optimal combinations of the keys and evaluation of their efficiency in search are described.The performance of combined key-sets of various compositions is determined at a range of file sizes and compared with fixed-length keys.In addition, the extent of statistical associations among keys from different positions in the names is determined.
BALANCING OF KEY-SETS
The relative entropies of distribution of the first and last letters of the surnames of authors in the file of 100,000 entries from the INSPEC data base differ significantly, the former being 0.92 and the latter 0.86.As a result, a larger key-set has to be produced from the back of the surnames to reach the same value of the relative entropy as that of a key-set of given size from the front of the surname.For instance, the value of 0.954 is reached by a key-set comprising 41 keys from the front of the name, but a set of 101 keys from the back is needed to attain this value.It seemed reasonable to assume that keys from the front and rear should be combined in different proportions in order to maximize the relative entropy of the combined system, and that their proportions should reflect the redundancies of each distribution (redundancy = 1 -Hr).In order to test this, a series of combined key-sets of different total sizes was produced, in which the proportions of keys were varied around the ratio of the redundancies of the first and last character positions, i.e., ( 1 -0.92): ( 1 -0.86), or 8:14.The relative entropies of the name representations provided by combining these key-sets with keys for the first and second initials were determined by applying them to the 50,000 name file, and the entropy value used to determine the optimal ratio of keys.In one case, the correlation between the value of the relative entropy and retrieval efficiency, as measured by the precision ratio, was also studied, and shown to be high.
The sizes of the combined key-sets studied were 148 and 296, with an intermediate set of 254 keys.The values of 148 and 296 were chosen in view of the projected implementation in the serial-parallel file organization. 2his relates the size of the key-set to the number of blocks on one cylinder of a disc.(The 30Mbyte disc cartridges available to us have 296 blocks per cylinder.)Otherwise the choice of key-set is arbitrary, and can be varied at will.The minimum key-set size is 106, consisting of 26 letters each for the first and last letter of the surname, and 27 ( 26 letters and the space symbol) each for the first and second initials.The numbers of n-gram keys ( n ::::,.2) required for the key-sets numbering 148, 254, and 296 in size are .thus 42, 148, and 190.Full details are given of the composition of the first and third of these sets.
A slight refinement to key-set generation was employed to ensure as close an approximation to equifrequency as possible, especially with the smallest key-sets.Precise application of a threshold frequency may occasionally result in arbitrary inclusion of either very high or very low frequency keys.Thus, if almost all the occurrences of a longer key are accounted for by a shorter key (as with -MANN and -ANN), only the shorter n-gram is included.
OPTIMAL SET OF 148 KEYS
The number of n-gram keys ( n ::::::, .2) to be added to the minimum set of 106 keys is 42, the presumed optimum proportion being 8:14, which implies about 16 keys from the front of the name and 26 from the back.In order to examine the relationship between the ratio of keys from the front and rear of the surname and the relative entropy of the combined sets, the ratios were varied at intervals between 1:1 and 1:3 so that the numbers of n-grams varied from 21 and 21 to 11 and 31 respectively.For each ratio the keys were applied to the 50,000 name entries, and the distribution of the resultant descriptions determined.The ratios, the number of n-gram keys, and the relative entropies of the distributions are shown in Table 1.The maximum value of the entropy is taken to be log250,000.In this case the balancing point, with the key-set including 16 n-gram keys from the front and 26 from the back, corresponds with the ratio of the redundancies of the first and last letters of the surnames.Table 2 shows the composition of the optimal key-set of 148 keys, while Table 3 gives the distribution of the name representations compiled from the combined key-set, and its corresponding relative entropy.
OPTIMAL SET OF 296 KEYS
A similar procedure to that used for the optimal148-key key-set was also applied in this instance.Here the ratios of front and rear n-gram keys varied from 57 and 133 to 69 and 121 respectively.For each of the sets chosen, the distributions of the entries resulting from application of the combined key-sets to the file of 50,000 names were determined.These showed virtually no difference in terms of the relative entropy alone, although the total number of different entries differed slightly between keysets, and the highest value was used to choose the optimal set, detailed in Table 4.The range of combinations studied is shown in Table 5, and the distribution of the entries for the optimal set is given in Table 6 In this instance, the ratio of n-gram keys from the front and back of the surnames has been displaced from the ratio of the redundancies of the first and last characters of the surnames, i.e., 8:14 (1:1.7).Here the ratio is roughly 1:2.This is undoubtedly due to the fact that the relative entropies of key-sets from the back of the surname increase less rapidly than those of key-sets from the front, and hence larger sets must be employed.
EVALUATION OF RETRIEVAL EFFECTIVENESS
The keys in the optimized key-sets represent name entries in an approxi- mate manner only, so that when a search for a name is performed, additional entries represented by the same combination of keys are identified.While these may be eliminated in a subsequent character-by-character match of the candidate hits, the proportion of unwanted items should remain low if the method is to offer advantages.
In evaluating the effectiveness of the key-sets in the retrieval, the names in the search file were represented by concatenating the codes for the keys from the front and back of the surnames and the initials, and subjecting the query names to the same procedure.The matching procedure produced lists of candidate entries, of which the desired entries were a subset.The final determination was carried out manually.
The tests were performed first with names sampled from the search file, so that correct items were retrieved for each query.Since searches for name entries may be performed with varying probabilities that the authors' names are present in the file (especially in current-awareness searches), varying proportions of names of the same provenance, but known not to be present in the search file, were also added.In these cases candidate items were selected which included none of the desired entries.Recall tests were also performed and recall shown to be complete.
The measure used in determining the performance of the variety-generator search method is the precision ratio, defined as the ratio of correctly identified names to all names retrieved.It is presented both as the ratio of averages (i.e., the summation of items retrieved in the search and calculation of the average) and as the average of ratios (i.e., averaging the figures for individual searches).The latter gives higher figures, since many of the individual searches give 100 percent precision ratios.
The precision ratio was found to be dependent on file size and to fall somewhat as the size of file increases.This is due to the fact that the keysets provided only a limited, if very high, total number of possible combinations, while the total possible variety of personal names is virtually unlimited.
The evaluation was performed with a sample of 700 names, selected by interval sampling.This number ensured a 99 percent confidence limit in the results.A comparison of the interval sampled query names with randomly sampled names showed that no bias was introduced by interval sampling.
A test to confirm that the retrieval effectiveness reached a peak at the maximum value of the relative entropy of a balanced key-set was performed first.This was carried out on a file of 25,000 names, using as queries names selected from the file and the optimal 148-key key-set.As shown in Table 1, the values of the precision ratio (ratio of averages) and of the relative entropy both peak at the same ratio of n-gram keys from the front and back of the surnames.
The performance of the optimal key-sets of 148, 254, and 296 keys with files of 10,000, 25,000, and 50,000 names is shown in Table 7. Calculated as the ratio of averages, the smallest key-set ( 148 keys) shows a precision ratio of 64 percent with a file of 50,000 names, which means that of every three names identified in the variety-generator search, two are those desired.With the largest key-set ( 296 keys), this rises to nine correctly identified names in every ten retrieved at this stage.On the other hand, calculated as the average of ratios, the precision ratios rise to 81 percent and 94 percent respectively.For smaller file sizes-typical, for instance, of current-awareness searches-the figures for all of these are cotTespondingly higher.The effect of sampling from a larger file, so that increasing proportions of the names searched for are not present in the search file, is shown in Table 8 for a file of 25,000 names.In this case, the proportion of correctly identified names in the total falls, so that overall performance is somewhat reduced.Thus, depending both on file size and on the expected proportion of queries identifying hits, the key-set size can be adjusted to reach a desired level of performance.In addition, tests to determine the applicability of a key-set optimized for one file of 50,000 names to another file of the same provenance and size were carried out.The three key-sets derived from the first file were applied to the second, query names sampled from the latter, and the precision ratios determined.Some reduction in performance was observed; expressed as ratio of averages, the precision with the 296-key key-set fell from 90 to 83 percent, with the 254-key keyset from 87 to 82 percent, and with the 148-key key-set from 64 to 56 percent, figures which seem unlikely to prejudice the net performance in any marked way.Nonetheless, monitoring of performance and of data base name characteristics over a period of operation might well be advisable.
DISTRIBUTION CHARACTERISTICS OF OTHER TYPES OF KEYS
It is particularly instructive to examine the distribution characteristics of other types of keys, including those of fixed length, generated from various positions in the names, and to compare them with those of the optimal key-sets employed in the variety-generator approach.To this end, the file of 50,000 names was processed to produce the following keys or keysets: 1. Initial digram of surname.
2. Initial trigram of surname.3. Key-set of ninety-four n-grams from the front of the surname, with first and second initials.4. Key-set consisting of first and last character of surname, with first and second initials.The figures (Table 9) show clearly that all have distributions which leave no doubt as to their relative inadequacy in resolving power, where this is defined as the ratio of distinct name representations provided by the key-set used to the number of different name entries ( 41,469) in the file.At the digram level, the value of the resolving power is 0.009, i.e., each digram represents, on average, 110 different name entries, while no fewer than thirty-two specific digrams each represent between 500 and 1,000 different names.At the trigram level, the value of the resolving power rises to 0.08, a tenfold increase; however, one trigram still represents between 500 and 1,000 different names.
Use of the first and last letters of the surname plus the initials again increases the value of the resolving power to 0.627, or 1.6 distinct names per entry; eight of the representations now account for between thirty-one and forty distinct entries.In contrast, however, the key-set of 148 keys comprising ninety-four n-gram keys from the front of the name and the first and second initials, although almost 50 percent larger than the fourcharacter representation, has a resolving power of only 0.438 (or 2.28 entries per representation).This contrast provides particularly strong evidence for the superiority of keys from the front and rear of the surnames over those from the front alone, even when the latter are variable in length.As expected, the precision ratio of the four-character representation is low, at 37 percent (ratio of averages), compared with 64 percent for the optimal148-key key-set.
EXTENT OF STATISTICAL ASSOCIATION AMONG KEYS Thus far, the frequency of occurrence of variable-length character strings from the front and back of the surnames is the only factor considered in their selection as keys.It is well known in other areas that statistical associations among keys can influence the effectiveness of their combinations. 3Where a strong positive association between two keys exists, their intersection results in only a small reduction of the number of items retrieved over that obtained by using each independently.When the association is strongly negative, the result of intersection may be much greater than that predicted on the basis of the product of the individual probabilities of the keys.
To assess the extent of associations among keys from the front and rear of surnames and initials, sets of both fixed-and variable-length keys from each of these positions were examined.•The Kendall correlation coefficient V was calculated for each of the twenty most frequent combinations of these.This is related to the chi-square value by the expression X2 =m V2 where m is the file size, or 50,000.Table 10 shows the values of the association coefficient for certain of the characters in the full name.Those above .012are significant at a 99 percent confidence level.Positive associations are more frequent than negative.The figures indicate that intersection of certain of these characters as keys in search would result in some slight diminution in performance against that expected.The figures for the association coefficients among the twenty most frequent combinations of keys from the front and back of surnames in the 148and 296-key key-sets show magnitudes (mostly positive) which are substantially greater than those for single characters (see Table 11).The reasons for these values are obvious; in certain instances, e.g., MILLER, JONES, and MARTIN, common complete names are apparent, while in one case, LEE, an overlap between keys from the front and rear exists.In others, linguistic variations on common names can be discerned, as with BR N-BROWN or BRAUN.Such associations are inevitable.When the selection of keys is based solely on frequency, some deviation from the ideal of independence must result, becoming larger as the size of the key-sets increases, and as the length of certain of the keys increases.However, since its effect in the most extreme cases is merely to lead to virtually exact definition of the most frequent surnames, no particular disadvantage results.
POSSIBLE IMPLEMENTATIONS OF THE VARIETY-GENERATOR NAME SEARCH APPROACH
The variety-generator approach permits a number of possible implementations of searches for personal names to be considered, if only in outline f ( f•j/ at this stage, using a variety of file organization methods.The most widely known methods (apart from purely sequential files) are direct access (utilizing hash-addressing), chained, and index sequential files.
Direct application of the concatenated key-numbers as the basis for hash-address computation appears attractive in instances where the personal name is used alone or in combination (as, for instance, with a part of the document title).The almost random distribution of the bits in this code should result in a general diminution of the collision and overflow problems commonly encountered with fixed-length keys.
Since only four keys are used to represent each name, and the four sets of keys from which these are selected are limited in number and of approximately equal probability, the keys can be used to construct chained indexes, to which, however, the usual constraints still apply.
Index sequential storage again offers opportunities, in particular since the low variety of key types means that the sorting operations which this entails can be eliminated.In effect, each name entry would be represented by an entry in each of four lists of document numbers or addresses, and documents retrieved by intersection of the lists.While four such numbers are stored for each name, in contrast to a single entry for the more conventional name list, the removal of the name list itself would more than compensate for the additional storage required for the lists.
In the index sequential mode, the lists of document addresses or numbers stored with each key are more or less equally long.They may thus be replaced by bit-vectors in which the position of a bit corresponds to a name or document number.If the number of keys bears a simple relation to the number of blocks on a disc cylinder, the vectors can be stored in predetermined positions within a cylinder, resulting in the serial-parallel file.
The usefulness of this file organization has yet to be fully evaluated; however, it also promises substantial economies in storage.On average, only four of the bits are set at the positions in the vectors corresponding to the name or document entry.On average, then, the density of 1-bits is very low, and long runs of zeros occur in the vectors.They can, therefore, be compressed using run-length coding, for instance as applied by Bradley.3• 4 Preliminary work with the 296-key key-set has indicated already that a gross compression ratio of nine to one is attainable, so that the explicit storage requirements to identify the association between a name and a document number would be just over thirty bits.
CONCLUSIONS
The work described here relates solely to searches for individual occurrences of personal names.Clearly, in operational systems in which one or more author names are associated with a particular bibliographical item, it will be necessary to provide for description of each of these for access.If this is provided solely on the basis of a document number, some false coordination will occur-for instance, when the initials of one entry are combined with the surname of another.A number of strategies can be envisaged to overcome this problem., The performance figures show clearly that a small number of characteristics-between 100 and 300 in this study-are sufficient to characterize the entries in large files of personal names and to provide a high degree of resolution in searches for them.While performance in much larger files, involving the extension of key-set sizes to larger munbers, has yet to be studied, the logical application of the concept of variety generation would appear to open the way to novel approaches to searches for documents associated with particular personal names, which seem likely to offer advantages in terms of the overall economic performance of search systems, not only in bibliographic but also in more general computer-based information systems.
Table 4 .
Composition of Balanced Key-Set of 296 Keys ' * Key-set with highest number of different entries.
Table 7 .
Precision Ratios Obtained in Variety-Generator Searches of Personal Names-Queries Sampled from Sea1'ch File (Confidence Level= 99 Pm•cent)
Table B .
Effect of Varying Proportion of Query Names Not Present in Search File of 25,000
Table 9 .
Distributions of a Variety of Other Representations of Personal Names in a File
Table 10 .
A8sociation Coefficients for Sets of the Most Frequent Digrams from Various Posi-
Table 11 .
Association Coefficients in the Twenty Most Frequent Key Combinations from Front and Back of Surnames in Two Key-Sets | 4,879 | 1974-09-01T00:00:00.000 | [
"Computer Science"
] |
On G-invexity-type nonlinear programming problems
In this paper, we introduce the concepts of KT -G-invexity and WD-G-invexity for the considered differentiable optimization problem with inequality constraints. Using KT -G-invexity notion, we prove new necessary and sufficient optimality conditions for a new class of such nonconvex differentiable optimization problems. Further, the so-called G-Wolfe dual problem is defined for the considered extremum problem with inequality constraints. Under WD-G-invexity assumption, the necessary and sufficient conditions for weak duality between the primal optimization problem and its G-Wolfe dual problem are also established.
Introduction
In the paper, we consider the following constrained optimization problem: minimize f (x) subject to g j (x) ≦ 0, j ∈ J = {1, ...m} , x ∈ X, (P) where f : X → R and g j : X → R, j ∈ J, are differentiable functions defined on a nonempty open set X ⊂ R n .
For the purpose of simplifying our presentation, we will next introduce some notation which will be used frequently throughout this paper.Let D := {x ∈ X : g j (x) ≦ 0, j ∈ J} be the set of all feasible solutions in problem (P).
Further, we denote an index set of active inequality constraints at point x ∈ X as follows: In recent years, attempts are made by several authors to define various classes of nonconvex functions and to study their optimality criteria and duality results in solving such types of optimization problems.One of a such generalization of a convex function is invexity notion introduced by Hanson [11] for differentiable mathematical programming problems.The term invex (which means invariant convex) was suggested later by Craven [10].Over the years, many generalizations of this concept have been given in the literature (see, for instance, [1], [2], [3], [5], [6], [7], [8], [9], [12], [13], [14], [15], [16], and others).
In [14], Martin showed that elementary relaxations of the conditions defining invexity lead to Corresponding Author.Email<EMAIL_ADDRESS>modified invexity notions which are both necessary and sufficient for weak duality and Kuhn-Tucker sufficiency.Hence, Martin introduced the definition of Kuhn-Tucker invex (KT -invex) optimization problem and he proved that every Kuhn-Tucker point of the optimization problem with inequality constraints is a global minimizer if and only if this extremum problem is Kuhn-Tucker invex.Also Martin gave the necessary and sufficient conditions for weak duality to hold.Namely, he introduced the concept of W D-invexity and he showed that weak duality between the considered optimization problem with inequality constraints and its Wolfe dual problem holds if the primal extremum problem is W D-invex.
In [4], Antczak generalized Hanson's definition of a (differentiable) invex function and he introduced the concept of G-invexity for differentiable constrained optimization problems.He formulated and proved new necessary optimality conditions of G-F.John and G-Karush-Kuhn-Tucker type for differentiable constrained mathematical programming problems and, under Ginvexity assumptions, he established the sufficiency of these necessary optimality conditions.Further, for the considered extremum problem with inequality constraints, Antczak [4] formulated the so-called G-Mond-Weir-type dual and he proved various duality results by assuming the functions involved to be G-invex with respect to the same function η and with respect to, not necessarily, the same function G.
In this paper, following Martin [14] and Antczak [4], we introduce the definitions of KT -G-invexity and W D-G-invexity notions for the considered differentiable optimization problem (P) with inequality constraints.For such an extremum problem (P), we define the socalled G-Karush-Kuhn-Tucker point (G-KKTpoint) and we prove that every G-Karush-Kuhn-Tucker point of problem (P) is its global minimizer if and only if problem (P) is KT -G-invex.Thus, we extend the result established by Ben-Israel and Mond [6] to the case of a new class of nonconvex optimization problems.Further, for the considered constrained optimization problem (P), we define a modified dual problem in the sense of Wolfe -we call it the G-Wolfe dual problem (G-W D).Under assumption that the primal problem (P) is W D-G-invex, we prove the necessary and sufficient conditions for weak duality to hold between problems (P) and (G-W D).Thus, the main purpose of this paper is to use the introduced concepts of KT -G-invexity and W D-G-invexity in proving the necessary and sufficient optimality conditions and the necessary and sufficient conditions for weak duality for a new class of nonconvex differentiable optimization problems.
Optimality
The following convention for equalities and inequalities will be used in the paper.
For any x = (x 1 , x 2 , ..., x n ) T , y = (y 1 , y 2 , ..., y n ) T , we define: Now, for the considered constrained optimization problem (P), we define the concept of KT -G-invexity.Let f : X → R and g : X → R be defined as in the formulation of problem (P) and, moreover, I f (D) and I g (D) be the range of f and g, that is, the image of D under f and the image of D under g, respectively.
Definition 2. The constrained optimization problem (P) is said to be Kuhn-Tucker-G-invex (shortly, KT -G-invex) at u ∈ D on D if there exist real-valued differentiable increasing functions G f : I f (D) → R, G g j : I g j (D) → R, j ∈ J, and a vector-valued function η : D × D → R n such that, the following relations hold, where J M ax (u) = {j ∈ J : G g j (g j (u)) = M ax{G g j (g j (x)) : x ∈ D}}.
If the relations (1) are satisfied at any point u ∈ D, then problem (P) is said to be KT -Ginvex on D. Remark 3. In the case when G f (a) ≡ a for any a ∈ I f (X), G g j (a) ≡ a, j ∈ J, for any a ∈ I g j (X), it follows that J M ax (u) = J(u) and we obtain the definition of KT -invexity introduced by Martin [14] for differentiable optimization problems.Now, we give the definition of a modified Kuhn-Tucker point in the considered optimization problem (P) and we call it a G-Karush-Kuhn-Tucker point.
Definition 4. [4]
A point x ∈ D (if it exists) is said to be a G-Karush-Kuhn-Tucker point in the considered optimization problem (P) if there exists ξ ∈ R m such that the following relations are satisfied, where G f is a real-valued differentiable increasing function defined on I f (D), and G g j , j ∈ J, is a real-valued differentiable increasing function defined on Remark 5. We call the relations (2)-( 4) the G-Karush-Kuhn-Tucker necessary optimality conditions (see [4]) for the considered optimization problem (P).
We now prove the necessary and sufficient optimality conditions for the considered optimization problem (P) under the assumption that it is KT -G-invex.Theorem 6.Every G-Karush-Kuhn-Tucker point is a global minimizer in problem (P) if and only if problem (P) is KT -G-invex.
Proof. (Sufficiency). Assume that problem (P)
is KT -G-invex.Let x be a G-Karush-Kuhn-Tucker point in problem (P).Then, there exists a Lagrange multiplier ξ ∈ R m such that the G-Karush-Kuhn-Tucker necessary optimality conditions (2)-( 4) are satisfied.By (2), it follows Using the first relation in (1) together with (5), we get Since x is a G-Karush-Kuhn-Tucker point in problem (P), it is feasible in problem (P).As it follows from (3), if ξ j = 0 for some j ∈ J, then G g j (g j (x)) = M ax{G g j (g j (x)) : x ∈ D}, that is, j ∈ J M ax (x).Since ξ ≧ 0, therefore, for j ∈ J M ax (x), the second relation in (1) implies that the following relation holds for all x ∈ D. Combining ( 6) and ( 7), we obtain that the inequality is satisfied for all x ∈ D. Since G f is an increasing function on its domain, the following inequality holds for all x ∈ D. This means that x is a global minimizer in problem (P).
(Necessity).Assume that every G-Karush-Kuhn-Tucker point of problem (P) is a global minimizer.For any pair of points x, x ∈ D, we consider the following cases: (i) Assume that x and x are feasible points in problem (P) satisfying the inequality f (x) < f (x).Then, by definition, x is not a global minimizer in problem (P).By assumption, therefore, it is not a G-Karush-Kuhn-Tucker point for problem (P).This means that there exists no a set of multipliers such that (2)-( 4) are fulfilled, that is, there exist no where G f is a real-valued differentiable increasing function defined on I f (D), and G g j , j ∈ J, is a real-valued differentiable increasing function defined on Note that, if the equality (8) was satisfied, then the multipliers λ, ξ = (ξ 1 , . . ., ξ m ) with ξ j = 0 for all j / ∈ J M ax (x), would verify (2)-( 4).According to Tucker's theorem of the alternative, it follows that there exists a vector w ∈ R n , depending upon x, such that and Hence, by (11), we have By assumption, f (x) < f (x) and G f is an increasing on its domain.Thus, we have As it follows from ( 9) and ( 13), the scalar factor in (11) is negative.Then, multiplying (10) by this factor, we get G ′ g j (g j (x)) ∇g j (x) η (x, x) ≦ 0, j ∈ J M ax (x) .( 14) (ii) Now, assume that x, x ∈ D are feasible points in problem (P) satisfying the inequality f (x) ≧ f (x).Since G f is an increasing on its domain, the above inequality implies In this case, therefore, it is sufficient to set that η (x, x) = 0 (15) to ensure that the inequality is satisfied.Moreover, by (15), for all j ∈ J M ax (x), we have Thus, assuming only that every G-Karush-Kuhn-Tucker point is a global minimum in problem (P), we have shown the existence of a function η : D × D → R n that meets requirements of Definition 2. This is a conclusion of the necessity and completes the proof of theorem.
Remark 7. Note that to prove that every G-Karush-Kuhn-Tucker point is a global minimizer in problem (P) it is sufficient to assume that problem (P) is KT -G-invex at x on D.
In order to illustrate this result, we present an example of KT -G-invex optimization problem.
Example 8. Consider the following nonconvex optimization problem Note that D = {x ∈ R : x ≧ 0} and x = 0 is a feasible solution in the considered optimization problem (P1).We now show that x = 0 is a G-Karush-Kuhn-Tucker point in problem (P1).
In order to do it, we set G f (t) = exp (t) and G g (t) = − ln (1 − t).Then, it is not difficult to show that there exist ξ = 1 such that the conditions ( 2)-( 4) are satisfied with the functions G f and G g defined above.Then, by Definition 4, x = 0 is a G-Kuhn-Tucker point in problem (P1).Now, we show that the considered optimization problem (P1) is KT -G-invex at x on D (with respect to functions G f and G g defined above).We set η (x, x) = x − x.Then, by Definition 2, it that the considered optimization problem (P1) is KT -G-invex at x on D (with respect to η, G f and G g given above).Thus, x = 0 is a global minimizer in the considered optimization problem (P1).Further, note that it is not possible to use the concept of invexity introduced by Hanson [11] to prove that x = 0 is a global minimizer in the considered optimization problem (P1).It is not difficult to show that the functions constituting problem (P1) are not invex at x on D with respect to the same function η defined by η : D ×D → R.
In some cases of nonconvex optimization problems, it is easier to show that the considered optimization problem is KT -G-invex than KTinvex in the sense of definition introduced by Martin [14].In some of such cases, the function η has more complex form in the definition of KT -invexity than in the formulation of KT -G-invexity and, therefore, it is more difficult to find such a function η satisfying the definition of KT -invexity.Now, we give an example of such a nonconvex optimization problem.
Example 9. Consider the following nonconvex optimization problem: Note that D = {x ∈ R : x ≧ 0} and x = 0 is a feasible solution in the considered optimization problem (P2).Now, we show that x = 0 is a G-Kuhn-Tucker point in problem (P2).In order to do it, we set G f (t) = tan (t) and G g (t) = ln (t + 1).Then, it is not difficult to show that there exists ξ = 2 such that the conditions ( 2)-( 4) are satisfied for such defined functions G f and G g .Then, by Definition 4, x = 0 is a G-Karush-Kuhn-Tucker point in problem (P1).Now, we show that the considered optimization problem (P2) is KT -G-invex at x on D (with respect to functions G f and G g defined above).We set η (x, x) = x − x.Then, by Definition 2, it follows that the considered optimization problem (P1) is KT -G-invex at x on D (with respect to η, G f and G g given above).It is not difficult to show by the definition of KT -invexity given by Martin [14] that problem (P2) is not KT -invex x on D with respect to η given above.In order to prove that it is KT -invex at x on D, we set η (x, x) = 1 2 (arctan (x) − arctan (x)).Then, by definition, it is possible to show that problem (P2) is KT -invex x on D with respect to η given above.However, it is not difficult to see that the form of the function η with respect to which problem (P2) is KT is more complex than the function η with respect to which problem (P2) is KT -G-invex.The fact that the function η with respect to which the given optimization problem is KT -G-invex is less complex than in the case of KT -invexity is an useful property from the practical point of view.
Duality
In this section, for the considered optimization problem (P), we consider the modified Wolfe dual problem (G-W D), the so-called G-Wolfe dual problem.We give the necessary and sufficient conditions for weak duality between problems (P) and (G-W D).To do this, we use the concept of W D-G-invexity introduced in this section.
For the considered optimization problem (P), consider the G-Wolfe dual problem in the following form: where G f is a real-valued differentiable increasing function defined on I f (X), and G g j , j ∈ J, is a real-valued differentiable increasing function defined on I g j (X).We denote by W the set of all feasible solutions in the G-Wolfe dual problem (G-W D), that is, the set Now, we introduce the definition of W D-Ginvexity for the considered optimization problem (P).
Remark 11.In the case when G f (a) ≡ a and G g j (b) ≡ b, j = 1, ..., m, we obtain the definition of of W D-invexity introduced by Martin [14] for differentiable optimization problems.
Definition 12. Weak duality is said to hold between problems (P) and (G-W D) if, for every feasible point x for the primal optimization problem (P) and every feasible pair (y, ξ) ∈ W for its G-Wolfe dual problem (G-W D), we have Now, under assumption of W D-G-invexity, we prove the necessary and sufficient conditions for weak duality between problems (P) and (G-W D).
Theorem 13.Weak duality holds between the primal optimization problem (P) and its G-Wolfe dual problem (G-W D) if and only if problem (P) is WD-G-invex on X.
Proof.(Necessity).Assume that the G-weak duality between problems (P) and (G-W D) holds.This means that for any feasible solutions x and (y, ξ) in problems (P) and (G-W D), the system . By Tucker's theorem of the alternative, this, in turn, is equivalent to the consistency of the system 0 . . . 1 If the first component in the above system is strictly negative, that is, ϑ < 0, then we may take ϑ = −1 to conclude that If the first argument the above system is equal to zero, that is, ϑ = 0, then the second must be strictly negative.Therefore, we have This means that there exists a vector valued function η : X × X → R n such that the inequalities (18) or (19) are satisfied.This means that (P) is W D-G-invex on X.
(Sufficiency).Let x and (y, ξ) be any feasible points in problems (P) and (G-W D), respectively.Assume that the considered optimization problem (P) is W D-G-invex on X.We consider the case when the following inequalities −G ′ g j (g j (y)) ∇g j (y) η (x, y) ≧ 0, j ∈ J.
(20) Multiplying the second inequality above by ξ j ≧ 0 and then adding both sides of the obtained inequalities, we get Adding both sides of the first inequality in ( 20) and ( 21), we obtain From the feasibility of (y, ξ) in G-Wolfe dual problem (G-W D), it follows that ξ j G g j (g j (y)) .Now, assume that the following inequalities in Definition 10 are satisfied for all x ∈ D and all y ∈ X. Multiplying the second inequality above by ξ j ≧ 0 and then adding both sides of the obtained inequalities, we get By the above inequality, we conclude that (y, ξ) is not feasible in G-Wolfe dual problem (G-W D) for any multiplier vector ξ = (ξ 1 , ..., ξ m ) ∈ R m , ξ ≧ 0. This means that such a point plays no role in determining whether or not G-weak duality holds.
This completes the proof of theorem.
Conclusions
In the paper, new concepts of generalized invexity have been defined for differentiable optimization problems.The so-called KT -G-invexity and W D-G-invexity notions defined for the considered differentiable optimization problem (P) with inequality constraints are generalizations the Ginvexity notions introduced by Antczak [4] and the concepts of KT -invexity and W D-invexity introduced by Martin [14], respectively.It has turned out that the introduced KT -G-invexity notion is a necessary and sufficient condition for optimality in a new class of nonconvex differentiable optimization problems.Namely, it was proved that every so-called G-Kuhn-Tucker point in problem (P) is its global minimizer if and only if problem (P) is KT -G-invex.Moreover, as it follows from the proof of this result, some characterization of a function η (with respect to which the given constrained optimization problem is KT -G-invex) is given.The property that the function η could be less complex in the case of KT -G-invexity than in the case of KT -invexity for some nonconvex optimization problems is important from the practical point of view.Note that, for some nonconvex optimization problems, we are not in a position to establish the optimality of a feasible point satisfying necessary optimality conditions under invexity, but the concept of KT -G-invexity turned out to be useful in proving this result.Further, for the considered differentiable optimization problem (P), the so-called G-Wolfe dual problem (W D-G) has been defined.The concept W D-G-invexity defined in the paper has turned out to be a necessary and sufficient condition to weak duality holds between problems (P) and (W D-G) In this way, this result was proved for a new class of nonconvex differentiable optimization problems.Some interesting topics for further research remain.It would be of interest to investigate whether the results established in the paper are true also for a larger class of nonconvex constrained optimization problems, for instance, for a class of nonconvex nondifferentiable extremum problems.Thus, further research can focus on the usefulness of these concepts of generalized invexity in proving optimality conditions and duality results for other classes of nonconvex optimization problems.It seems that the techniques employed in this paper can be used in proving similarly results for the constrained vector optimization problems.We shall investigate these questions in subsequent papers.
≥ y if and only if x ≧ y and x = y. | 4,775.4 | 2015-01-01T00:00:00.000 | [
"Mathematics"
] |
Learning an Input Filter for Argument Structure Acquisition
How do children learn a verb’s argument structure when their input contains nonbasic clauses that obscure verb transitivity? Here we present a new model that infers verb transitivity by learning to filter out non-basic clauses that were likely parsed in error. In simulations with child-directed speech, we show that this model accurately categorizes the majority of 50 frequent transitive, intransitive and alternating verbs, and jointly learns appropriate parameters for filtering parsing errors. Our model is thus able to filter out problematic data for verb learning without knowing in advance which data need to be filtered.
Introduction
Young language learners are limited by partial knowledge in identifying the structure of sentences they hear and of their language in general.This partial knowledge may lead to inaccurate parses of their input, resulting in data that are misleading about the true structure of their language.Here we investigate a problem of misleading data in verb learning: how learners identify verbs' syntactic properties despite the presence of unknown grammatical structures that obscure those properties.
We propose a new model for the acquisition of argument structure, the syntactic property of a verb that determines which types of clauses it can occur in (Chomsky, 1965;Chomsky, 1981;Grimshaw, 1990).We model how a learner can use verb distributions to infer whether a verb can occur in a transitive clause with both a subject and an object, an intransitive clause with only a subject, or both.This inference depends on the ability to accurately perceive the arguments in a clause: whether a clause has a subject and an object, or only a sub-ject.Identifying these arguments is straightforward for "basic" clause types like (1) and ( 2), but more difficult for "non-basic" clause types that do not follow the subject-verb-object word order typical of English, like (3): (1) John ate a sandwich.Amy threw a frisbee.
A learner tracking when direct objects are present after the verb would notice that sentences like (1) contain both subjects and objects, and sentences like (2) contain only subjects.It would then follow that throw is obligatorily transitive whereas eat can alternate.But this strategy is complicated by the wh-object questions in (3).These questions do not have direct objects after the verb, but do have two arguments: the wh-word what stands in for the verb's object.These data may be misleading for a child who has not yet learned how to identify wh-questions in her language.She might note the absence of a direct object after the verb and perceive the sentences in (3) as intransitive, mistakenly concluding that throw can alternate just like eat.
To learn verb transitivity, learners need some way to filter out the non-basic clauses in their input; this ability is assumed in prominent theories of verb learning such as syntactic and semantic bootstrapping (Gleitman, 1990;Lidz and Gleitman, 2004;Pinker, 1984;Pinker, 1989).In this paper we present a Bayesian model that learns the parameters for such a filter solely by tracking the distributions of verbs with and without direct objects.The model does so under the assumption that some sentence observations will be generated in error, due to mis-parses of non-basic clauses like (3).In simulations with child-directed speech, we show that this model can learn the parameters for filtering these parsing errors in order to categorize 50 verbs as transitive, intransitive, or alternating.We thus demonstrate that it is possible for a learner to learn an input filter for verb learning without knowing in advance what needs to be filtered.
Filtering Input
Many have proposed that learners need some way to filter the data they use in acquiring their language (Adriaans and Swingley, 2012;Lidz and Gleitman, 2004;Pearl and Lidz, 2009;Pinker, 1984).This filtering is important for theories of verb learning under which children rely on systematic relations between verbs' syntactic properties and their meanings, e.g.semantic and syntactic bootstrapping (Fisher et al., 2010;Gleitman, 1990;Landau and Gleitman, 1985;Lasnik, 1989;Pinker, 1984;Pinker, 1989).Non-basic clauses obscure these relations, so learners need a way to filter them out of the data they use for verb learning.Pinker (1984) posits two solutions: either parents might do the filtering and avoid producing these sentences in their children's presence, or children might internally filter these sentences themselves.Parental filtering does not seem to occur: even before their second birthday, English-learning children hear a large number of wh-questions (between 10-17% of their total input), the majority of which do not follow the typical word order of English (Stromswold, 1995).These non-basic clause types are thus prevalent in child-directed speech, necessitating a different filtering solution.
The second logical solution is internal filtering: perhaps children can filter out non-basic clauses themselves.This proposal implicitly assumes that children know which sentences to filter out.However, experimental evidence suggests that children may not have the ability to recognize which sentences contain wh-questions before the age of 20 months, an age at which substantial verb learning is already taking place (Gagliardi et al., 2016).Furthermore, this ability may depend on prior verb knowledge.Identifying that sentences like (3) contain object wh-questions requires the learner to detect when a fronted phrase (like what, who, or which NP) stands in relation to a verb that needs a patient argument and is locally missing one.But this requires the learner to know which verbs take patient arguments, in order to notice when those arguments are needed and missing.In other words, the learner needs to detect that these sentences con-tain direct object gaps, rather than intransitive uses of these verbs-but in order to do so, the learner must know which verbs are transitive.
The filtering problem thus risks being circular: learners need to know which verbs are transitive in order to detect the signals of non-basic clause types like object wh-questions, but they also need to filter out sentences containing these clause types in order to learn which verbs are transitive.Pinker (1984) posits that children might avoid this problem by using sentence meaning, context, and intonation to identify a filter on their input.Our approach, by contrast, does not require learners to know the parameters of the input filter before they are able to learn verbs.Instead of fixing one of these pieces of knowledge to learn the other, children may jointly infer which verbs are transitive and the parameters for filtering sentences containing non-basic clauses.We thus model a learner who can filter out errors in parsing non-basic clauses, without needing to first identify where those errors came from.
Model
Our model uses the distribution of direct objects within and across verbs to infer both verb transitivity and the parameters for filtering non-basic clauses.We adopt a Bayesian framework, in which a learner observes a data pattern and infers the probability of some properties of the system that may have generated that data.This framework conveniently allows us to specify the alternative systems (verb transitivity properties vs. mis-parses of nonbasic clauses) that our learner considers for the verb distributions it observes.Our model follows other Bayesian approaches to argument structure acquisition (Alishahi and Stevenson, 2008;Perfors et al., 2010), but considers a different problem than the one explored in that literature.Instead of learning which verb classes exist in a particular language, our model is designed to solve the problem of learning which verbs map to which known transitivity classes despite input that obscures these mappings.
Generative Model
The model learns from observations of direct objects or no direct objects in sentences containing particular verbs.These observations are formalized as the Bernoulli random variable X in the graphical model in Figure 1.Each X (v) represents an observation from a sentence containing verb v in the model's input, with a value of 1 if the sentence Figure 1: Graphical Model contains a direct object and 0 if it does not.These observations can be generated by two processes: the transitivity of verb v, represented by the variables T and θ in the upper half of the model, or an internal parsing error, represented by the variables e, , and δ in the lower half of the model.We will describe each of these processes in turn.
In the upper half of the model, each X (v) is conditioned on the parameter θ (v) , a continuous random variable defined for values from 0 to 1 inclusive.This parameter controls how frequently a verb v will be used with a direct object: the learner assumes that for every observation X (v) , a biased coin is flipped to determine whether the sentence contains a direct object, with probability θ (v) , or does not, with probability 1 − θ (v) .
The parameter θ (v) is conditioned on the variable T (v) , which represents the transitivity of verb v. T is a discrete random variable that can take on three values, corresponding to transitive, intransitive, and alternating verbs.Each of these values determines a different distribution over θ.For the transitive category of T , θ always equals 1: the verb should always occur with a direct object.For the intransitive category, θ always equals 0: the verb should never occur with a direct object.For the alternating category, θ takes a value between 0 and 1 inclusive.The prior probability distribution over θ in this case is a uniform Beta(1, 1) distribution.
In the lower half of the model, each X is conditioned on a Bernoulli random variable e, which represents the input filter.If e was generated by θ (v) and T (v) , and accurately reflects the transitivity of verb v.But if e was generated by an internal parsing error, meaning the learner did not have adequate grammatical knowledge to parse the sentence correctly.This observation was not generated by θ (v) and T (v) , and may not accurately reflect the transitivity of verb v, so it should be ignored for the purpose of inferring T (v) .Each e (v) is conditioned on the variable , which represents the probability of an internal parsing error occurring for any verb in the input.The model learns a single parameter value for across all verbs.
The second parameter of the input filter is δ, which represents the probability of observing a direct object when an observation was generated by an internal parsing error.Thus, whether a sentence contains a direct object or no direct object depends on one of two biased coins.If e (v) i = 0 and the observation accurately reflects the verb's transitivity properties, then one biased coin is flipped and the sentence contains a direct object with probability θ (v) .If e (v) i = 0 and the observation was generated by a parsing error, then a different biased coin is flipped and the sentence contains a direct object with probability δ.Like , δ is a shared parameter across all verbs.We assume that both and δ have a uniform Beta(1, 1) prior distribution.
Joint Inference
We use Gibbs sampling (Geman and Geman, 1984) to jointly infer T , , and δ, integrating over θ and summing over e, with Metropolis-Hastings (Hastings, 1970) proposals for and δ.
We begin by randomly initializing and δ, and sampling values of T for each verb given values for those input filter parameters.From observations of a verb with and without direct objects, the model determines which value of T was most likely to have generated those observations.For k (v) direct objects in n (v) sentences containing verb v, we use Bayes' rule to compute the posterior probability of each value for T (v) , Bayes' Rule tells us that the posterior probability of a particular value of T given k (v) and the other model parameters is proportional to the likelihood, the probability of k (v) given that value of T and those parameters, and the prior, the probability of T before seeing any data.We assume that T is independent of and δ, and that all three values of T have equal prior probability.
To calculate the likelihood, we must sum over e.This sum is intractable, but because all of the values of e for the same verb and the same direct object status are exchangeable, we make the computation more tractable by simply considering how many errors were generated for sentences with and without direct objects for a particular verb.We divide the k (v) observed direct objects for a verb into k errorful observations.We then calculate the likelihood by marginalizing over n 1 , again assuming independence among T , , and δ, The first term in the inner sum is equivalent to p(k 0 , δ), assuming we know n (v) , the total number of observations for a particular verb.This is the probability of observing k (v) 0 errorful direct objects out of n (v) 0 errorful observations, which follows a binomial distribution with parameter δ.The second term in the inner sum is the probability of observing k 1 accurate observations, which follows a binomial distribution with parameter θ (v) .Recall that θ (v) = 1 for the transitive category of T , and θ (v) = 0 for the intransitive category of T .For the alternating verb category, θ (v) is unknown, so we integrate over all possible values of θ (v) to obtain . The last term in ( 5) is the probability of observing n (v) 1 accurate observations out of the total n (v) observations for verb v, which follows a binomial distribution with parameter 1 − .
After sampling values for T for each verb in the dataset, we then sample values for and δ.If T denotes the set of values T (1) , T (2) , ..., T (V ) , and k denotes the full set of observations of direct objects k (1) , k (2) , ..., k (V ) for all V verbs in the input, we can define functions proportional to the posterior distributions on and δ, f ( ) ∝ p( |T, k, δ) and g(δ) ∝ p(δ|T, k, ), as where the likelihood p(k|T, , δ) is the product over all verbs v of p(k (v) |T (v) , , δ), as calculated in (5).
Within the Gibbs sampler, we resample using 10 iterations of a Metropolis-Hastings algorithm.We begin by randomly initializing .At each iteration, we propose a new value , sampled from the proposal distribution Q( | ) = N ( , 0.25).Because the proposal distribution is symmetric, this new value is accepted with probability If the new value has higher probability given T , k and δ under equation ( 6), it is accepted.If it has lower probability under equation ( 6), it is accepted at a rate corresponding to the ratio of its probability and the probability of the old value of .
After sampling , we resample δ with 10 iterations of Metropolis-Hastings.The proposal and acceptance functions are analogous to those for .
We ran multiple chains from different starting points to test convergence of T , , and δ.For the simulations reported below, we ran 1,000 iterations of Gibbs sampling.We took every tenth value from the last 500 iterations as samples from the posterior distribution over T , , and δ.
Data
We tested the model on a dataset selected from the CHILDES Treebank (Pearl and Sprouse, 2013).We used four corpora of child-directed speech (803,188 total words), which were parsed using the Charniak or Stanford parser and hand-checked by undergraduates.See Table 1 for corpus details.
Our dataset consists of sentences containing the 50 most frequent action verbs in these corpora that could be characterized as transitive, intransitive, or alternating.We excluded verbs that were obligatorily ditransitive or frequently took clausal or verbal complements: these included mental state verbs (e.g.want), aspectual verbs (e.g.start), modals (e.g.should), auxiliaries (e.g.have), and light verbs (e.g.take).
English verb classes described in Levin (1993), supplemented by our own intuitions for verbs not represented in that work.These classes provide a target for learning meant to align with adult speaker intuitions, independent of the corpus data that the model learns from.The transitive and intransitive categories are conservative; verbs like jump are considered alternating even though they occur infrequently in their possible transitive uses (e.g.jump the horses over the fence).These target categories thus set a high bar for our model to reach.
We then conducted an automated search over the Treebank trees for the total occurrences of each verb in the corpora, in all inflections, and the total occurrences with overt (pronounced) direct objects.These direct object counts included transitive basic clauses like those in (1), but not wh-object questions with object gaps like those in (3).Table 2 lists these 50 verbs along with their counts and percentage occurrences with direct objects.
Simulations
We tested our model on the dataset described in the previous section.We compare our model's performance to an oracle model that already knows the parameters of the input filter, and two baselines.The percentage of verbs categorized correctly by the model is reported in Table 3.The model achieves highest accuracy in categorizing the intransitive verbs: for all but one of these verbs, the model assigns highest probability to the intransitive category.The exception is the verb wait, which the model assigns highest probability under the alternating category.This is due to prevalent uses of temporal adjuncts, as in wait a minute, that were parsed as direct objects in the CHILDES Treebank.Thus, a learner who likewise misparses these adjuncts as direct objects would infer that wait is an alternating rather than intransitive verb.
Joint Inference Model
The model assigns 6 out of the 9 transitive verbs highest probability under the transitive category.Three transitive verbs are assigned highest probability under the alternating rather than the transitive category: catch, hold, and wear.This is likely because these verbs display different behavior than the other transitive verbs in the corpus.The verb hold occurs frequently in verb-particle constructions (e.g.hold on), which might be treated differently than simple verbs by learners.The verbs catch and wear appear to occur at much higher rates than other transitive verbs in non-basic clauses: catch occurs frequently in passive constructions (e.g.get caught), and wear occurs frequently in The model assigns highest probability for most of the alternating verbs to the alternating verb category.There are 13 exceptions.The verbs pick, drop, lose, close, touch, leave, wash, and pull are assigned highest probability under the transitive category because they infrequently occur in their possible intransitive uses in child-directed speech.The verbs run, swim, walk, jump, and sit are assigned highest probability under the intransitive category because these verbs very infrequently occur in their possible transitive uses.Thus, the model over-regularizes the alternating verbs that alternate infrequently, preferring the more deterministic transitive and intransitive verb categories.
In order to evaluate the model's inference of and δ, we estimated the true values of these parameters in our dataset.The proportion of transitive verbs with missing direct objects in the dataset gives us an estimate of (1 − δ) × , and the proportion of intransitive verbs with spurious direct objects (e.g.wait a minute) gives us an estimate of δ × .Solving these two equations, we find that δ = 0.18 and = 0.24.The posterior probability distribution over δ inferred by our model has a mean of 0.23, and the probability distribution over has a mean of 0.22.Our model thus slightly over-estimates the value of δ and under-estimates the value of .However, it infers values for these parameters close to the true values in the corpus, enabling it to infer the correct transitivity properties for 2/3 of the verbs in our dataset.
Oracle Model
To evaluate our model's performance, we compare it against an oracle model in which δ is fixed to 0.18 and to 0.24 in order to reflect their estimated true values in our dataset.This allows us to see how our model compares to a model that knows the parameters for the input filter in advance.
The posterior probability distributions over verb categories inferred by the oracle model are displayed in Figure 4. Our joint inference model performs identically to the oracle model with intransitive verbs, and almost as well with transitive verbs: the oracle model succeeds in identifying one more transitive verb, catch, as transitive.Our joint inference model performs better than the oracle model in categorizing alternating verbs: the oracle model has an even higher tendency to over-regularize the verbs that alternate infrequently.
Inferring the parameters of the input filter thus results in comparable, and maybe slightly better, accuracy in categorizing verbs than knowing these parameters in advance.It should be noted that the values of these parameters are important: when we run a version of the oracle model with inappropriate values for and δ, performance decreases substantially.Thus, our model performs comparably to
No-Filter Baseline
We've seen that our model accurately categorizes 2/3 of the verbs in our dataset by inferring appropriate parameters of a filter on its input, and performs comparably overall with a model that knows those parameters in advance.To determine how much the input filter matters in this inference, we compare our model to a baseline that lacks this filter.
We can instantiate a model with no filter by setting to zero, representing zero probability of parsing errors.Because every verb in our dataset occurs some but not all of the time with direct objects, and this model assumes there are no parsing errors to filter out, it assigns every verb to the alternating category.It thus categorizes 100% of the alternating verbs correctly, achieving 70% overall accuracy because alternating verbs make up 70% of our dataset.However, this accuracy comes at the cost of failing to categorize any verbs as transitive or intransitive.Our joint inference model performs substantially better in this regard, categorizing the majority of transitive and intransitive verbs correctly.Thus, an input filter is important for differentiating alternating from non-alternating verbs.
Random Baseline
We finally compare our model against a baseline that assigns verbs randomly to transitivity categories, assuming that each value of T has equal prior probability.For each verb in our dataset this model flips a fair 3-sided coin to determine its transitivity category.This model thus categorizes 1/3 of the transitive, intransitive, and alternating verbs correctly, resulting in 34% overall accuracy.Our joint inference model performs significantly better on each verb class, and nearly twice as well overall.
Summary
We find that inferring an appropriate filter on the input matters for verb transitivity learning, but that the parameters of this input filter can be learned.Our model performs comparably to an oracle model that knows these values in advance.It performs substantially better in categorizing transitive and intransitive verbs than a baseline model that lacks an input filter altogether, and performs twice as well overall as a random baseline.These results demonstrate that our model is able to infer reasonable values for the input filter parameters, allowing it to accurately categorize the majority of transitive, intransitive, and alternating verbs.
Discussion
In this paper we introduce a model that infers the parameters of a filter on its input for argument struc-ture acquisition.Our model accurately categorizes 2/3 of the most frequent transitive, intransitive, and alternating verbs in child-directed speech on the basis of their distributions with and without direct objects, by learning to filter out sentences that were likely mis-parsed.This enables the learner to avoid drawing faulty inferences about verb transitivity from non-basic clause types, such as whobject questions, that may be mistaken for intransitive clauses.Our model performs substantially better than baseline models that lack an input filter and performs comparably to an oracle model that knows these input filter parameters in advance, demonstrating that this input filter both matters for verb learning and can be learned.
Our model offers a novel solution to the problem of identifying an appropriate input filter for verb learning (Lidz and Gleitman, 2004;Pinker, 1984) : where previous approaches have implicitly assumed that children must have a way of identifying the sentences to be filtered, our model learns an input filter without knowing its parameters in advance.Instead, the learner infers the input filter parameters jointly with verb transitivity.This reduces the prior knowledge needed for initial verb learning: the child does not need to identify which sentences likely contain wh-questions and other non-basic clause types as a prerequisite for learning which verbs are transitive.Note that we do not claim that the Bayesian joint inference performed by our model represents the exact algorithms performed by child learners; although there is substantial literature on young children's statistical inference capabilities (Gomez and Gerken, 2000), this model is intended only as a proof of concept that such joint inference is possible.
The model makes two types of errors in inferring verb categories.First, it is unable to correctly categorize some transitive and intransitive verbs that behave differently than other verbs in their category, such as catch, hold, wear, and wait.Further investigation is necessary to determine whether these verbs pose difficulties for child learners as well.A second type of error is over-regularizing alternating verbs that alternate infrequently: the model prefers to assign these verbs to the transitive and intransitive categories.This is an example of a learner preferring a more deterministic analysis for probabilistic input, a tendency also found in child learners in artificial language studies (Hudson Kam and Newport, 2009).The error-filtering mechanism we present here could thus potentially provide a way to model other forms of over-regularization in learning.
Other future directions include extending this model to cross-linguistic data, particularly to languages with free object-drop.Chinese, Korean, and Japanese have a syntactic mechanism for dropping the direct object of any transitive verb, unlike in English where object-drop is a lexical property of specific verbs.As as result, learners of these languages might be subject to even higher rates of parsing errors if they perceive object-drop sentences as intransitive.For this reason, these languages are potentially problematic for syntactic bootstrapping strategies that rely on learners accurately identifying transitive verbs (Lee and Naigles, 2005;Lee and Naigles, 2008).But if appropriate parameters for filtering out problematic object-drop constructions in these languages can be inferred, our model may help address concerns with the feasibility of syntactic bootstrapping in these languages.
Finally, this model learns verb transitivity by effectively filtering out sentences containing nonbasic clause types, without identifying what these clause types are.But children do eventually learn to identify non-basic constructions in their language.If children initially filter out certain sentences as parsing errors for the purposes of verb learning, they must eventually learn that many of these sentences are generated by systematic syntactic operations, such as those that create wh-questions in English.In future work, we aim to investigate whether a learner can identify which non-basic constructions are present in sentences that were parsed in error.Learning verb transitivity can likely help the child identify these constructions: if a child expects a direct object for a verb and encounters sentences where this object does not appear, that child may be compelled to examine those anomalous sentences to determine the cause of the missing object.Thus, a strategy of filtering out non-basic constructions by initially treating them as parsing errors may eventually help the learner identify not only verb transitivity, but also the nature of those non-basic constructions themselves.
that were observed in error.The total n(v) observations for verb v are likewise divided into n (v) 1 accurate observations and n (v) 0
Figure 2
Figure2displays the posterior probability distributions over verb categories inferred by our joint inference model for each verb.Black bars represent the probability assigned to the transitive category, dark gray bars represent the probability assigned to the intransitive category, and light gray bars represent the probability assigned to the alternating category.The true categories for each verb are shown below the horizontal axis.Figure3displays the posterior distributions inferred for and δ.The percentage of verbs categorized correctly by the model is reported in Table3.The model achieves highest accuracy in categorizing the intransitive verbs: for all but one of these verbs, the model assigns highest probability to the intransitive category.The exception is the verb wait, which the model assigns highest probability under the alternating category.This is due to prevalent uses of temporal adjuncts, as in wait a minute, that were parsed as direct objects in the CHILDES Treebank.Thus, a learner who likewise misparses these adjuncts as direct objects would infer that wait is an alternating rather than intransitive verb.The model assigns 6 out of the 9 transitive verbs highest probability under the transitive category.Three transitive verbs are assigned highest probability under the alternating rather than the transitive category: catch, hold, and wear.This is likely because these verbs display different behavior than the other transitive verbs in the corpus.The verb hold occurs frequently in verb-particle constructions (e.g.hold on), which might be treated differently than simple verbs by learners.The verbs catch and wear appear to occur at much higher rates than other transitive verbs in non-basic clauses: catch occurs frequently in passive constructions (e.g.get caught), and wear occurs frequently in
Table 2 :
Counts and percentage uses with direct objects (DO) of 50 verbs in dataset.
Table 3 :
Percentages of Verbs Categorized Correctly. | 6,675.2 | 2017-04-01T00:00:00.000 | [
"Computer Science",
"Linguistics"
] |
A comparative analysis of stably expressed genes across diverse angiosperms exposes flexibility in underlying promoter architecture
Abstract Promoters regulate both the amplitude and pattern of gene expression—key factors needed for optimization of many synthetic biology applications. Previous work in Arabidopsis found that promoters that contain a TATA-box element tend to be expressed only under specific conditions or in particular tissues, while promoters that lack any known promoter elements, thus designated as Coreless, tend to be expressed more uniformly. To test whether this trend represents a conserved promoter design rule, we identified stably expressed genes across multiple angiosperm species using publicly available RNA-seq data. Comparisons between core promoter architectures and gene expression stability revealed differences in core promoter usage in monocots and eudicots. Furthermore, when tracing the evolution of a given promoter across species, we found that core promoter type was not a strong predictor of expression pattern. Our analysis suggests that core promoter types are correlative rather than causative in promoter expression patterns and highlights the challenges in finding or building constitutive promoters that will work across diverse plant species.
Introduction
Precise control over gene expression is essential for development and survival.One of the first regulatory steps in expression regulation is transcription initiation, which is controlled by DNA regions designated as promoters.Current understanding of eukaryotic promoters is still remarkably limited, and we have difficulty even identifying a precise promoter region given an arbitrary sequence (Donczew and Hahn 2017).A core promoter region is functionally defined as the minimal region required for transcription initiation, associated with binding of RNA polymerase II (RNAPII) and general transcription factors (GTFs).Proximal and distal cis-regulatory elements contribute to the modulation of the core promoter's activity and give it its characteristic expression profile.A sequence containing the proximal cis-regulatory elements as well as the core promoters is often referred to as the "promoter" region (Biłas et al. 2016;Haberle and Stark 2018;Andersson and Sandelin 2020;Schmitz et al. 2022).In practice, cloning and analysis projects often pick an arbitrary length (e.g. up to 2,000 base pairs or until the next coding sequence) upstream of the transcription start site (TSS) to define as the promoter region (Andersson and Sandelin 2020;Schmitz et al. 2022).
Many core promoter elements have been identified within the core promoter region, which are important in directing RNAPII and determining the TSS.The TATA-box motif is the most well-understood of the core promoter elements, yet TATA-box-containing promoters only account for about 20% of eukaryotic promoters and about 30% of Arabidopsis promoters (Molina and Grotewold 2005;Donczew and Hahn 2017).In plants, additional core promoter types were proposed by Yamamoto et al. (2007Yamamoto et al. ( , 2009) ) based on their identification of overrepresented motifs around a fixed distance from the TSS.Y patch, or pyrimidine patch, motifs are C and T rich motifs whose presence had been recently shown experimentally to associate with stronger expression (Jores et al. 2021).CA and GA are additional core promoter elements, represented in approximately 20 and 1% of genic promoters, respectively (Yamamoto et al. 2009).Unlike the TATA-box, which has a known GTF-binding protein associated with it, the molecular mechanisms of the Y patch, CA, and GA elements remain largely unknown.Core promoters that do not contain any of the identified core promoter types have been termed Coreless (Yamamoto et al. 2009(Yamamoto et al. , 2011)).In Arabidopsis, Coreless promoters tend to be expressed more weakly but more broadly than those that contain TATA-boxes (Yamamoto et al. 2011;Das and Bansal 2019).
Constitutive promoters, defined here as promoters that are expressing in all tissues at all times, are versatile tools in synthetic biology due to their desirable expression pattern (Yang and Nemhauser 2022;Zhou et al. 2023).They are often used to drive expression of components used in synthetic circuits or metabolic engineering (Wu et al. 2014;South et al. 2019;Patron 2020;Brophy et al. 2022).Core promoter regions of constitutive promoters (such as the Cauliflower Mosaic Virus 35S promoter) have often been used as the starting point to build synthetic promoters by introducing natural cis-elements or synthetic TF-binding sites upstream of these core promoter regions to artificially tune expression strength or confer new expression patterns (Brückner et al. 2015;Ali and Kim 2019;Belcher et al. 2020;Cai et al. 2020;Brophy et al. 2022;Moreno-Giménez et al. 2022).However, a lack of understanding of the design constraints around promoters had made engineering synthetic promoters challenging.Current approaches often require trial and error or high throughput screening to identify functional synthetic promoters (Brückner et al. 2015;Belcher et al. 2020;Cai et al. 2020;Brophy et al. 2022;Moreno-Giménez et al. 2022).A better understanding of the contributions and limitations of core promoters in controlling expression patterns can therefore be essential in engineering better synthetic promoters.
Here, by leveraging publicly available RNA-seq atlases of 15 angiosperms, we were able to map gene expression pattern onto core promoter type in multiple genomic contexts.While TATA-box-containing promoters are overrepresented in conditionally expressed genes in all of the species we examined, the pattern for Coreless promoters was less clear.In most eudicots, Coreless promoters were overrepresented in uniformly expressed genes, but the opposite trend was observed in monocots.Additionally, by identifying orthologous gene groups within these species, we were able to track changes in core promoter type and expression pattern for groups of evolutionarily related promoters.We found that stably expressed genes are also more likely to have orthologs in other species compared to unstably expressed genes, and the orthologs tend to retain similar expression patterns.Last, we show that changes in core promoter types do not explain changes in expression pattern.This evolution-guided approach reveals design rules surrounding core promoter architecture and expression patterns.
Phylogenetic tree
A phylogenetic tree was constructed referencing NCBI's Taxonomy Browser and Li et al. 2021.
RNA-seq dataset processing (Relevant files: 0_Slurm_Pipeline)
RNA-seq atlases were located in the NCBI Sequence Read Archive database.The references for the datasets can be found in Supplementary Table 1.The individual datasets were retrieved using sratoolkit-3.0.1 prefetch followed by fasterq-dump functions.Fastqc-0.11.9 were used to generate a QC report for each dataset.Trimmomatic-0.39 were used for adaptor and low-quality ends trimming using the following settings: "SLIDINGWINDOW:4:20 MINLEN:36."ILLUMINACLIP files TruSEq3-PE-2.fawere supplied for paired end data and TruSEq3-SE.fa were supplied for single end data.Reference transcriptomes were downloaded from Ensembl Plants (http://plants.ensembl.org/index.html) for Arabidopsis thaliana, Camelina sativa, Cucumis melo, Glycine max, Phaseolus vulgaris, Pisum sativum, Vigna unguiculata, Sorghum bicolor, Zea mays, Solanum lycopersicum, Actinidia chinensis, Triticum aestivum and Phytozome (https://phytozome-next.jgi.doe.gov) for Arachis hypogaea, Cicer arietinum, and S. tuberosum (Goodstein et al. 2012;Cunningham et al. 2021).An index file was generated, and the reads aligned and counted using Kallisto-0.44.0 with "-o counts -b 500."For single end data, fragment length and standard deviation were required, but the information is difficult to locate, and so a default value of "-l 200 -s 20" was used across the board.
Another Fastqc was performed on the trimmed files, and a final MultiQC-1.13was run on the entire folder encompassing all the log files that Fastqc, Trimmomatic, and Kallisto generated.The MultiQC report was inspected to ensure the trimming step improved read quality and there were no major warnings.
Normalizing count, calculating CV, and percent ranking (Relevant files: 1_Metadata_from_RUNselector.Rmd,2_MOR_Normalization. Rmd) Using an R script, the raw counts for each species were normalized using the DESeq2 package using a metadata file curated from the original study for the RNA-seq datasets.The coefficient of variation (CV) across all samples for a given atlas was used as a metric for stability for each gene, and the percentile ranking for each gene was calculated.The geometric mean for each gene was also calculated across all samples.
Extracting intergenic region and 5′UTR (Relevant files: 3_ExtractPromUTR(ALL_Transcripts).ipynb, 8_Extract PromUTR(Orthologs).ipynb)
Gff3 annotation files and reference genomes were downloaded from Ensembl or Phytozome depending on where the reference transcriptomes were retrieved from.Forty percent of transcripts were selected from the total transcriptome, and their intergenic region and 5′UTR were extracted from the Gff3 annotation.Intergenic region and 5′UTRs of identified orthologs were extracted in a similar manner.
Labeling core promoter types (Relevant files: 4_Label_Promoters.Rmd,9_Motif_Scan.Rmd,10_Octamer_ Scan.ipynb)Motif scan: Intergenic regions and 5′UTR sequences are trimmed to those regions to be scanned for each core promoter types: TATA box (−100 to TSS), Y patch (−100 to +100), and Inr (−10 to +10).Intergenic regions shorter than 100 bp were excluded from analysis.Each region was scanned for their respective motifs using motif files as well as methods outlined in Jores et al. (2021).A motif is considered to be present when the relative motif scores are above 0.85.
Octamer scan: Intergenic regions and 5′UTR sequences are trimmed based on the positions relative to the TSS outlined in Yamamoto et al. (2009) (TATA, −45 to −18; Y Patch, −50 to +50; CA, −35 to −1; GA, −35 to +75).Each region was scanned for the presence of octamer motifs from the TATA, Y patch, GA, and CA lists outlined in Yamamoto et al. (2009).If the specified region contained at least one motif for a given promoter type, it was labeled as positive.
Ortholog analysis
(Relevant files: 5_At_gene_ranking.Rmd, 6_Identifying_orthologs.Rmd, 7_Processing_orthologs.Rmd) The Arabidopsis transcriptome was filtered to only include primary transcripts, and mitochondria as well as chloroplast transcripts were removed.Top 5% stable genes by CV, bottom 5% stable genes by CV, and a random set of 1,343 genes (5%) were randomly selected.
Using biomaRt in R, the Ensembl and Phytozome databases were queried for orthologs for the selected set of Arabdiopsis genes for each species (Durinck et al. 2009).Orthologs from A. hypogaea, C. arietinum, and S. tuberosum were retrieved from Phytozome, and the rest of the species from Ensembl.For an analysis in Fig. 3b, significance tests were done by ANOVA followed by Tukey's HSD (honestly significant difference).For each target gene that matched to an Arabidopsis transcript, only the highest expressing transcript was kept.If an Arabidopsis transcript retrieved more than one orthologs from a target species, these pairs of orthologs were removed from analysis.We only kept orthologous gene groups that had a "change" in expression pattern, defined as crossing the 50th percentile CV, in 2 target species, and the remaining candidates were manually mapped onto the phylogenetic tree to identify gene groups that had changes in expression patterns that are consistent with the tree.This means having changes in expression patterns that are mostly found in the same clade.Gene trees were built for these candidates using blast-align-tree (https://github.com/steinbrennerlab/blast-align-tree),and the candidate lists were further trimmed based on the gene trees to ensure a 1:1 relationship between all members in the gene group.
Results
We began this project by identifying species with RNA-seq Atlases, which we defined as datasets containing at least 10 different tissue samples and with samples that represented at least 2 distinct developmental stages.Although RNA-seq measures RNA levels and thus can be affected by posttranscriptional regulation, it is the best available proxy for transcriptional activity (Wang et al. 2009).Details regarding the dataset and their references can be found in Supplementary Yao et al. 2017;McCormick et al. 2017aMcCormick et al. , 2017bMcCormick et al. , 2017c;;Penin et al. 2018;Ramírez-González et al. 2018;Brian et al. 2021).Figure 1a shows a phylogenetic tree of the 15 species that fit our criteria, which spans a range of angiosperms including multiple monocots and eudicots.The datasets were processed through a custom pipeline (Fig. 1b-d).In brief, Kallisto was used for RNA-seq quantification, and MultiQC was used to summarize all the outputs up till DESeq2 (Supplementary Data 1) (Bray et al. 2016;Ewels et al. 2016).For each species, normalized counts from each tissue were then converted to expression uniformity information using the CV as a metric.In this analysis, lower CV corresponds to more uniform expression, meaning comparable expression in all tissues.Higher CV, on the other hand, means less uniform and more tissue-specific expression.To facilitate comparison between species, we used percentile rank of CV as the primary metric, which represents the percentage of CVs that are less than or equal to a given value.
To determine whether the characteristic differences in expression patterns between different core promoter types seen in Arabidopsis hold across all the species in our dataset, we extracted the −100 bp to +100 bp region around the TSS as the "core promoter region" for 40% of all promoters in each species (Fig. 1d).TATA-box, Y patch, and Inr motifs were screened according to the methods detailed in Jores et al. (2021).The regions scanned for each motif are more relaxed than their known regions in Arabidopsis, as we applied the scan to multiple species and wanted to avoid falsely labeling promoters as Coreless.Illustration of the regions scanned for each core promoter type is illustrated in Fig. 1e.
The subset of promoters for each species was labeled as either TATA or Y patch.If a promoter did not contain either element, we labeled them as "Coreless."It is important to note that the definition of Coreless promoters introduced by Yamamoto et al. ( 2009) is somewhat more strict than the definition used here, as they also screened for the relatively rare CA and GA core promoter elements.We then plotted the distribution of CV for each species, broken down by core promoter types (Fig. 2).Similar results for Y patch, Inr, and a random set of promoters that serve as a control are in Supplementary Fig. 1.
Using microarray data, Yamamoto et al. (2011) had found that Coreless promoters are underrepresented in genes that respond to stimulus (i.e. more constitutively expressed).However, we did not see the same trend until we removed the lowest expressing transcripts from the analysis (transcripts with an average of less than 1 read).These extremely low read counts are likely to be unreliable, and an analysis of the weak-expressing genes that we removed revealed that they bias toward higher CV when compared to the rest of the genes in the dataset (Supplementary Fig. 2).This same minimum read number requirement was then applied to the rest of the species.
Overall, the expected trend of TATA-box-containing promoters being overrepresented in conditionally expressed genes is observed across all the species analyzed (Fig. 2a).In contrast, the Evolution of gene expression control | 3 trend of Coreless promoters being associated with more uniformly expressed genes was weaker and only observed in a subset of the angiosperms.The monocots (Z.mays, T. aestivum, and S. bicolor) all exhibited a strong trend of Coreless promoters associating with conditionally expressed genes (e.g.those with higher CV values), along with an enrichment of Y patch-containing promoters being associated with uniform expression (Fig. 2b and Supplementary Fig. 1).This inverted pattern could be explained in 2 ways given that a promoter not labeled as containing a TATA-box or Y patch is labeled as Coreless.Under this classification scheme, an apparent enrichment by one category of promoters could reflect a surplus of that type of promoter in a particular CV ranking bin or a depletion of the other 2 promoter categories in that same bin.The latter explanation seems more likely for the Y patch promoters in monocots, but further experimental tests are required to fully resolve this question.The surprising pattern of Coreless genes "flipping" their behavior in monocots might also reflect an as-yet-undefined promoter element that is lumped into the Coreless category here.For example, there may be slight differences in TATA motif, as has been described for maize (Mejía-Guerra et al. 2015).Accounting for this known source of variation, we did not see any significant decrease in the Coreless trend toward conditionally expressed genes (Supplementary Fig. 1).
Our analysis was able to identify correlations between promoter type and expression pattern across many genes and led us to wonder whether the presence or absence of a specific core promoter type was sufficient to determine expression pattern.To test this hypothesis, we decided to focus on orthologous genes found across the species examined in this study (Fig. 1c).This approach allows us to test if changes in core promoter architecture during evolution led to changes in the uniformity of expression.We started by finding orthologs of Arabidopsis genes, as Arabidopsis has the most well-annotated genome and has 47,684 transcripts with a nonzero transcript count in at least one of the sampled tissues.Of this total, we retained only the primary transcripts of each nonmitochondrial and nonchloroplast genes, resulting in a final total of 26,842 genes.The top 5% most uniformly expressed and top 5% most conditionally expressed genes were selected based on CV, along with a randomly selected control set of equal size (n = 1,343 genes in each category).The sets of genes were used to query the Ensembl or Phytozome database for orthologs in the rest of the 14 species in our dataset (Goodstein et al. 2012;Cunningham et al. 2021).The orthologs were searched for in the database where their reference transcriptome was downloaded to ensure matching of the target transcript name with the transcript counts.Orthologs of A. hypogaea, C. arietinum, and S. tuberosum were found using Phytozome, and the remaining species were found in Ensembl.
Orthologous genes tended to retain their expression pattern across species (Fig. 3a).While orthologs corresponding to the random set of Arabidopsis genes were spread quite uniformly across distribution of CV rankings, the orthologs of the top 5% uniformly expressed set of Arabidopsis genes were skewed heavily toward the more uniform, lower percentage CV rankings.The orthologs of the 5% most conditionally expressed set of Arabidopsis genes showed a more subtle skew toward higher CV ranking.This trend was more visible in some species than others, partially due to the overall lower gene counts.One notable trend was that the most conditionally expressed gene set retrieved significantly fewer orthologs compared to the random or most uniformly expressed gene sets (Fig. 3b).This is possibly because uniformly expressed genes are associated with more fundamental cellular functions and therefore more likely to be conserved across species (Klepikova et al. 2016a).Following a similar logic, conditionally expressed genes tend to be more tissue-specific and therefore are more easily lost during species divergence.
Even when looking at genes that fell at the tail ends of the expression uniformity distribution from Arabidopsis, we could find orthologs positioned across the full range of CV rankings (Fig. 3a).In other words, expression uniformity of a given gene can vary dramatically across species.These instances give us a unique opportunity to examine if the change in expression uniformity is predictive of changes in core promoter architecture.To investigate this further, we curated a set of evolutionarily related genes that showed this type of switching behavior.We limited our analysis to a set of 1:1 orthologs, where each species contributes a maximum of one gene for each orthologous gene set (Zahn-Zabal et al. 2020).This is to maximize confidence in evolutionary relatedness and minimize complications from gene duplications.Starting with the set of all the orthologs retrieved through Ensembl and Phytozome, we first filtered the target orthologs to count only the highest expressing transcript for each gene, thereby limiting each gene to a single representative transcript.We filtered the list of orthologs to include Arabidopsis transcripts that had only a single ortholog found in the transcriptome of each other species.We considered any target transcripts that crossed the 50th percentile in CV as "changing expression pattern," and we limited the Arabidopsis transcripts to those where transcripts changed expression pattern in at least 2 different species.These changes were mapped onto the phylogenetic tree to identify clusters where changes could be associated with a specific phylogenetic node.
For the most promising Arabidopsis transcripts, de novo gene trees were built by performing BLAST searches against the rest of the species.When more than one ortholog was found in any of the species, that species was removed from the set (Fig. 1c).With our stringent selection criteria, 7 high-confidence orthologous gene groups were found with 3 Arabidopsis transcripts (AT3G17020.1,AT3G18215.1,and AT4G40045.1)that are from the top 5% uniformly expressed genes list and 4 Arabidopsis transcripts (AT1G04700.1,AT5G17400.1,AT5G18910.1,and AT5G20410.1)from the top 5% conditionally expressed genes list.A summary of the filters and numbers of target orthologs as well as Arabidopsis query transcripts left after each step can be found in Supplementary Table 2.
The promoters for these 7 sets of orthologs were extracted, and TATA, Y patch, and Inr motifs were screened as described above (for clarity, this analysis will be referred to as motif scan) (Fig. 1d).In parallel, these promoters were also screened for TATA, Y patch, Inr, CA, and GA octamers as defined in Yamamoto et al. (2009) (octamer scan), and an illustration of the regions scanned for each octamers can be found in Supplementary Fig. 3. Comparing the 2 methods, the motif scan resulted in more identified core promoters due to its more relaxed parameters.Only 2 promoters were labeled as Y patch by the octamer scan but not the motif scan.A core promoter element was considered present if either method returned a positive result.The identification of the genes and a complete list of core promoter elements identified can be found in Supplementary Table 3. Within each orthologous gene group, changes in the presence of TATA or Y patch elements did not appear to correlate with changes in expression patterns (Fig. 4).In each group, there are examples of promoters having the same core promoter type but different expression patterns, as well as cases of promoters having the same expression pattern but different core promoter types.Since there were only 7 TATA-box-containing promoters (∼15.5% of the promoters), we were not able to observe instances where 2 related TATA-box-containing promoters have different expression patterns, but there are multiple instances where changes in presence of TATA motif did not change expression pattern.This result suggests that the presence or absence of a TATA or Y patch is not sufficient to change expression pattern.
Discussion
Understanding the rules that govern the performance of natural promoters could inspire the construction of synthetic promoters that are able to retain their behavior over multiple generations in transgenic plants.Here, we mined RNA-seq atlases from 15 different angiosperms to extract patterns connected to the relative specificity or uniformity of gene expression across developmental stages and tissue types.We found that the previously observed trend that TATA-box-containing promoters are overrepresented in conditionally expressed genes is highly conserved.In contrast, the relative uniformity vs specificity of expression from Coreless promoters is not as well conserved.Coreless promoters from eudicots analyzed in this study were, in general, more highly associated with uniform expression patterns.Coreless promoters from monocot species, however, exhibited the opposite trend.In addition, we found that promoters tend to maintain their expression pattern across species, with the caveat that uniformly expressed genes are more likely to have identifiable orthologs when compared to conditionally expressed genes.Last, by tracking expression pattern and promoter type within the evolutionary trajectory of individual genes, we could test the hypothesis that promoter architecture is responsible for the level and pattern of gene expression.We found that none of the core promoter types screened for in this work is consistently associated with changes in expression pattern or strength.This suggests that while there may be a correlation between promoter architecture and transcription parameters, the underlying molecular mechanism that determines whether a gene is conditionally or specifically expressed remains unknown.) are grouped by percentile ranking of 0.66-1.00(high), 0.33-0.66(mid), or 0.00-0.33(low) and color coded accordingly.Presence or absence (gray) of TATA and Y patch motifs is indicated.*A.thaliana has no identifiable core promoter as the intergenic region is only 8 bp.
From a synthetic biology perspective, there are 2 major implications from the analysis described here.First, the hope of finding strong, constitutive natural promoters that work across diverse species may be even more challenging than we originally thought.For example, it is unlikely that there are natural promoter architectures that will work equally well as constitutive promoters in monocot and eudicot crops.Second, and more hopefully, our analysis suggests that the approach currently being taken by multiple labs for engineering synthetic promoters (Belcher et al. 2020;Brophy et al. 2022;Cai et al. 2020;Moreno-Giménez et al. 2022) is likely to find solutions that work well across species.The overall scheme of many of these groups is to take a core promoter region containing a TATA-box and then add natural cis-elements or synthetic transcription factor target sequences upstream of the core promoter to modulate expression strength or pattern.We found that the same core promoter could support widely varied expression patterns across evolution.
While the general trend that TATA-box-containing promoters are found in genes that are expressed in specific times and/or locations was highly conserved, close study of single gene phylogenies revealed that core promoters are not the determinant for expression pattern.The overall lack of pattern for TATA and Y patch motifs on the phylogenetic tree also suggests that the gain and loss of these promoter elements, at least in the genes studied here, are sporadic events that do not experience strong positive selection for maintenance.Our analysis leaves us with the question of why there is a discrepancy between the observed general preferences for core promoter types regarding expression uniformity but simultaneously a lack of contribution of core promoters to expression pattern, and whether there are mechanistic differences between Coreless and TATA promoters when they can achieve similar expression patterns.It is likely that constitutive expression can be achieved in at least 2 ways: by combining multiple tissue-specific elements that work together to achieve constitutive expression or by including "universal" elements that are broadly recognized across tissues.The analysis of the Cauliflower Mosaic Virus 35S promoter (p35S) showed that progressive deletion reduced promoter activity without affecting expression pattern and identified a short enhancer element that conferred constitutive expression (Odell et al. 1985;Fang et al. 1989;Hayashi et al. 1992).A more recent analysis of the p35S and other constitutive promoters, however, revealed multiple tissue-specific transcription factor binding sites (Cai et al. 2020).Performing a similar functional analysis on multiple Coreless promoters will be needed to determine whether the 2 classes of promoters achieve uniform expression using similar mechanisms.In addition, a more granular deletion analysis targeting individual cis-elements for both classes of promoters in multiple species, along with close examination of expression pattern, will be needed to fully map out promoter logic sufficiently to guide future engineering efforts.
Fig. 1 .
Fig. 1.An outline of the bioinformatics pipelines.a) The 15 angiosperms included in this study and their phylogenetic relationship.b-d) The 3 major data processing steps performed in the study.Detailed parameters are included in the Methods section.Reference genomes, transcriptomes, and gene orthologs were retrieved via either Ensembl (Cunningham et al. 2021) or Phytozome (Goodstein et al. 2012) databases depending on the species.e) Regions searched for each core promoter motif.
Fig. 2 .
Fig. 2. Distribution of relative specificity or uniformity of TATA-box-containing and Coreless promoters.Higher CV rankings indicate more specificity, while lower CV rankings indicate more uniformity.A random subsampling of 40% of promoters from each species are shown here.a) TATA-box containing promoters, and b) promoters termed Coreless as they lacked both TATA-box and Y-path motifs.Colors correspond to phylogeny shown in Fig. 1a.
Fig. 3 .
Fig. 3. Genes that show uniform expression in A. thaliana tend to behave similarly in other species.a) Distribution of CVs for orthologs of uniformly expressed (triangle), conditionally expressed (square), or random (circle) A. thaliana genes.The color of boxes around species names corresponds to Fig. 1a.b) Percent of orthologs found for each set of A. thaliana genes for each species.Each dot corresponds to a single species.Statistical tests were performed by 1-way ANOVA followed by Tukey HSD.All 3 groups are significantly different from one another.
Fig. 4 .
Fig. 4. Individual gene trees where expression uniformity changes can be observed.a-d) The gene is conditionally expressed in A. thaliana but uniformly expressed in another species.e-g) The gene is uniformly expressed in A. thaliana but conditionally expressed in another species.The Arabidopsis genes in each group are as follows: a) AT1G04700, b) AT5G17400, c) AT5G18910, d) AT5G20410, e) AT3G17020, f) AT3G18215, GAT4G40045.CV and expression strength (Exp.) are grouped by percentile ranking of 0.66-1.00(high), 0.33-0.66(mid), or 0.00-0.33(low) and color coded accordingly.Presence or absence (gray) of TATA and Y patch motifs is indicated.*A.thaliana has no identifiable core promoter as the intergenic region is only 8 bp. | 6,428.8 | 2023-09-11T00:00:00.000 | [
"Biology"
] |
On conformal field theories based on Takiff superalgebras
We revisit the construction of conformal field theories based on Takiff algebras and superalgebras that was introduced by Babichenko and Ridout. Takiff superalgebras can be thought of as truncated current superalgebras with Z-grading which arise from taking p copies of a Lie superalgebra g and placing them in the degrees s=0,...,p-1. Using suitably defined non-degenerate invariant forms we show that Takiff superalgebras give rise to families of conformal field theories with central charge c=p sdim(g). The resulting conformal field theories are defined in the standard way, i.e. they lend themselves to a Lagrangian description in terms of a WZW model and their chiral energy momentum tensor is the one obtained naturally from the usual Sugawara construction. In view of their intricate representation theory they provide interesting examples of conformal field theories.
Introduction
Conformal field theories (CFTs) based on affine Lie algebras and superalgebras are the basic building block for many constructions within CFT, for instance the GKO coset construction [1], orbifolds [2] or quantum Hamiltonian reduction [3,4]. Also known under the name Wess-Zumino-Witten models (WZW models), the defining data of such CFTs consists of a finite-dimensional Lie (super)algebra g, an associated Lie (super)group G and an associated invariant bilinear form ·, · : g ⊗ g → C with suitable non-degeneracy properties. The basic properties of WZW models have been illucidated in a series of papers [5,6,7], both from a geometric and an algebraic perspective.
Historically, the attention mainly focused on WZW models based on simple or abelian Lie algebras. With these ingredients one can readily understand WZW models based on (compact) reductive groups. The first example of a WZW model based on a non-reductive group is the famous Nappi-Witten model [8] which describes string theory on a plane wave background. This paper immediately triggered a lot of activity in this area which is neatly summarized in [9]. One of the milestones was the exact and complete solution of the H 4 plane wave model [10,11,12,13] which is based on theİnönü-Wigner contraction or Penrose limit of SL(2, R) × U (1).
More recently, Babichenko and Ridout took up the subject again in an effort to provide new examples of solvable logarithmic conformal field theories [14]. They introduced a special class of CFTs based on affine Takiff superalgebras which were obtained by combining p = 2 copies of the underlying current algebra in an indecomposable way. In a different line of research, Rasmussen and Raymond studied what they called Galilean contractions of affine Lie algebras and W-algebras [15,16], thereby arriving at order p generalizations of affine Takiff algebras. It should be noted that both of these constructions were completely algebraic and in some regards seemed rather ad hoc, at least from a physical perspective.
The goal of the present note is to put the results of [14,15,16] in an appropriate structural context and point out that the construction is indeed very natural, from both a physical and mathematical perspective. In particular, we emphasize the point that the conformal field theories considered in [14] can be regarded as arising from genuine WZW models. Together with general results on Lie (super)algebras with non-degenerate invariant form [9] this implies the existence of a conformal energymomentum tensor which has central charge c = p sdim(g). Maybe the most important aspect of our work is that it enables the use of geometric methods in the solution of these models. For the special case of a Takiff superalgebra based on GL(1|1) this has already been employed in [17].
The paper is organized as follows. In Section 2 we first of all introduce Takiff superalgebras of arbitrary order p and discuss some of the associated structures such as invariant forms and automorphisms. The affinization of general finite dimensional Lie superalgebras with invariant form is reviewed in Section 3. It is shown that the affinization of Takiff superagebras gives rise to a conformal energy momentum tensor by means of the Sugawara construction and hence to a CFT. We also argue that (or rather in which sense) this construction gives the same result as applying the Takiff construction to affine Lie superalgebras. Finally, Section 4 applies the abstract considerations to specific examples in order to relate this work to the existing literature before the Conclusions summarize the present work and point out directions for future research.
Definition and structure
The basic theory of Lie superalgebras was developed by Kac [18]. For our current paper we only need two relatively simple concepts, the definition of a Lie superalgebra and the notion of a metric. 1 A Lie superalgebra g is a Z 2 -graded vector space g = g 0 ⊕g 1 with a Lie bracket [·, ·] : g⊗g → g which respects the grading in the sense that [g i , g j ] ⊂ g i+j and, moreover, satisfies the usual properties such as graded anti-symmetry and graded Jacobi identity [18]. A metric is an even bilinear form ·, · : g ⊗ g → C which is non-degenerate, graded symmetric and which, in addition, satisfies the invariance property Throughout the text we will work with a basis of generators J a of g. The generators will always be assumed to be homogenous elements of g, i.e. have a well-defined degree d a ∈ Z 2 . With respect to this fixed basis we can define the structure constants f ab c and metric tensor κ ab via The two tensors f ab c and κ ab satisfy obvious relations that reflect the structural properties of g and its metric that have been mentioned before.
Following the reasoning of [14] (see also [19]) we would like to extend the algebra g by taking p ≥ 2 copies of it and by defining a new Lie bracket on the resulting space T p (g) = g (0) ⊕ · · · ⊕ g (p−1) . Here g (s) = g as vector spaces and the Z 2 -grading is simply inherited from g. The construction becomes most transparent if we introduce a formal nilpotent even variable Θ that satisfies Θ p = 0 but Θ p−1 = 0. The space g (s) can then be identified with the vector space g ⊗ CΘ s and the Lie bracket is defined by the simple assignment where X, Y ∈ g. Using the properties of g, it can easily be checked that this definition gives rise to a Lie superalgebra. Following the suggestion of [14], we will call this Lie superalgebra T p (g) a Takiff superalgebra.
The Takiff superalgebra T p (g) has a natural Z-grading which is localized in the degrees s = 0, . . . , p − 1 and which has the original Lie superalgebra g as its grade 0 Lie subsuperalgebra. By setting g (s) = {0} for s < 0 and s ≥ p we can write For any choice s ∈ Z the space I s = r≥s g (r) is an ideal. In particular, we realize that the top degree subspace g (p−1) is an abelian ideal. It is thus evident from the construction that T p (g) is not semi-simple and generally not even reductive.
Invariant forms
The construction of the Takiff superalgebra T p (g) suggests that all relevant structures such as invariant forms, automorphisms etc. are inherited from its grade 0 Lie subsuperalgebra g, at least as long as they are assumed to respect the grading. As we shall now discuss, this is indeed the case for the metric, at least if g is semi-simple. Let us start by defining natural metrics on T p (g). For this purpose let ·, · s with s = 0, . . . , p − 1 be a family of metrics on g. This collection of metrics defines a metric on T p (g) by using the assignment In this equation it is understood that the right hand side vanishes whenever r + s ≥ p. It is easy to check that the form ·, · defined in this way is even, graded anti-symmetric and invariant. Actually, in order to ensure the non-degeneracy of the metric it is only required that the top degree metric ·, · p−1 is non-degenerate while the other metrics ·, · s with s < p − 1 could equally well be degenerate. While it is straightforward to check that the assignment (5) satisfies all desired properties, it is important to know that all metrics have to be of this form, at least if g is semi-simple. In order to prove this assertion let us consider an arbitrary metric ·, · on T p (g). The first observation is that X ⊗ Θ r , Y ⊗ Θ s can only depend on r + s but not on r and s individually. Indeed, if g is semi-simple there exist elements U α and V α such that X = α [U α , V α ]. One then obtains the chain of equalities This proves that the metric has to be of the form (5) but it leaves open whether the right hand side is g-invariant and graded symmetric. However, the latter properties follow easily from the corresponding properties of ·, · . Apart from proving our assertion, our calculation also shows that the metric has to vanish for r + s ≥ p. If g fails to be semi-simple, there are more possibilities for defining metrics on T p (g). This is obvious for Takiff superalgebras based on abelian Lie algebras g (where the Z-grading ceases to have a special meaning) and can also be verified in specific examples such as T 2 gl(1|1) . Let us finally verify under which conditions the form defined in (5) is non-degenerate. Indeed, for fixed X ∈ g and t ∈ {0, . . . , p − 1} consider the equation When choosing s = p − 1 − t ∈ {0, 1, . . . , p − 1} the scalar product on the right hand side becomes X, Y p−1 . If we assume the form ·, · p−1 to be non-degenerate it then follows that Y = 0 and hence Y ⊗ Θ s = 0. It is important to emphasize that only the top form ·, · p−1 needs to be invariant while ·, · r for 0 ≤ r < p − 1 can well be degenerate. It is not possible to single out another component r < p as a non-degenerate form instead since then s = r − t would need to be negative for t = p − 1.
The invariant form in Eq. (5) can be further simplified if we assume that g is simple. In this case all invariant forms are proportional to one standard non-degenerate form K on g which for most of the cases can be chosen to be the Killing form. 2 In other words, we have ·, · r = k r K(·, ·). In our treatment of conformal field theories later on the constants of proportionality k r will play the role of what usually is called the level of a WZW model. A Takiff algebra T p (g) based on a simple Lie superalgebra g thus admits p distinct levels and non-degeneracy of the associated metric (only) requires k p−1 = 0. In this special case we recover a relation to the description of affine Takiff superalgebras as higher order Galilean affine superalgebras as discussed in [16]. Affine Takiff superalgebras will be discussed in more detail in Section 3.
It is well known that every Lie superalgebra admits a natural invariant form, the Killing form K(·, ·). For a Takiff superalgebra, the Killing form turns out to be generally degenerate even if the Killing form on g itself is non-degenerate. Indeed, a simple calculation yields where on the right hand side the symbol K(·, ·) refers to the Killing form on g. The previous equation can easily be obtained from the definition of the Killing form, together with an explicit evaluation on a basis of generators, The factor δ r+s,0 = δ r0 δ s0 in (8) arises due to the requirement r + s + t = t that is imposed by taking the supertrace (recall that r, s ≥ 0). The factor p results from the summation over t = 0, . . . , p − 1.
We wish to stress the fact that, even though the Killing form is degenerate (for p ≥ 2), the Takiff superalgebra T p (g) admits whole families of natural non-degenerate invariant forms, see eq. (5). Also, the Killing form (8) is consistent with the form (5), i.e. it may be thought of as being induced from a family of invariant forms on g.
In this paper we will exclusively be concerned with Takiff superalgebras T p (g) which come equipped with the additional structure of an invariant form. Except stated otherwise this invariant form will always be assumed to arise from a family of invariant forms on g in the sense of definition (5).
Automorphisms
For the study of representations of a Lie superalgebra but also for the investigation of D-branes in the associated WZW model it is important to have a detailed knowledge about its automorphisms, especially the isometric automorphisms. Let Ω be an automorphism of g. The automorphism Ω induces a grade-preserving automorphism on the associated Takiff superalgebra T p (g) by setting It can easily be checked by explicit calculation that this indeed defines an automorphism, = 2 Exceptions arise in parts of the A-and the D-series where the Killing form might vanish identically [18].
If g comes equipped with a family of invariant forms ·, · s and Ω is isometric on g for all of these, the same will be true for the induced automorphism on T p (g) with the natural induced metric. Indeed, a straightforward calculation gives This statement will turn out to be important in the context of studying affinizations in Section 3. As a typical example let us mention the case of a simple Lie superalgebra g. As pointed out in Section 2.2 in this case all invariant forms ·, · s are proportional to a single invariant form and the condition on the isometry of Ω is much less restrictive than in the general case.
3 The affinization of finite dimensional Takiff superalgebras
Definition and Sugawara construction
In their paper [14], Babichenko and Ridout showed that the application of the Takiff construction (with p = 2) to an affine Lie superalgebra leads to a current superalgebra which gives rise to a conformal energy momentum tensor. A similar philosophy was adopted in [15,16]. In fact, as we will point out now, it seems more natural to start with a finite dimensional Takiff superalgebra with a metric and then to define the associated affinization. Both constructions commute in a sense as will be shown, and should therefore be regarded as equivalent. However, our way of thinking allows for a geometric interpretation and establishes immediately that the conformal field theories considered in [14] are just ordinary WZW models (even though associated with non-reductive Lie groups). Using the knowledge that has been gained about WZW models on simple or reductive Lie groups [5,6,7] and, more recently, supergroups [20,21,22,23,24], our work opens a realistic perspective of being able to solve explicit examples, see also [17]. The affinization of a Lie superalgebra h with invariant form ·, · follows the standard recipe. In order to simplify notation we will explain it for a general Lie superalgebra h with generators J A . 3 The affinizationĥ of h (associated with the invariant form ·, · ) is the central extension of the loop superalgebra h ⊗ C[t, t −1 ] which is defined by the relations where K denotes the central element. In physical applications, this algebra is always realized on representations where K assumes a fixed number k. If h is simple and the invariant form is normalized appropriately k is known as the level. In any case, after replacing K by a number we can drop K from Eq. (14) and absorb the corresponding constant into a redefinition of the invariant form ·, · and this convention will be understood from now on. The relation between affinizations and conformal field theories is best understood in terms of the Sugawara construction which assigns a Virasoro algebra to the affinization, at least for generic choices of the invariant form. 4 The Sugawara construction is best explained in terms of the currents J A (z) = n∈Z J A n z −n−1 which can be interpreted as the generating function for the modes J A n = J A ⊗ t n . With the help of the matrix κ AB = J A , J B and the structure constants [J A , J B ] = if AB C J C one can then rewrite the relation (14) in the formalism of operator product expansions as 3 The Takiff superalgebra Tp(g) is of course contained as a special case. However, it is not advisable to write down the expressions in the basis J a ⊗ θ s since there would be too many indices floating around. One can think of the index A as a multi-label (a, s). 4 One will need to avoid the "critical level" and what this means will become clear below.
It is well-known that the current algebra (15), for generic choices of κ AB , implies the existence of a conformal energy momentum tensor T (z) which satisfies The central charge c is a characteristic of the underlying conformal field theory which depends on the Lie algebra h and on the metric ·, · . 5 In order to define T (z) in terms of the currents J A (z) one needs to renormalize the metric κ AB [25,9]. The relevant metric is Ω AB = κ AB + K AB /2 where K AB is the Killing form (see Section 2). The energy momentum tensor is then given as the normal ordered product The matrix Ω AB is the inverse of the metric Ω AB . If this metric is not invertible the model is said to be at the critical level and the energy momentum tensor does not exists. 6 Since all these facts are well-established and frequently referred to in the literature we will not repeat the calculations (see however [16]). Instead we focus on the derivation of the central charge for the special case when h = T p (g) is a Takiff superalgebra. The general expression for the central charge is given by [25,9] c = Ω AB κ AB = str Ω −1 κ .
Our analysis will show that in the case of Takiff superalgebras (with p ≥ 2), the central charge can be evaluated explicitly, thereby giving rise to the value independently of the choice of metric. That the result is an integer is actually an immediate consequence of general statements that have been established in [9]. In order to understand the result (20) we need to have more explicit expressions for the original metric κ AB and the renormalized metric Ω AB . Since the latter is obtained from the former by addition of the Killing form K AB we focus on κ AB first, returning to our original labels a and s. According to Section 2, the metric κ may be written in block-diagonal form as where we used the abbreviations κ ab s = J a , J b s . However, while this form makes clear that this matrix possesses an inverse, its concrete form is not immediately obvious. It turns out to be useful to combine the degrees s = 1, . . . , p − 2 into one block and the remaining two degrees s = 0 and s = p − 1 into a separate block. The corresponding change of basis results in the matrix form The inverse can now be explicitly determined and it reads In the renormalized metric Ω which defines the energy momentum tensor, the contribution κ 0 is renormalized to a new metric κ ′ 0 (whose form is not important for the central charge), see Eq. (8). Using the general formula (19), the central charge of the energy momentum tensor can finally be evaluated to be The asterisks * symbolizes entries which are known and which can be written down but which are not relevant for the final result. During the calculation it is important to note that the supertrace is taken in the internal space, not in the matrix space.
Affinization commutes with Takiffization
Instead of starting with a finite dimensional Takiff superalgebra and considering its affinization one can also first affinize a finite dimensional Lie superalgebra and then Takiffize the result. We will now investigate in which sense both procedures can be regarded as equivalent.
The main difference in both approaches is that the affinizationĝ of a Lie superalgebra g with invariant form ·, · will introduce one central element K which will then be duplicated upon Takiffinization. The resulting elements K s = K ⊗ Θ s are all central. On the other hand, if we first proceed to the Takiff superalgebra and then affinize there will be a single central element K. We will now explain this problem in more detail and point out how this apparent mismatch can be resolved on the level of representations by absorbing the choice of some free constants ("levels") into the metric.
If J a is a basis of generators of g then J a ⊗ t n together with the central element K is a basis of generators ofĝ. The commutation relations read In a second step we Takiffize this algebra and end up with a basis of generators J a ⊗ t n ⊗ Θ s and K ⊗ Θ s and commutation relations We notice that there are p distinct central elements K s = K ⊗ Θ s appearing on the right hand side.
On the other hand we can start with the natural basis J a ⊗ Θ s of the Takiff superalgebra T p (g) and the associated family of metrics ·, · s . In the associated affinization the basis consists of J a ⊗ Θ s ⊗ t n as well as a central element K and the commutation relations become So, while there is an obvious bijection between the generators J a ⊗t m ⊗Θ r of the first and J a ⊗Θ r ⊗t m of the second approach, the number of central elements is different and consequently there is no isomorphism between the two superalgebras in question.
On the other hand the first approach is employing a unique metric ·, · while the second approach makes use of a family of metrics ·, · s where s = 0, . . . , p − 1. This permits to precisely match the number of free parameters if the central elements are treated as numbers, e.g. if we think about the action of the superalgebras in a suitable representation. In that case we could make the assumptions K ⊗ Θ s = k s ∈ C and K = k ∈ C together with the identification k ·, · s = k s ·, · .
Morally, the Takiffization thus commutes with affinization if the free parameters in both constructions are chosen appropriately.
Automorphisms
In Section 2.3 we have established that any automorphism Ω of a Lie superalgebra g can be lifted to the associated Takiff superalgebra T p (g). It is a matter of straightforward calculation that this automorphism also lifts to the affinization T p (g) provided it is compatible with the family of metrics ·, · s used to construct the central extension. Indeed, with the abuse of notation Ω(X ⊗ Θ s ⊗ t m ) = Ω(X) ⊗ Θ s ⊗ t m and the convention Ω(K) = K we immediately find Moreover, every such isometric automorphism preserves the Sugawara energy momentum tensor defined in Eq. (16) and hence gives rise to an automorphism of the full underlying vertex operator algebra. 7 This property is important for the discussion of conformal boundary conditions (D-branes) since the automorphisms may be used to glue chiral currents at the boundary of the world-sheet, see [27] and references therein. It is conceivable that there are many more automorphisms of T p (g) that are relevant in a CFT context. To name just a single example, spectral flow automorphisms which involve shifts of the mode indices and which therefore do not have an analogue in the underlying finite dimensional Takiff superalgebra T p (g) play a significant role in WZW models based on non-compact groups [28] or at fractional level [29,30]. Since the discussion of spectral flow automorphisms will rely on additional structure on g and hence T p (g) we will refrain from presenting further details in this note.
Abelian Takiff superalgebras
As was mentioned already in [14], the Takiff superalgebra T p (g) associated with an abelian superalgebra g is abelian. For this reason, the whole structure coming with the natural Z-grading of a Takiffization is somewhat void since there is no intrinsic reason for additional structures such as metrics and automorphisms to be compatible with the Z-grading. As abelian subalgebras may be interesting from a physical point of view but not from a Takiff point of view, we will disregard them in what follows.
Takiff superalgebras of order 2
Restricting our attention to Takiff superalgebras of order p = 2 allows us to make contact to the results of Babichenko and Ridout [14]. To simplify notation, we will identify J a ⊗ 1 with J a 0 and J a ⊗ Θ with J a 1 . According to the results of Section 2.1, the assignment defines a natural invariant form on T 2 (g). If g is simple, we may write κ 0 = k 0 κ and κ 1 = k 1 κ where κ is an arbitrary reference metric and k 0 and k 1 are two numbers. In matrix form the metric and its inverse now assume the simple form Here κ ab denotes the inverse of κ ab . In order to write down the energy momentum tensor for the affinization of T 2 (g) we need to renormalize the level k 0 to k 0 + g ∨ where g ∨ is the dual Coxeter number of g. Comparing with the general formula (18), we end up with The result coincides with the expression found in [14]. However, while in the latter reference the definition of the energy momentum tensor seemed to be somewhat ad hoc we now understand its precise origin.
Takiff superalgebras of order 3
In order to get some intuition for higher order Takiff superalgebras, let us finally have a closer look at the case p = 3. In this case we work with generators J a s = J a ⊗ Θ s . Restricting our attention to simple Lie superalgebras g, the metric and its inverse may be written as The corresponding energy momentum tensor reads, again taking into account the proper renormalization of the level k 0 , Of course the analysis could easily be extended to higher values of the order p. However, since the expressions get increasingly lengthy we refrain from presenting explicit formulas. The interested reader may refer to Ref. [16] where Takiff algebras are constructed and discussed from the perspective of Galilean contractions.
Acknowledgements
TQ would like to thank David Ridout for comments on the manuscript. This research was conducted by the Australian Research Council Centre of Excellence for Mathematical and Statistical Frontiers (project number CE140100049) and partially funded by the Australian Government.
Conclusions
We have provided a detailed discussion of Takiff superalgebras T p (g) and their relation to conformal field theory. We showed that every Takiff superalgebra equipped with a generic metric defines an associated WZW model with a canonical energy momentum tensor that is obtained by means of the standard Sugawara construction. As a by-product we established that, in this sense, the conformal field theories studied in [14] (for p = 2) are ordinary WZW models. We also extended the analysis to higher order Takiff superalgebras with p > 2 and this allowed us to connect to recent work on Galilean contractions of affine Lie algebras [15,16]. The main benefit of our result is that it provides a natural geometric interpretation of these conformal field theories and hence access to a rich toolkit involving, e.g. methods of harmonic analysis. Also, many questions, e.g. of representation theoretic nature, that may be quite intricate to discuss directly on the level of infinite dimensional Lie algebras can presumably be reduced to the finite dimensional setting or at least informed by the latter.
Our results are very general and with generality in mind we had to limit our exposition. In particular, we did not discuss the representation theory of the finite and affine Takiff superalgebras we constructed since we did not make any assumptions on the underlying Lie superalgebra g except for the existence of an invariant metric. For the actual solution of specific models this is the most urgent point that needs to be addressed. It is likely that key insights can be gained from induction of representations from g to T p (g) or from lifting a potential root space decomposition from g to T p (g) and analyzing the associated Verma modules. In light of the results of [15,16] an alternative avenue may consist in studying the effect of performing contractions on direct sums of g or its affinizationsĝ. This last perspective has been very successful when solving the H 4 model whose underlying Heisenberg group arises from a contraction of SL(2, R) × U (1) [10,11,12,13]. All these considerations can and should be complemented by considering the harmonic analysis on Lie supergroups associated with T p (g) and its relation to the harmonic analysis on Lie supergroups associated with g.
Recent work on WZW models based on simple Lie superalgebras has shown that the types of representations that play a role in the solution of the CFT differ quite drastically depending on whether the level is fractional or integral [30,31,32,33]. It is thus important to gain a better understanding of what it means for Takiff superalgebras to have an integral as opposed to a fractional level. This is also deeply related to the geometric question of how to describe integral 3-forms or, from a more elaborate perspective, bundle gerbes [34] on the associated Lie supergroup. Indeed, an integral 3-form is required in order to be able to define the WZW Lagrangian which includes a topological Wess-Zumino term. As far as we are aware, most of these questions have not been addressed systematically for non-reductive Lie groups, yet alone supergroups, and hence provide a strong motivation for further work in this direction. Let us also note that all of our considerations should admit a natural generalization to multi-graded Takiff superalgebras that have been introduced in the recent paper [35].
Let us finally observe that Takiff superalgebras have played a prominent role recently beyond CFT, in the discussion of integrable systems [36,37]. We hope that the point of view developed in this paper will also prove useful in that connection. | 7,361.2 | 2020-04-14T00:00:00.000 | [
"Physics"
] |
The Optogalvanic Spectrum of Neutral Lanthanum between 5610 and 6110 Å
: We report on a complete optogalvanic spectrum of a discharge burning in a La-Ar gas mixture, in the spectral range 5610–6110 Å (17,851 to 16,364 cm − 1 ). About 1900 overlapping laser scans, each between 1 and 1.5 cm − 1 wide, were necessary to cover this range. The resolution of the spectra is limited by the Doppler width of the spectral features to about 0.03 cm − 1 (or ca. 0.01 Å) and is comparable with a Fourier-transform spectrum, but the sensitivity is much higher. Indeed, we could find more than 1800 lines, from which about 800 could be classified as transitions between known energy levels. The main focus of the investigations was to discover previously unknown energy levels by means of excitation of unclassified spectral features.
between 17,851 and 16,364 cm −1 ), a complete optogalvanic spectrum was recorded in the present work in order to perform systematic investigations. About 1900 overlapping laser scans, each between 1 and 1.5 cm −1 wide, were necessary to cover this range. A small part of this spectrum was already treated in [16]. In the spectrum, more than 1800 spectral lines could be found, many more than listed in wavelength tables (e.g., [2]). Around 800 of these lines could be explained as transitions between already known energy levels due to their wavelength and their hf pattern. Additional information from a wavenumber calibrated Fourier-transform (FT) spectrum (see [22][23][24]) is also used.
A closer investigation of lines that could not be classified lead to the discovery of new energy levels. In this paper we report on 13 previously unknown energy levels. Further analysis of the spectrum is in progress.
Experiment
The source of free La atoms was a discharge, burning in an La-Ar-plasma. We used, as in several works before (e.g., [6]), a see-through hollow cathode discharge ( Figure 1). The cylindric cathode had a length of 20 mm and was made of copper. The inner part of this cylinder was bushed out with a La rod of 6 mm in diameter, and in this La rod we drilled a hole of 3 mm in diameter. Two anode rings, made of aluminum, were mounted on both sides of the cathode, at a distance of ca. 0.8 mm from the La inset, by means of ceramic holders. First the discharge housing was evacuated to approximately 10 −3 mbar, then we filled in Ar with a pressure of ca. 0.2 mbar. After applying voltage, the discharge started in Ar (weak gray-blue light), but after some minutes a sputtering process set on and the discharge emitted bright white light, indicating that now the discharge was burning mainly in metal vapor. Ar was chosen since (i) heavier noble gases support more the sputtering process, and (ii) the appearance of Ar lines in the OG spectrum can be used for precise wavelength determination of La lines within the same record.
The power supply was operated in constant current mode at 90 mA. The discharge region was cooled by a bath of liquid nitrogen in order to (i) decrease the Doppler width of the spectral lines, (ii) to increase the sputtering efficiency, and (iii) to lower the electrical noise of the discharge.
Tunable laser light was generated by a homemade ring dye laser, using a Coherent "Verdi" laser (frequency-doubled Nd-based laser, 532 nm, 5-8 W) as pump laser. The scan range of the dye laser was increased to 1.5 cm −1 by an active regulation of the angle of the thin etalon. We used the R6G dye, and we were able to operate the laser between 5600 and 6110 Å. The output power was ca. 100 mW at the edges of the wavelength range and 500 mW at the center. Before entering the discharge region, the laser light was intensity-modulated by means of a mechanical chopper. For confirmation of newly found levels later also the dyes R110 and DCM were used, covering in total the range 5500-6800 Å. First the discharge housing was evacuated to approximately 10 −3 mbar, then we filled in Ar with a pressure of ca. 0.2 mbar. After applying voltage, the discharge started in Ar (weak gray-blue light), but after some minutes a sputtering process set on and the discharge emitted bright white light, indicating that now the discharge was burning mainly in metal vapor. Ar was chosen since (i) heavier noble gases support more the sputtering process, and (ii) the appearance of Ar lines in the OG spectrum can be used for precise wavelength determination of La lines within the same record.
The power supply was operated in constant current mode at 90 mA. The discharge region was cooled by a bath of liquid nitrogen in order to (i) decrease the Doppler width of the spectral lines, (ii) to increase the sputtering efficiency, and (iii) to lower the electrical noise of the discharge.
Tunable laser light was generated by a homemade ring dye laser, using a Coherent "Verdi" laser (frequency-doubled Nd-based laser, 532 nm, 5-8 W) as pump laser. The scan range of the dye laser was increased to 1.5 cm −1 by an active regulation of the angle of the thin etalon. We used the R6G dye, and we were able to operate the laser between 5600 and 6110 Å. The output power was ca. 100 mW at the edges of the wavelength range and 500 mW at the center. Before entering the discharge region, the laser light was intensity-modulated by means of a mechanical chopper. For confirmation of newly found levels later also the dyes R110 and DCM were used, covering in total the range 5500-6800 Å.
Detected was either the optogalvanic (OG) signal (change of the power supply voltage in dependence of the laser wavelength) or the laser-induced fluorescence (LIF) light intensity, recorded by means of a lock-in amplifier, synchronized with the mechanical chopper. The light emitted from the discharge is focused by means of quartz lenses onto the entrance slit of a monochromator and the transmitted light is detected by a photo multiplier. The lock-in amplifier signal is different from zero only for lines which intensity is modulated with the chopper frequency, thus we did not notice the strong background emission of the discharge. In this way, a LIF signal was observed only if the population of the upper level of the LIF line was influenced by the laser excitation. By tuning the monochromator, we could find LIF lines and were able to determine their wavelengths. Moreover, when the monochromator was then set to a LIF wavelength, a laser scan showed the dependency of the LIF intensity versus laser frequency and mirrored the hf structure of the laser-driven transition.
The combination of all data (LIF and laser wavelengths and hf pattern of the investigated transition) allowed finding new energy levels as described in [15,16]. A sketch of the apparatus used can be found, for example, [12]. Besides lines of La I, we also find lines belonging to La II and to the carrier gas of the discharge (Ar I, Ar II). No lines belonging to cathode materials other than La (e.g., copper, aluminum) were found.
Optogalvanic Spectra
For taking OG spectra, the fact is used that laser excitation of a transition of a chemical element, participating on a gas discharge, changes the detailed equilibrium in the plasma. If the discharge is operated in constant current mode, the change of the voltage versus laser frequency mirrors the probability of excited transitions. We tried also to operate the discharge in constant voltage mode. In this mode, the voltage drop on a ballast resistor is proportional to the OG signal. There was no difference in the SNR of the OG records, and we worked in constant current mode. Usually, the voltage change is quite small, thus one needs very sensitive detection methods, such as phase-sensitive amplification, performed by a mechanical chopper modulating the laser light intensity and a lock-in amplifier. Let us consider the general level structure of La atoms shown in Figure 2 (given in more detail in [4]). Even-parity levels are distributed between the ground state and the ionization limit, while the lowest odd-parity level has an energy of 13,260 cm −1 . All strong emission lines are transitions between odd-parity levels (level 1 in the left part of Figure 2) and low-lying even-parity levels (λ nLIF1 , blue dashed line). Detected was either the optogalvanic (OG) signal (change of the power supply voltage in dependence of the laser wavelength) or the laser-induced fluorescence (LIF) light intensity, recorded by means of a lock-in amplifier, synchronized with the mechanical chopper. The light emitted from the discharge is focused by means of quartz lenses onto the entrance slit of a monochromator and the transmitted light is detected by a photo multiplier. The lock-in amplifier signal is different from zero only for lines which intensity is modulated with the chopper frequency, thus we did not notice the strong background emission of the discharge. In this way, a LIF signal was observed only if the population of the upper level of the LIF line was influenced by the laser excitation. By tuning the monochromator, we could find LIF lines and were able to determine their wavelengths. Moreover, when the monochromator was then set to a LIF wavelength, a laser scan showed the dependency of the LIF intensity versus laser frequency and mirrored the hf structure of the laser-driven transition.
The combination of all data (LIF and laser wavelengths and hf pattern of the investigated transition) allowed finding new energy levels as described in [15,16]. A sketch of the apparatus used can be found, for example, [12]. Besides lines of La I, we also find lines belonging to La II and to the carrier gas of the discharge (Ar I, Ar II). No lines belonging to cathode materials other than La (e.g., copper, aluminum) were found.
Optogalvanic Spectra
For taking OG spectra, the fact is used that laser excitation of a transition of a chemical element, participating on a gas discharge, changes the detailed equilibrium in the plasma. If the discharge is operated in constant current mode, the change of the voltage versus laser frequency mirrors the probability of excited transitions. We tried also to operate the discharge in constant voltage mode. In this mode, the voltage drop on a ballast resistor is proportional to the OG signal. There was no difference in the SNR of the OG records, and we worked in constant current mode. Usually, the voltage change is quite small, thus one needs very sensitive detection methods, such as phasesensitive amplification, performed by a mechanical chopper modulating the laser light intensity and a lock-in amplifier. Let us consider the general level structure of La atoms shown in Figure 2 (given in more detail in [4]). Even-parity levels are distributed between the ground state and the ionization limit, while the lowest odd-parity level has an energy of 13,260 cm −1 . All strong emission lines are transitions between odd-parity levels (level 1 in the left part of Figure 2) and low-lying even-parity levels (λnLIF1, blue dashed line). In our experiments we could learn that most sensitive, with regard to the OG effect, are transitions between odd-parity levels (level 1) and high-lying even-parity levels (λexc1) or-as shown in the right part of Figure 2-even-parity levels (level 2) and high-lying odd-parity levels (λexc2). For In our experiments we could learn that most sensitive, with regard to the OG effect, are transitions between odd-parity levels (level 1) and high-lying even-parity levels (λ exc1 ) or-as shown in the right part of Figure 2-even-parity levels (level 2) and high-lying odd-parity levels (λ exc2 ). For all such transitions we can expect a high OG signal. The excitation of the high-lying levels can, in principle, be detected via their decay (λ LIF1 , λ LIF2 , red dashed lines). But for levels close to the ionization limit (energies above approximately 40,000 cm −1 ), such fluorescence lines are not detected (not distinguished from the noise), and we have to conclude that the ionization probabilities due to collisions in the plasma (arrows "ion1" and "ion2") are much higher than the radiative decay rates.
Laser excitation influences the detailed equilibrium of the population of all energy levels, and the OG signal is proportional to the change of the resistivity of the plasma. Excitation of transitions between metastable low-lying levels and upper levels in the medium energy range (blue dashed in the left part of Figure 2), even their probability is high, does not change the detailed equilibrium in the discharge substantially. Thus the signal is comparable in strength to the OG signal observed when exciting from medium levels to high-lying levels (λ exc1 ). For such transitions, the laser excitation populates levels close to the ionization limit (for La, it is 44,980 cm −1 [25]), changing the detailed equilibrium in the discharge much more.
This observation can be supported by the spectrum displayed in Figure 3. In trace (c) a part of the FT spectrum [22,23] is given, showing the hf pattern of line at 6108.4821(3) Å (transition between 24,046.093 cm −1 , odd parity, and 7679.944 cm −1 , even parity), allowing to determine its center of gravity (cg) wavelength with low uncertainty. In the OG signal (trace (a)) this line is also visible, but in the high-frequency part of this trace we see two additionally lines: 6108.319 and 6108.435 Å, classified as transitions between medium-energy odd-parity levels and high-lying even-parity levels (see Table 1). The signal of these lines is of comparable strength. In the FT emission spectrum the line intensity is determined by the population of the upper levels and the transition probability, and the latter lines are not noticeable. In contrary, in the OG absorption spectrum the change of the discharge resistivity is displayed (proportional to the population of the lower level and the transition probability), and we observe high sensitivity with respect to transitions to high-lying levels. Trace (b) shows a fit of trace (a). all such transitions we can expect a high OG signal. The excitation of the high-lying levels can, in principle, be detected via their decay (λLIF1, λLIF2, red dashed lines). But for levels close to the ionization limit (energies above approximately 40,000 cm -1 ), such fluorescence lines are not detected (not distinguished from the noise), and we have to conclude that the ionization probabilities due to collisions in the plasma (arrows "ion1" and "ion2") are much higher than the radiative decay rates. Laser excitation influences the detailed equilibrium of the population of all energy levels, and the OG signal is proportional to the change of the resistivity of the plasma. Excitation of transitions between metastable low-lying levels and upper levels in the medium energy range (blue dashed in the left part of Figure 2), even their probability is high, does not change the detailed equilibrium in the discharge substantially. Thus the signal is comparable in strength to the OG signal observed when exciting from medium levels to high-lying levels (λexc1). For such transitions, the laser excitation populates levels close to the ionization limit (for La, it is 44,980 cm −1 [25]), changing the detailed equilibrium in the discharge much more.
This observation can be supported by the spectrum displayed in Figure 3. In trace (c) a part of the FT spectrum [22,23] is given, showing the hf pattern of line at 6108.4821(3) Å (transition between 24,046.093 cm −1 , odd parity, and 7679.944 cm −1 , even parity), allowing to determine its center of gravity (cg) wavelength with low uncertainty. In the OG signal (trace (a)) this line is also visible, but in the high-frequency part of this trace we see two additionally lines: 6108.319 and 6108.435 Å, classified as transitions between medium-energy odd-parity levels and high-lying even-parity levels (see Table 1). The signal of these lines is of comparable strength. In the FT emission spectrum the line intensity is determined by the population of the upper levels and the transition probability, and the latter lines are not noticeable. In contrary, in the OG absorption spectrum the change of the discharge resistivity is displayed (proportional to the population of the lower level and the transition probability), and we observe high sensitivity with respect to transitions to high-lying levels. Trace (b) shows a fit of trace (a). The OG signal (e.g., Figure 3, trace (a)) does not tell us between which levels the observed transition takes place. While scanning the laser frequency, all possible transitions having suitable frequency are excited. Due to the high-level density of the La atom, several transitions are usually displayed within one laser scan of 1.5 cm −1 . Sometimes they are partly or completely overlapping The OG signal (e.g., Figure 3, trace (a)) does not tell us between which levels the observed transition takes place. While scanning the laser frequency, all possible transitions having suitable frequency are excited. Due to the high-level density of the La atom, several transitions are usually displayed within one laser scan of 1.5 cm −1 . Sometimes they are partly or completely overlapping (blend situations) with other La I transitions, as well as transitions in Ar I or Ar II. La II transitions may also be detected.
Fortunately, all the La lines show a characteristic hf pattern, caused by the hf splitting of the involved levels. For classification of the observed lines, we use a computer program called "Elements" [15,26], which calculates transitions possible for a certain observed wavelength and shows predicted hf patterns (within the spectra of La I, La II, Ar I and Ar II). A comparison between observed and predicted hf pattern usually allows the classification. The hf constants of unclassified lines were determined from a fit of the observed pattern by means of the program "Fitter" [27]. Since the value of Q is quite small, the hf constants B of La levels are small and hard to be determined. Within this study, only the constants A could be determined. [22,23]. ** wavelength calculated from level energies. The other wavelengths were determined from wavenumber differences with respect to classified lines. wl-wavelength in standard air. tw-this work. SNR-signal-to-noise ratio in the FT spectrum. cont-continuation. ampl-amplification of the used lock-in amplifier. In column "Ref." the source of the hf constants A and B of the levels is given. If an observed structure cannot be explained as transition between already known energies, we have to conclude that at least one of the involved energy levels is unknown. For finding such unknown levels, we need to identify a known level involved. Since in the mid-energy range of the level scheme most of the levels are known, we expect that most of the unknown levels have high energies above 40,000 cm −1 . As mentioned before, LIF lines from such levels (red dashed arrows in Figure 2) are usually not observed.
If the lower level of the excited transition has odd parity (level 1 in Figure 2), this level can decay to a low-lying even-parity level (λ nLIF1 , blue dashed arrow in Figure 2). Laser excitation lowers the population of level 1, thus the intensity of λ nLIF1 is lowered when the laser light is on. This lowering is detected by the lock-in amplifier as a signal having an opposite phase compared to the decay of an upper excited level. We define such LIF lines as having "negative" LIF intensity. The laser wavelength is set to the highest peak of the hf pattern of the unclassified line, and the monochromator detecting LIF lines is tuned in a wide range until we find lines showing a "negative" LIF signal.
During our investigations it turned out that observation of such "negative" LIF lines is a very efficient method to identify the lower level of the excited transition, if the observed line can be explained as transition between a medium-energy odd-parity level and a previously unknown high-lying even-parity level. The energy of the new level is simply given by addition of the transition wave number to the energy of the mid-energy level. Once a new level is found in this way, its existence has to be confirmed by excitation from other lower odd-parity levels.
A short summary of how we found the new even-parity energy levels is as follows: (1) We selected an unclassified structure in the OG record (preferable having high SNR and well-resolved hf pattern). (2) The laser light was set to the highest peak of the structure.
(3) The monochromator was tuned in order to find at least one "negative" LIF line. Unfortunately, we observe sometimes quite strong structures in the OG signal for which we cannot find "negative" LIF lines. We suppose that these structures are caused by excitations of medium-energy even-parity levels (level 2 in Figure 2). From the excited high-lying odd-parity levels we do not expect LIF lines (λ LIF2 , red dashed) due to the high ionization probability. The "negative" LIF lines (λ nLIF2 , blue dotted arrow) also cannot be observed, since there are no odd-parity levels below 13,260 cm −1 . Thus it is usually not possible to classify such transitions, but some exceptions are possible (see the discovery of the level at 35,233.558 cm −1 , discussed further below). This may explain the fact that only four of the 233 new levels, found by laser spectroscopy, have odd parity; all others have even parity.
In Figure 4 we show some characteristic parts of the OG spectrum discussed in this paper, before we present some newly found levels. As can be noticed, only very rarely in an interval of 1 cm −1 no lines are detected (e.g., in trace (e)). Data for the classified lines (with the center of gravity (cg) wavelengths for all lines) showing up in the sample spectra of Figures 3 and 4 are given in Table 1. Energy values used in this paper are determined in a global fit from more than 2200 lines with wave numbers determined from a wavenumber-calibrated Fourier-transform spectrum [23]. Energy values used in this paper are determined in a global fit from more than 2200 lines with wave numbers determined from a wavenumber-calibrated Fourier-transform spectrum [23]. Description of the lines can be found in Table 1. Wavelengths given in this graph with two figures after decimal point are readings of our lambdameter (accuracy ± 0.01 Å).
New Even-Parity Energy Levels
In this paper we report the discovery of 12 energy levels having even parity and one level having odd parity. In all cases of even-parity levels we observed significant unclassified hf patterns in the OG spectra, tuned the laser wavelength to the highest peak of a pattern, and searched for LIF. Due to the observed "negative" LIF lines we were able to identify the lower energy level of the excited transition and could calculate the energy of the unknown even-parity level. All levels found were then introduced to our database and we calculated possible transitions to lower-lying odd-parity levels. Some lines in the FT spectra could be classified in this way. We tried to excite all transitions at the calculated wavelengths in the range of our dye lasers, and successful excitations confirmed the existence of the newly introduced levels.
The parameters of the new levels with even parity, together with the classified transitions, are listed in Table 2. A new line is treated as classified as a transition of a new level if (i) the observed and predicted hf patterns agree and (ii) the difference between the observed and the calculated transition wavenumbers is smaller than ±0.02 cm −1 . a Given as 1328.6(29) in [28]. b Given as 391.0(5)/−42 (19) in [28]. c Given as 5.0(34) in [28]. d Given as 46.6(20) in [28]. e Given as 367 (5) in [4]. f Given as 368.9(37) in [28]. Most probably a typing error in [28]-368.9 instead of 386.9.
In columns 1 to 3 the properties of the new levels are given. We could determine only the hf constant A; the value of B was assumed to be zero. In column 4 the wavelengths of the classified lines are given. Wavelengths given with four digits after decimal point correspond to lines observed in the wavenumber-calibrated FT spectrum (uncertainty ±0.0003 Å); those with three digits were determined from wavenumber differences (uncertainty ±0.003 Å); wavelengths with two digits are readings of our lambdameter, accuracy ±0.01 Å. The next five columns contain properties of the combining levels, including a reference to the used hf constants. In the last column, the remark "blend" is given, if the observed hf pattern overlaps with the pattern of another, classified line. "nf" followed by a wavelength means an observation of a "negative" LIF line, "nf+" marks a very strong LIF line, "nf-" a very weak one. "wl" means wavelength, "SNR" the signal-to-noise ratio in the FT spectrum. The value of SNR < = 10 of the FT lines mentioned in Tables 2 and 3 indicates that these lines are quite weak (e.g., in comparison with the lines at 6107.2686 Å, shown in Figure 3, trace (c) and 6108.4821 Å, OG recording shown in Figure 4, trace (a)). Table 3. Properties of the new odd-parity level and classified lines. Uncertainty of the energy of the new level ±0.010 cm −1 . wl-wavelength. SNR-signal-to-noise ratio in the FT spectrum. In column "Ref." the source of the hf constants A and B of the combining levels is given. The value of B could not be determined for the new level and was assumed to be zero. Uncertainties of A-values determined in this work to 2-sigma.
The level energies in Tables 2 and 3 were determined in the following way. First we determined-whenever possible-wavelengths more precise than the reading of our lambdameter, either from the available FT spectrum [22,23] or from wavenumber differences to well-known lines, contained in the same OG recording (e.g., Figure 4, trace (g)). As mentioned above, these wavelengths are given in Tables 2 and 3 with four or three digits after the decimal point. The energies of the combining levels were taken from preliminary results of [24] and were assumed to have a total uncertainty of ±0.004 cm −1 . For these lines, the transition wavenumbers were calculated and in a fit, by fixing the energies of the combining lower levels, the energies of the new levels were determined. We obtained statistical uncertainties between 0.002 and 0.004 cm −1 . Taking into account a possible systematic error in the calibration of the FT spectrum, the 2-sigma uncertainty of the level energies was assumed to be 0.010 cm −1 .
Sometimes it is not easy to find lines classified by a newly introduced level. We demonstrate this on the example of the line at 6108.345 Å, classified as transition from the new level at 41,350.809 cm −1 . The OG record is shown in Figure 3, trace (a). It has a large signal-to-noise ratio (greater than 100:1). The observed structure is composed of two strong lines (green and red hf pattern) and a weak third line (arrow, blue pattern) between two hf components of the second line. Both strong lines are classified. Below the structures the hf components (with theoretical intensity ratios) are drawn, as well as the centers of gravity. The difference between the cg wavenumbers of the strong lines (0.435 cm −1 , corresponding to 13046 MHz) agrees well with the wavenumber difference of 0.436 cm −1 calculated from the level energies. Trace (b): Fit of the structure taking into account saturation effects for the strong line at 6108.4821(3) Å. Trace (c): Part of the FT spectrum [22,23]. As can be seen, the resolution of the FT spectrum is only marginally lower than that of the OG spectrum, but the sensitivity is different (see Section 3). Lines 2 and 3 are not noticeable in the FT spectrum, while line 1 has a very high signal-to-noise ratio. The cg wavelength of this line can be determined from the FT spectrum to be 6108.4821(3) Å. The classification of line 2 as transition from the level 41350.809 cm −1 is based on two observations: (i) The observed structure can be fitted well assuming the predicted hf pattern of line 2 and (ii) the cg wavelength 6108.345(3) Å (determined from the wavelength of line 1 and the cg frequency difference in trace (a), and the transition wavelength calculated from the level energies, 6108.3448 Å, are practically the same (wavenumber deviation less than 0.001 cm −1 ).
New Odd-Parity Energy Level
Data and lines for the discovered odd-parity level are listed in Table 3, having the same layout as Table 2. Figure 5a shows the OG spectrum between 6075.11 and 6074.65 Å. Hyperfine patterns of two overlapping lines can be distinguished. The low-frequency line, cg wavelength 6074.914 (3) Å, could be identified as a transition involving the new level at 41,545.894 cm −1 . The stronger, high-frequency line remained unclassified. Setting the laser wavelength to its highest component, we observed strong "negative" LIF signals at 5158 and 5455 Å, indicating that the well-known odd-parity level at 19,379.395 cm −1 , J = 5/2, A = −58 MHz, is the lower level of the excited transition (corresponding to level 1 in Figure 2). Adding the cg wavenumber of the line, we got a new even-parity level at 35836.34 cm −1 . The structure could be fitted quite well assuming J = 7/2 and A = 455 MHz for the new level. Surprisingly, this level did not explain any other line from the FT spectrum or our OG spectrum, thus its existence seemed to be questionable. Additionally, a preliminary semi-empirical fit of the even level structure of La did not leave space for a level with J = 7/2 in the vicinity of 35,800 cm −1 [34]. calculated from the level energies. Trace (b): Fit of the structure taking into account saturation effects for the strong line at 6108.4821(3) Å. Trace (c): Part of the FT spectrum [22,23]. As can be seen, the resolution of the FT spectrum is only marginally lower than that of the OG spectrum, but the sensitivity is different (see Section 3). Lines 2 and 3 are not noticeable in the FT spectrum, while line 1 has a very high signal-to-noise ratio. The cg wavelength of this line can be determined from the FT spectrum to be 6108.4821(3) Å. The classification of line 2 as transition from the level 41350.809 cm −1 is based on two observations: (i) The observed structure can be fitted well assuming the predicted hf pattern of line 2 and (ii) the cg wavelength 6108.345(3) Å (determined from the wavelength of line 1 and the cg frequency difference in trace (a), and the transition wavelength calculated from the level energies, 6108.3448 Å, are practically the same (wavenumber deviation less than 0.001 cm −1 ).
New Odd-Parity Energy Level
Data and lines for the discovered odd-parity level are listed in Table 3, having the same layout as Table 2. Figure 5a shows the OG spectrum between 6075.11 and 6074.65 Å. Hyperfine patterns of two overlapping lines can be distinguished. The low-frequency line, cg wavelength 6074.914 (3) Å, could be identified as a transition involving the new level at 41,545.894 cm −1 . The stronger, highfrequency line remained unclassified. Setting the laser wavelength to its highest component, we observed strong "negative" LIF signals at 5158 and 5455 Å, indicating that the well-known odd-parity level at 19,379.395 cm −1 , J = 5/2, A = −58 MHz, is the lower level of the excited transition (corresponding to level 1 in Figure 2). Adding the cg wavenumber of the line, we got a new even-parity level at 35836.34 cm −1 . The structure could be fitted quite well assuming J = 7/2 and A = 455 MHz for the new level. Surprisingly, this level did not explain any other line from the FT spectrum or our OG spectrum, thus its existence seemed to be questionable. Additionally, a preliminary semi-empirical fit of the even level structure of La did not leave space for a level with J = 7/2 in the vicinity of 35,800 cm −1 [34]. Since the OG signal of the unclassified line is quite strong, and since the two observed LIF lines have quite high intensity in the emission spectrum, we thought that we had observed indirect LIF, which can be explained using Figure 2. If the transition between the even-parity level 2 and a new high-lying odd-parity level has a high probability, level 2 is depopulated by the laser light, and the laser light burns a "hole" into the equilibrium distribution of the populations. The plasma tries to fill Since the OG signal of the unclassified line is quite strong, and since the two observed LIF lines have quite high intensity in the emission spectrum, we thought that we had observed indirect LIF, which can be explained using Figure 2. If the transition between the even-parity level 2 and a new high-lying odd-parity level has a high probability, level 2 is depopulated by the laser light, and the laser light burns a "hole" into the equilibrium distribution of the populations. The plasma tries to fill this "hole" by population transfer from neighboring levels, and the population from the odd-parity level 1 may be transferred to level 2 via collisions, causing a decrease of the intensity of decay lines ("negative" LIF signals) of level 1.
Now we had to find the level 2 excited by laser light, and we looked for even-parity levels close in energy to 19,379.395 cm −1 . Additionally, the level should have J = 5/2 and a small value of A (concluded from the possible fit). The only even-parity level (level 2 in Figure 2) that fulfills these conditions is the well-known level at 18,776.615 cm −1 , J = 5/2, A = 9.9 MHz. A fit of the structure, shown in Figure 5b, with these values fixed gave A = 503.6 MHz for the new odd-parity level, and its energy was calculated to be 35,223.558 cm −1 . J must be 7/2. Calculated decay lines of this level did agree-with respect to wavelength and hf pattern-with two observations; at 5909.32Å (line in the OG spectrum with low SNR) and at 3949.3868 Å (line in the FT spectrum, SNR 10). The hf patterns of these lines are shown in Figure 6. Thus we think that the existence of the new odd-parity level is confirmed.
in energy to 19,379.395 cm −1 . Additionally, the level should have J = 5/2 and a small value of A (concluded from the possible fit). The only even-parity level (level 2 in Figure 2) that fulfills these conditions is the well-known level at 18,776.615 cm −1 , J = 5/2, A = 9.9 MHz. A fit of the structure, shown in Figure 5b, with these values fixed gave A = 503.6 MHz for the new odd-parity level, and its energy was calculated to be 35,223.558 cm −1 . J must be 7/2. Calculated decay lines of this level did agreewith respect to wavelength and hf pattern-with two observations; at 5909.32Å (line in the OG spectrum with low SNR) and at 3949.3868 Å (line in the FT spectrum, SNR 10). The hf patterns of these lines are shown in Figure 6. Thus we think that the existence of the new odd-parity level is confirmed. The signal-to-noise-ratio of the OG signal is not very good, and a drift of the background signal occurred. Nevertheless, the hf pattern fits well to the one predicted from the hf constants of the combining levels. Full width at half maximum (FWHM) of the simulated pattern 900 MHz. (b) Line 3949.3068 Å as observed in the FT spectrum [22,23]. FWHM of the simulated pattern 2200 MHz.
Conclusions
The paper reports an OG spectrum of La over the full wavelength range between 6110 and 5610 Å (over 1458 cm −1 ), composed of more than 1900 laser scans, each between 1 and 1.5 cm −1 wide. Circa 800 previously unknown spectral lines could be classified as transitions between already known energy levels, but more than 1000 hf patterns, which could not be classified so far, indicate the existence of a large number of hitherto unknown energy levels. The first treatment of some unclassified lines led to the identification of 12 levels having even parity and one level having odd parity, explaining altogether 87 spectral lines. Further work on finding new energy levels is in progress.
Author Contributions: TB: Experimental work, discovery of some levels. LW: Experimental work, discovery of some levels, determination of hf constants, paper writing. The signal-to-noise-ratio of the OG signal is not very good, and a drift of the background signal occurred. Nevertheless, the hf pattern fits well to the one predicted from the hf constants of the combining levels. Full width at half maximum (FWHM) of the simulated pattern 900 MHz. (b) Line 3949.3068 Å as observed in the FT spectrum [22,23]. FWHM of the simulated pattern 2200 MHz.
Conclusions
The paper reports an OG spectrum of La over the full wavelength range between 6110 and 5610 Å (over 1458 cm −1 ), composed of more than 1900 laser scans, each between 1 and 1.5 cm −1 wide. Circa 800 previously unknown spectral lines could be classified as transitions between already known energy levels, but more than 1000 hf patterns, which could not be classified so far, indicate the existence of a large number of hitherto unknown energy levels. The first treatment of some unclassified lines led to the identification of 12 levels having even parity and one level having odd parity, explaining altogether 87 spectral lines. Further work on finding new energy levels is in progress.
Author Contributions: T.B.: Experimental work, discovery of some levels. L.W.: Experimental work, discovery of some levels, determination of hf constants, paper writing. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding. | 8,862 | 2020-05-19T00:00:00.000 | [
"Physics"
] |
Recording and reconstructing 10 billion unbiased b hadron decays in CMS
The CMS experiment has recorded a high-purity sample of 10 billion unbiased b hadron decays. The CMS trigger and data acquisition systems were configured to deliver a custom data stream at an average throughput of 2 GB s−1, which was "parked" prior to reconstruction. The data stream was defined by level-1 and high level trigger algorithms that operated at peak trigger rates in excess of 50 and 5 kHz, respectively. New algorithms have been developed to reconstruct and identify electrons with high efficiency at transverse momenta as low as 0.5 GeV. The trigger strategy and electron reconstruction performance were validated with pilot processing campaigns. The accumulation and reconstruction of this data set, now complete, were delivered without significant impact on the core physics programme of CMS. This unprecedented sample provides a unique opportunity for physics analyses in the flavour sector and beyond.
Introduction
In recent years, a number of experimental results related to lepton universality tests in b hadron decays have yielded measurements [1][2][3][4][5][6][7][8][9] that are in tension with expected values from the standard model. The cited measurements, performed by the BaBar [10], Belle [11], and LHCb [12] Collaborations, are for both b→s and b→c ν transitions and the individual measurements exhibit deviations in the range 2-4σ. Collectively, they may be the first indications of the violation of lepton flavour universality (LFU) [13,14]. The confirmation of LFU violation would be a striking proof of the existence of physics beyond the SM. A key experimental observable R K is defined by the double ratio where the numerator and denominator are ratios of the branching fractions for the nonresonant B + →K + + − and resonant B + →K + (J/Ψ→) + − decays in the muonic (electronic) channel, respectively. 1 The R K * observable is similarly defined using the branching fractions for the nonresonant B 0 →K * + − and resonant B 0 →K * (J/Ψ→) + − decays. The R K and R K * observables are known with high theoretical precision [15,16] as a function of the 4-momentum transfer of the dilepton system, q 2 ( ), and thus are ideal probes for the presence of newphysics processes in rare decays due to b→s transitions.
Results related to lepton universality from the CMS experiment [17] are thus far limited: examples include the measurements of branching fractions for B 0 (s) →µ + µ − [18] and angular analyses of B 0 →K * µ + µ − decays [19]. The CMS trigger system [20] comprises a two-tier system that enables algorithms on a level-1 (L1) subsystem of custom hardware processors and a software-based high-level trigger (HLT) subsystem that runs on a farm of processors. The system already implements the required algorithms to efficiently record samples of b hadron decays in muonic final states with high purity. However, there is no corresponding trigger logic that can be used to collect an adequate sample of B +(0) →K ( * ) e + e − decays. This limitation has thus far prevented the measurements of R K and R K * by the CMS Collaboration.
A novel trigger and "B parking" strategy was deployed during the data taking period in 2018, which has enabled the accumulation and reconstruction of 10 B unbiased b hadron decays from which the measurements of R K and R K * may be derived. The data streams that serve the core physics programme of CMS are promptly reconstructed at the CERN Tier-0 data centre [21], and are generally available within 48 hours for physics analysis. The new data stream has a trigger rate of several kHz, which is beyond the standard processing capabilities of the Tier-0 centre. However, the trigger and data acquisition (DAQ) systems have the ability to record nonstandard parked data streams to extend the CMS physics programme [22]. These data streams, typically defined by relaxed inclusive trigger requirements, are not processed immediately by the CMS reconstruction software. Instead, the data are temporarily stored in local buffers at Point 5 before being transferred-unprocessed-to permanent tape storage. These data streams are processed at a later point in time, e.g. during an end-of-year or long shutdown of the LHC. The parked data streams serve analyses with complementary or extended coverage (e.g. Ref. [23]) with respect to the core CMS physics programme.
This sample of unbiased b hadron decays, unprecedented in its size, provides a unique opportunity for the discovery of new-physics processes, in the flavour sector and beyond, and it is complementary to the high-p T new-physics search programme of CMS. The trigger and B parking strategy, a new electron reconstruction algorithm, and some preliminary validation studies are described in the following sections.
Trigger strategy
The selection of bb events using a "tag-side" trigger logic in order to accumulate a sample of unbiased "signal-side" b hadron decays has been an important technique for analyses at B factories, LEP, and hadron colliders. The natural decay channels for the signal-side b hadron are unbiased by the trigger logic requirements imposed on the tag-side decay. The logic is based on the presence of a single muon, as semileptonic decays to muonic final states, b→(c→)µX, account for ≈20% of all b hadron decays.
In CMS, the same tag-side technique, coupled with existing trigger logic for muons, is used to record both the (signal-side) muonic and electronic final states required by the R K and R K * measurements. The CMS trigger logic has been tuned to record b→(c→)µX events with a purity of ≈80%, as described below. The B +(0) →K ( * ) + − decays have branching fractions of O(10 −7 ). Assuming an acceptance times efficiency (A ) of ≈10%, a large sample of O(10 10 ) bb events is therefore required to obtain O(100) events containing B +(0) →K ( * ) µ + µ − or B +(0) →K ( * ) e + e − decays. The expected yield, N(B +(0) →K ( * ) + − ), after the application of a muon-based L1 trigger algorithm during data taking in 2018 can be estimated by where f B is the fractional production rate of a particular type of b hadron relative to all b hadrons (e.g. 0.4 for B 0 and B ± ); R L1 is the rate [kHz] of positive decisions by the L1 trigger logic; P L1 is the purity of the event sample recorded by the L1 trigger logic, assumed here to be 0.3; and t LHC is the duration of the data taking period in 2018, assumed to be 7.8 × 10 6 s (i.e. six months of LHC operation with a duty cycle of 50%). The branching fraction for B + →K + + − (B 0 →K * + − ) is 4.5 (6.7) × 10 −7 [24]. Hence, assuming a L1 trigger rate of 10 kHz, the total number of events with a positive L1 decision that contain a signal-side The purity of the data stream is substantially improved through the use of tailored muon algorithms in the HLT. Studies have identified the two variables with the highest discriminating power to improve purity while maintaining acceptance to the signal processes: the muon p T and the muon impact parameter (defined as the spatial distance between the primary pp collision and the muon at its point of closest approach), expressed in terms of its measurement significance, IP sig . The latter variable leverages the lifetime of the B ±(0) meson and the characteristic displacement of the muon. The improved purity provided by the HLT algorithm is an important factor in controlling the total rate at which events are recorded by the CMS trigger system and written to tape.
The trigger strategy aims to maximise the number of B +(0) →K ( * ) + − events recorded during 2018 while ensuring that the ability of the CMS trigger and DAQ systems to deliver the core physics programme is unaffected. This is achieved by taking advantage of an increase in idle online computing resources as the instantaneous luminosity L inst decreases during each LHC fill. Specifically, as L inst decreases, the L1 and HLT trigger rates decrease, and the perevent processing load also decreases as a consequence of a reduced number of additional pp interactions within the same bunch crossing as the primary interaction (pileup). Table 1 summarises the tag-side muon trigger requirements imposed by the L1 and HLT algorithms. The L1 logic requires the presence of a muon that satifies |η| < 1.5, which helps to control rate and also improves the acceptance for the signal-side B +(0) →K ( * ) + − decays. Both the L1 and HLT requirements are relaxed through a series of settings that progressively increase the rate at which the CMS trigger system results in a positive decision with only a moderate reduction in purity. The purity, estimated from simulation, is found to be in the range 0.59-0.92, with an average value of ≈0.75 that has been validated against data by reconstructing D * + candidates from the decay B 0 →D * + µν→(D 0 π soft )µν→(K + ππ soft )µν. The trigger rates of the L1 and HLT system peak at values of ≈50 kHz and 5.4 kHz, respectively. The highest rates are observed late in an LHC fill, which results in a pileup value of ≈20 when averaged over an entire LHC fill. This value is a factor ≈2 lower than that typically observed for the standard physics data streams of CMS. Figure 1 shows the trigger rate of the CMS L1 system as a function of time during an LHC fill in 2017 (left) and 2018 (right). The left panel illustrates how the total rate decreases with time, as a consequence of the decreasing L inst during the LHC fill. The right panel illustrates Table 1. Summary of the tag-side muon trigger requirements imposed by the L1 and HLT algorithms: the L1 and HLT muon p T thresholds, and the HLT muon impact parameter significance IP sig . Also shown are the trigger purity and peak trigger rate. All values are shown as a function of the peak L inst . how the total rate is maintained close to the optimum value of ≈90 kHz by evolving the settings, as defined in Table 1. The left panel of Fig. 2 shows the trigger rates of the CMS HLT system as a function of time during an LHC fill in 2018, for both the standard physics and B parking streams. Sharp increases in the rate for the B parking stream occur throughout the LHC fill, as the settings are evolved, while the rate decreases monotonically for the standard physics data streams.
Data parking
The DAQ system is able to handle the additional load from the B parking stream up to a limitation determined primarily by the data transfer from local storage buffers at Point 5 to tape resources available via the Tier0 centre. The trigger strategy outlined in Sec. 2 delivers a rate of ≈2 kHz when averaged over an LHC fill, which corresponds to a throughput of ≈2 GB s −1 . This throughput, when averaged over a timescale of several days, can be sustained without compromising the performance of the CMS DAQ system. The allocation of higher rates later in the LHC fills helps to load-balance the DAQ system. At the beginning of the LHC Run 2, CMS allocated tape resources to accommodate the parking of data (and a copy) at an average rate of ≈500 Hz during 2016, 2017, and 2018 to support the analysis of the scouting data stream [22]. The resources for 2017 and 2018 were reallocated to accommodate the new B parking proposal. Assuming a single copy, these resources are sufficient to permanently store the B parking data stream.
Event reconstruction and validation
The B parking sample was accumulated during the period June-November 2018. The sample comprises 12 B events, recorded with high purity triggers, and contains ≈10 B unbiased b hadron decays. The size of the single-copy unprocessed data sample is 7.6 PB. The reconstruction of the B parking sample occurred during the LHC long shutdown, in the period May-December 2019. The sample is permanently available as an analysis-level data format (MINIAOD) with a reduced footprint. Table 2 summarises the composition of the sample.
Approximately 7% of the data sample, enriched in dielectron final states from b→s transitions, is also temporarily available in the raw and AOD data formats, which permits further developments of algorithms and validation studies. A "pilot" reconstruction campaign, comprising a small fraction of the full data set, O(1%), was undertaken early in the data taking period to allow the validation of the trigger and parking strategies. The right panel of Fig. 3 shows the invariant mass distribution obtained from candidate B + →K + (J/Ψ→)e + e − decays using the standard CMS reconstruction software. This is the first observation from CMS of b→s transitions in the dielectron final state, obtained from the pilot campaign, which demonstrates the rich physics potential of the B parking sample. The trigger purity studies, based on the reconstructed D * + candidates, were also based on the pilot campaign.
Electron reconstruction
A crucial component of the R K and R K * measurements is the ability to efficiently identify electrons down to very low transverse momenta. The left panel of Fig. 3 shows the generator-level p T distributions for the daughter particles from B + →K + + − decays. The p T distributions are very soft, with those for the kaon and subleading lepton peaking at ≈1 GeV. The right panel of Fig. 3 shows the efficiency to reconstruct electrons as a function of the generator-level p T , as obtained with the CMS default electron reconstruction algorithm (blue square markers). The efficiency is essentially zero for the region 0 < p T < 2 GeV and in the range 0.2-0.8 for the region 2 < p T < 10 GeV. A custom electron reconstruction algorithm, optimised for the low-p T regime, has been developed for the B parking data set. As for the standard CMS electron algorithm, the determination of the charged-particle track parameters for electron candidates, in the presence of bremsstrahlung energy loss, relies on the use of a Gaussian sum filter (GSF) approach [26]. The "GSF tracking" stage is computationally expensive, and therefore it is seeded by a more computationally efficient logic that identifies potential electron candidates. The trajectory of each GSF track is used to identify a compatible "seed" cluster of energy in the CMS electromagnetic calorimeter. Additional clusters of energy, consistent with the bremsstrahlung energy loss pattern of the electron candidate, are associated with the seed cluster as part of a "super cluster", which can be used with the tracking information to identify genuine electron candidates with high efficiency and purity. The right panel of Fig. 3 illustrates the increase in efficiency obtained with the new electron reconstruction algorithm with respect to the standard algorithm with only minimal identification quality criteria applied.
The seeding logic implements two independent models based on boosted decision trees (BDT). The first BDT provides signal-to-background discrimination based on a "kinematically agnostic" approach that exploits only tracking and calorimeter information. The second BDT provides a (model-dependent) "kinematically aware" model that also uses the p T , η, and track impact parameter of an electron candidate to discriminate signal from background. The left panel of Fig. 4 shows the ROC curves obtained for the two BDTs based on simulated B + →K + e + e − events. A loose working point is defined for each BDT that yields a 10% mistag rate while providing a factor ≈2 gain in efficiency over that obtained from the baseline seeding logic of the standard CMS algorithm. These working points were used to seed the new electron reconstruction sequence as part of the reconstruction campaign described in Sec. 4 and the electrons are available for analysis in the MINIAOD data format.
A large, high purity sample of electrons with 0.5 < p T < 10 GeV can be obtained from converted photons resulting from interactions with the beam pipe and inner tracking structures. This sample is being used to study and tune the identification algorithm for low-p T electrons. The right panel of Fig. 4 shows the vertex positions of photon conversion candidates in the transverse plane for the region |η| < 1. The structures of the beam pipe and inner layer of the CMS pixel barrel subdetector are clearly visible.
Summary
The CMS experiment has recorded and reconstructed a high-purity sample of 10 billion unbiased b hadron decays. This sample was recorded with minimal impact on the core CMS physics programme, as the strategy exploited the use of existing infrastructure, trigger algorithms, and idle resources available during the latter part of LHC fills. The data stream was parked during 2018 and processed during 2019. A new electron reconstruction algorithm was deployed as part of the processing campaign, which provides the potential for highly efficient electron identification at transverse momenta as low as 0.5 GeV. This unprecedented sample provides a unique opportunity for physics analyses in the flavour sector and beyond. | 4,034.8 | 2020-01-01T00:00:00.000 | [
"Physics"
] |
Ferromagnetic soft catheter robots for minimally invasive bioprinting
In vivo bioprinting has recently emerged as a direct fabrication technique to create artificial tissues and medical devices on target sites within the body, enabling advanced clinical strategies. However, existing in vivo bioprinting methods are often limited to applications near the skin or require open surgery for printing on internal organs. Here, we report a ferromagnetic soft catheter robot (FSCR) system capable of in situ computer-controlled bioprinting in a minimally invasive manner based on magnetic actuation. The FSCR is designed by dispersing ferromagnetic particles in a fiber-reinforced polymer matrix. This design results in stable ink extrusion and allows for printing various materials with different rheological properties and functionalities. A superimposed magnetic field drives the FSCR to achieve digitally controlled printing with high accuracy. We demonstrate printing multiple patterns on planar surfaces, and considering the non-planar surface of natural organs, we then develop an in situ printing strategy for curved surfaces and demonstrate minimally invasive in vivo bioprinting of hydrogels in a rat model. Our catheter robot will permit intelligent and minimally invasive bio-fabrication.
T he rapid development of three-dimensional (3D) printing has paved the way for myriad biomedical applications [1][2][3][4][5] . Driven by the development of implantable technology in healthcare 6,7 , there is a growing interest in directly fabricating bio-tissue and/or biomedical devices on internal organs in living animals including humans. In vivo bioprinting that is capable of seamlessly integrating in situ printed materials and devices with the human body holds great promise in human tissue engineering and human-machine interface 8,9 . Currently, in vivo bioprinting is still in its infancy with most applications at or near the skin including, for example, skin or cartilage repair by direct inkwriting 10,11 or fabrication of epidermal electrodes 12 . For the printing on the internal organs of the human body, however, a surgical operation is usually required, which in turn poses a higher risk of infection and prolonged recovery time for patients. Therefore, minimally invasive bioprinting inside the body would be highly significant, but challenges remain. For example, attempts have been made to form patterned biopolymers by using a near-infrared light-induced polymerization under the skin. But the low penetrability of the light source limits the printing depth to around 5 mm [13][14][15] . Zhao et al. used a conventional motordriven printer to directly write ink inside a chamber 16 . However, the nature of the rigid printer nozzle limits its application inside the body where tortuous anatomy is commonly encountered.
Recent advances in soft robots capable of dexterous manipulation have offered an opportunity to revolutionize surgical practice in a minimally invasive way [17][18][19][20][21][22] . For the minimally invasive operation that is characterized by a confined, easily deformable, dynamically changing environment, magnetoactive robots that can be remotely controlled to navigate hard-to-reach areas of the body have recently have garnered interest [23][24][25][26][27] . Due to the ease of untethered control, magnetic robots have broad potential applications including endovascular interventions and drug delivery to targeted lesions [28][29][30][31] . Among others, Kim et al. recently developed a ferromagnetic soft guidewire robot by uniformly dispersing ferromagnetic particles within a polymer matrix 32 . Upon magnetic actuation, such a robot can actively bend its tip and be swiftly steered through narrow and winding environments such as a brain vasculature phantom. In addition to omnidirectional steering and navigating capabilities, such a robot can be easily functionalized and integrated with other advanced technologies to permit more complicated biomedical applications.
Here, we report a ferromagnetic soft catheter robot (FSCR) system that is capable of minimally invasive in vivo bioprinting by incorporating magnetic actuation with 3D printing techniques. In the form of a slender rod-like structure with dispersed hardmagnetic particles, FSCR can reach regions inside the body using remote magnetic actuation, followed by in situ printing of functional inks (Fig. 1a) such as lesion healing creams and electrode gels. Distinct from conventional printing systems with a rigid nozzle (Fig. 1b), our FSCR feathers a magnetoactive soft nozzle that can print over a large workspace through a small incision (Supplementary Table 1). To realize steady extrusion of inks, our FSCR is rationally designed with an embedded reinforcing fiber mesh (Fig. 1c), which enables printing of various biocompatible and functional inks including silicones, silver pastes, and conductive hydrogels. A magnetic field is imposed by four numerically controlled motor-driven permanent magnets to achieve both translational and rotational motion of the FSCR (Fig. 1d). Compared with existing commercial apparatus for magnetically controllable catheters, the developed control system by employing four permanent magnets is relatively simple (Supplementary Table 2). Our FSCR can print different patterns using multiple inks on both flat and curved surfaces. We also demonstrate printing a functional hydrogel on a porcine tissue phantom and the liver of a living rat in a minimally invasive, remotely controllable, and automated manner.
Results
Rational design of FSCR. The FSCR is a slender rod-like structure with a hollow channel inside for material transport. The body of the FSCR is fabricated using an injection molding method, as illustrated in Fig. 2a. The ferromagnetic composite ink was first prepared by mixing uncured polymer resin (polydimethylsiloxane, PDMS) with evenly dispersed hard-magnetic microparticles (Neodymium iron boron, NdFeB). The composite ink was then injected into a tubular mold with a steel core wire placed at the center as the inner template. To enhance the mechanical performance of the FSCR, a polylactide (PLA) fiber mesh was inserted inside the mold. After being fully cured, the outer mold and inner wire were removed, producing a PLA fiber reinforced FSCR body with an inner channel for ink extrusion as highlighted by the green line in Fig. 2a. The details of the parameters for the PLA fiber reinforced FSCR can be found in Supplementary Fig. 1. To impart the desired magnetically responsive property of the robot, we magnetized the body by applying a strong impulse magnetic field (about 4 T) to magnetically saturate the dispersed NdFeB particles along its axial direction. When the applied magnetic field strength reaches 3 T, the residual magnetic flux density of FSCR tends to be saturated ( Supplementary Fig. 2). Unlike soft magnetic materials that easily lose the induced magnetization once the external field is removed, hard-magnetic materials such as NdFeB, once magnetically saturated, retain their remanent magnetization due to their large coercivity. Therefore, the entire body of FSCR is characterized by a magnetization along its axial direction. The magnitude of the magnetization and Young's modulus of PDMS + NdFeB composite can be readily programmed by tuning the volume fraction of NdFeB inside the composite 32 . We employed a 15 vol.% of NdFeB, which confers a magnetization of 100 kA/m after full magnetization with Young's modulus of 1.15 MPa. By employing different molds, we can easily fabricate FSCRs of various sizes that can be used in different applications (Fig. 2a) and the smallest outer and inner diameter of our FSCR can be achieved as 2 and 0.6 mm, respectively, complying with the standard of incision size in minimally invasive surgeries 33 . In addition, the cured NdFeB + PDMS composite has no toxicity based on a cell viability test in which the cell survival rate is 98.6%, suggesting high biocompatibility of our FSCR according to the standard (70% cell survival rate) of USP (ISO 10993-5) (Supplementary Fig. 3) 32,34 .
Our FSCR features an embedded PLA reinforcing mesh primarily to enhance the printing performance by restraining the lateral expansion of the printing channel when pressurized ( Fig. 2b and Supplementary Movie 1). In general, due to the ink viscosity and friction between the ink and the nozzle, the input energy dissipates during the process of ink extrusion, leading to a significant pressure loss along the channel (see Supplementary Materials for details) 35 . Thus, a steady extrusion of the printing ink requires an applied pressure in the range of hundreds of kilopascals which would result in a substantial expansion in the diameter of the printing channel without enforcement, yielding a delay time after pressure is applied till the materials is extruded. To visualize lateral expansion, we prepared printing nozzles using pure PDMS with the same Young's modulus (E = 1.15 MPa, fabricated by varying the curing temperature and base-to-curing agent weight ratio) as that of PDMS + NdFeB composite and dyed the printing ink with orange coloring (Fig. 2b, see "Methods" section for details). The performance of both reinforced and non-reinforced designs under various applied pressures was compared. Figure 2c clearly suggests that the lateral expansion, characterized by D/d where d and D are the channel diameter before and after ink extrusion, respectively, of the nonreinforced sample is much higher than the reinforced counterpart. As a result, non-reinforced samples not only show an impaired printing resolution but also have a slower flow rate ( Supplementary Fig. 4), giving rise to a markedly increased delay time during printing (Fig. 2d) 36 . By contrast, the reinforced design exhibits a small lateral expansion of 4% when the pressure is increased to 240 kPa and also maintains steady extrusion over time (Fig. 2e). Since delay time is longer when FSCRs are made of softer materials ( Supplementary Fig. 5), the PLA reinforcing mesh is required. And the mechanical testing of fabricated reinforced and non-reinforced FSCR is shown in Supplementary Fig. 6.
It is also worth noting that the presence of the reinforcing mesh has a minimal influence on the bending behavior of the FSCR when it is subjected to magnetic fields. In this regard, we compared the tip deflection δ of the FSCR when approaching a cuboidal magnet (50 × 50 × 30 mm with surface induction of about 400 mT). The experimental setup is depicted in Fig. 2f, in which the robot tip initially falls in the central line of the magnet. Note that such a cuboidal magnet is later used in building our magnetic control system. We found that the reinforced FSCR can easily bend up to δ=L ¼ 0:3 where L denotes the robot's length (Fig. 2f); this provides a large enough workspace for magnetically controlled printing. Only small differences have been observed in the bending performance between reinforced and non-reinforced cases upon magnetic actuation. Therefore, the embedded PLA reinforcing mesh provides the FSCR with steady printing performance upon magnetic actuation.
Magnetically-controlled printing system. To realize an automated printing process, we customized a magnetically controlled printing system utilizing a set of motors that can be numerically controlled by a computer. Figure 3a illustrates the printing system apparatus that consists of a fixed printing platform, an FSCR printing nozzle, and four cuboidal magnets. The normal vector of the printing platform at its center defines the z-axis along which the printing nozzle can be moved up and down by a motor. Four magnets are placed in a rectangular layout with the north pole facing inside ( Supplementary Fig. 7) and their symmetric planes (i.e., central planes) define the XY plane and XZ plane of the coordinate system, as highlighted by the yellow and green box. The motors drive the four magnets to move in a concerted fashion providing both translational displacement along xdirection and rotation about the z-axis, denoted as T mag and θ mag , respectively. To achieve steady printing, the nozzle tip is always placed at XY plane at a fixed distance from the printing plane that is equal to FSCR's printing linewidth (0.6~1 mm).
The superimposed magnetic field generated by four cuboidal magnets is nonuniform. The distribution of magnetic flux density B¼ðB x ; B y ; B z Þ for such a nonuniform field was measured using a 3D Hall probe, as presented in Fig. 3b and Supplementary Fig. 8a. In a nonuniform magnetic field, hard-magnetic materials experience both magnetic torque and body force: where M is the magnetic moment density (magnetization) of the material, B is the total magnetic field from the magnet and ∇ B describes the magnetic field gradient. The FSCR has a magnetic moment along its axial direction, i.e., negative z-direction in Fig. 3a. The primary reasons for designing such a 4-magnet control system are twofold. First, due to symmetry, the magnetic field has B x ¼ B y ¼ 0 along the z-axis and in particular B ¼ 0 at the origin. Thus, the FSCR is at equilibrium initially when the entire body is aligned with the z-axis and the tip coincides with the origin, maintaining a vertical configuration before printing. Second, placing two magnets with north poles facing each other in both x-direction and y-direction allow for more stable control of the FSCR. The translational and rotational displacement of 4 magnets as a whole will alter the magnetic field in space and thus drive the FSCR to bend and rotate, giving rise to a translational displacement, T tip , and a rotational displacement, θ tip , of its tip, respectively (Fig. 3c). The maximum value of T tip determines the effective printing workspace, which varies with the length-todiameter aspect ratio of FSCR ( Supplementary Fig. 9). It is worth noting that the tip will also undergo an upward displacement from the XY plane, denoted as U tip , as it translates outward (Fig. 3d). Therefore, to compensate for the deviation from the XY plane, the FSCR should undergo a downward displacement by the motor with a magnitude identical to U tip (Fig. 3c, d). Since the relationship between magnet displacement and tip displacement is the foundation of digitally controlled printing, we first investigate this relationship using both finite element modeling (FEM) and experiments. The body of the FSCR is made by uniformly dispersing micronsized ferromagnetic particles (~5 µm) in the soft polymer, referred to as hard-magnetic soft materials 37,38 . Applying a nonuniform magnetic field induces magnetic torques and forces on the embedded ferromagnetic particles, which produces microscopic stresses that drive the macroscale deformation. Such microscopic stresses, denoted as magnetic Cauchy stresses, can be expressed as: where F is the deformation gradient and the operation denotes the dyadic product which takes two vectors to yield a second-order tensor. Implementing the magnetic Cauchy stress in a user-defined element subroutine in the commercial finite-element software ABAQUS, we can simulate the deformation of FSCR within the nonuniform magnetic field. Note that the analytical form of such a nonuniform magnetic field was used in FEM that was derived from 39 and validated by our experimental measurements ( Supplementary Fig. 8b). Simulated results for one FSCR with a length-to-diameter aspect ratio of 25 are presented in Fig. 3f-h and are in excellent agreement with the experimental data. Correlations between magnet displacements and tip displacements are found as T mag ¼ 0:63T tip , U tip ¼ 0:0028T 2 mag À 0:007T mag , and θ tip ¼ θ mag (Fig. 3e). Note that when the printing platform is planar, the motor-driven FSCR will be pushed downward by a displacement equal to U tip to compensate for the deviation from the printing plane. For the printing of multilayered structures or on 3D non-planar surfaces, such a compensational displacement should be altered accordingly by taking into account the 3D surface morphology, i.e., the altitude variation in the z-direction. By mapping these relationships into the computational code, we can precisely manipulate the tip motion by digitally controlling the movement of magnets, thus accomplishing magnetically-driven printing of various complex structures.
Minimally invasive printing. The magnetically controlled printing system can print various patterns on both a planar surface and non-planar surfaces, or even through a minimally invasive manner. To print, the target pattern needs to be converted into catheter-path codes according to the established relationship between T tip , U tip , and T mag (see "Methods" section for printing process details). The FSCR printing system is able to print PDMS-1700 and Ecoflex composite ink (viscosity~340 Pa·s, Supplementary Fig. 10) into various patterns as demonstrated by a flower with six petals, a square spiral, a 3D tube, and a 3D scaffold (Fig. 4a, b, and Supplementary Movies 2-5) 39 . The printing process can be completed in a single stroke (e.g., the Fig. 3 Numerical control of FSCR for printing. a Schematic illustration of the magnetically-controlled printing system. b Contour plots of magnetic flux density B in XY plane (up) and XZ plane (down) of the superimposed magnetic field, corresponding to the yellow and green boxes in a. The arrows indicate the magnetic field vectors and the background color represents the magnetic field strength as indicated by the color bar in mT. c Images showing the motion control of FSCR by moving four permanent magnets with both translational displacement along the x-axis and rotational displacement about the z-axis. For translational mode: as the magnets translate in the x-axis direction (denoted by T mag ), the tip of the FSCR moves to the same direction (denoted by T tip ); A downward displacement (denoted by U tip ) compensates the lift of the XY plane during the translation. For rotational mode: the magnetic field is rotated by θ mag ; Tip of the FSCR is rotated to the same direction by θ tip . d Simulation of the translation process. Left panel: computational T tip when magnetic field translated by T mag . Right panel: computational compensation of U tip as T tip increases. The color represents the displacement magnitude. e Simulation of the rotation process. The eight states show that when the magnets rotate, the tip of the FSCR rotates at the same angle. The color represents the displacement magnitude. flower, square spiral pattern, 3D tube) or multiple strokes with different initial positions of the printing nozzle (e.g., the 3D scaffold). As presented in Supplementary Fig. 11, all printed patterns exhibit excellent agreement with the original designs, demonstrating a high printing accuracy of the FSCR. Due to the viscoelastic nature, the extruded ink usually has a die-swelling phenomenon ( Supplementary Fig. 12a), resulting in a printed fiber with a diameter αd, where d is the inner diameter of the printing nozzle and α is the swelling ratio 40,41 . The resolution of the printed fiber (i.e., αd) of our FSCR mainly depends on four parameters: the moving velocity of the nozzle, the input pressure, the inner diameter of the nozzle d, and the viscosity of the ink. As shown in Supplementary Fig. 12b, a faster-moving velocity usually stretches the printed fiber, leading to a smaller αd; while increasing the input pressure, and the size of the inner diameter and the viscosity will increase αd. Note that to ensure a continuous printed fiber without accumulation or discontinuity, the velocity of the nozzle should be well controlled 40,41 . Based on the injection molding method, the smallest inner diameter d was achieved as 0.6 mm, which yields a resolution αd of 0.53 mm at the moving velocity of 3.3 mm/s, the pressure of 240 kPa, and the viscosity of 339 Pa·s ( Supplementary Fig. 12b, c).
Given the soft and slender nature of the FSCR, it can be threaded through a small aperture and programmed to print a wireless electronic device into a spiral pattern on the bottom of a chamber with conductive ink, as shown in Fig. 4c. The conductive ink is composed of silver flakes in an alginate solution with an added trace ethanol 12 . The resistance of the composite conductive ink can be changed by varying the weight fraction of silver flakes from 68.7% to 93% in dry conditions ( Supplementary Fig. 13). The spiral conductive coil can be connected with an electronic component such as a commercial light-emitting diode (LED). When actuated by an alternating magnetic field from an electromagnetic coil, the printed spiral wire can light up the LED wirelessly through electromagnetic induction ( Fig. 4c and Supplementary Movie 6). In addition to the demonstrated printing ability, our FSCR is also capable of object manipulation in a minimally invasive manner. For example, it can deliver, move or suck out targeted materials either in liquid or in solid form with different shapes and variable weight (0.5-5 g) in confined environments ( Supplementary Fig. 14a, b). Such an object manipulation capability allows for more applications of our FSCR in the future minimally invasive operations (Supplementary Movie 7 and 8). To demonstrate the capability of drug delivery to the target lesion, we carried out an experiment in which a hydrogel containing acetylsalicylic acid (ASA) was printed to a porcine tissue and validated the released drug by UVvis spectrophotometer (see "Methods" for details, Supplementary Fig. 14c, d) 42 .
In vitro minimally invasive bioprinting. To demonstrate the potential application of the FSCR in minimally invasive bioprinting, we printed a spiral pattern on an excised porcine tissue with a naturally non-planar surface using a minimal incision in an artificial skin overlaying the porcine tissue (Fig. 5a). To print the desired pattern, we need to identify a 3D path on the curved surface of the tissue to guide the nozzle tip. Using a digital scanner (Fig. 5b), the tissue surface was first reconstructed into a 3D model with (x, y, z) coordinates data set ( Supplementary Fig. 15) from which the printing path for the desired pattern was designed. By mapping such a printing path with the control parameters (Fig. 5c), T mag and θ mag , we generated the code to guide the printing nozzle. The FSCR was inserted through a small incision (diameter~0.8 cm) in the artificial skin and a spiral pattern of a conductive hydrogel was printed at the surface of the porcine tissue along the pre-defined path (Fig. 5d and Supplementary Movie 9). The entire process was completed within 2 min. Besides, the printed conductive hydrogel was also characterized by electrochemical impedance properties (Supplementary Fig. 16) and adhesion performance on the tissue surface ( Supplementary Fig. 17).
In vivo minimally invasive bioprinting. We then evaluated the feasibility of minimally invasive bioprinting on a rat liver in vivo. First, computed tomography (CT) technology was utilized to reconstruct the 3D surface of the liver in a living rat (Fig. 6a). The reconstructed 3D model and the upper surface of the liver after extraction and smoothing are shown in Supplementary Fig. 18. A printing path was then defined on the upper surface of the liver, and the control code was generated to guide the printing nozzle (Fig. 6b). To clearly demonstrate the printing process, the rat abdomen was continuously insufflated with carbon dioxide to provide a large and stable operating space, and a digital laparoscope (diameter 0.5 cm) was inserted to record the printing process through a minimal cut in the abdomen (Fig. 6c). In this demonstration, a thinner FSCR (25 mm in length and 2 mm in diameter) was employed because of the confined space in the rat abdominal cavity. As presented in Fig. 6d, an Archimedes spiral pattern (material: conductive hydrogel) was successfully printed at the surface of the liver through a small incision (diameter 3 mm) within 70 s (Supplementary Movie 10).
Discussion
In this work, we introduced a minimally invasive ferromagnetic soft catheter robot and developed a printing system that can be remotely controlled by a computer. We have demonstrated our proof-of-concept studies by printing various patterns using different functional inks on both flat and naturally curved surfaces and succeeded in all cases. Overall, the soft catheter robot has distinct advantages when working in a confined space in a minimally invasive manner compared with conventional robots (as shown in Supplementary Table 2). For future applications [43][44][45][46][47][48] , we propose a digital control strategy utilizing magnetic actuation that would allow surgeons to complete operations away from x-ray radiation. This minimally invasive in vivo bioprinting technology is still in its infancy there will be limitations regarding printing speed, resolution, and complexity of the printed pattern. To adapt to both complex three-dimensional patterns and the confined biological environment, further optimization of the magnetic domain and the miniaturization of the body of the FSCR body will be needed 49 . In addition, a more versatile magnetic field can be designed; thus, for instance, the four permanent magnet setups can be upgraded to a 6-polar-magnet system, which allows for more freedom of control 50 . The current printing system adopts the CT to reconstruct the 3D topology of the tissue that is later used to generate the numerical code to print. In the future, utilizing the intraoperative CT and/or equipping the robot with vision-based sensors (e.g., stereovision and structured-light scanning 51 ), the real-time tomography of the tissue can be constructed, and augmented reality for real-time bioprinting can be achieved 52 . In this regard, a close-loop soft robotic system with feedback based on real-time imaging may further improve the accuracy of printing 53 . The coding system to guide the printing path can also be optimized to create even more complicated patterns and 3D architectures with high resolution. Moreover, progress on functional materials (e.g., biomaterials for gastric ulcer healing 54 and/or health monitoring 55 ) with bioprintingcompatible properties (e.g., rheology and adhesion) will enable the FSCR to print more complicated 3D patterns/architectures to the curvilinear and wet tissue surfaces. The major limitations of the potential printable materials originate from how to maintain the as-printed pattern in situ. First, similar to existing extrusionbased biomaterials, the adhesion between the printed material and target surface is critical to shaping the desired pattern. Such a consideration should be taken into account when the target surface is vertical and/or wet. In this regard, enhancing the adhesion between the printed materials and wet bio-tissue surfaces is essential for the quality of bioprinting. Second, injectable inks solidify either through liquid evaporation, gelation, or a temperature-induced or solvent-induced phase change, while minimally invasive bioprinting may not favor such a condition for solidification due to the confined anatomy environment. In particular, when printing complex 3D architectures, the asprinted structure may collapse before it cures. Therefore, reducing the solidification time of the injectable ink and/or employing biodegradable supporting mold to assist solidification is also of great significance. To this end, we envisage that our FSCR and injectable bio-inks with a high adhesive strength to the curvilinear and wet bio-tissues and fast-to-solidify properties will together pave the way for the future applications of minimally invasive bioprinting in a remote, automated, and safer manner.
Methods
Fabrication of ferromagnetic soft catheter robot. The composite ink was made by mixing the hard magnetic NdFeB particle at 15 vol. % with an average diameter of 5 μm into a PDMS matrix (base-to-curing agent at a 10:1 weight ratio, Sylgard 184 silicone, Dow Corning). To ensure homogenous particle dispersion, we stirred the mixture using a planetary mixer (rotation 200 rpm and revolution 2000 rpm, AR-100, Thinky) for 3 min. Ease Release (Ease Release 200, Mann Release Technologies, Inc.) was evenly sprayed on the core wire and mold surface to prevent the bonding with the elastomer matrix. After that, the mixture slurry was injected into a 3D mold, and a polylactide (PLA) fiber mesh was carefully inserted into the center of the mold together with a 1.0 mm diameter supporting core wire as the inner template. Next, the mold was placed in a vacuum degassing chamber for 1 h to remove the air bubbles and then cured in an oven at 37°C for 48 h. The cured soft catheter robot was magnetized by a 3850 mT impulse magnetic field generated by a digital pulse magnetizer (Beijing Eusci Technology Ltd).
PLA filaments (average diameter of 150 μm) were employed. Sixteen strands of PLA filaments were knitted into hollow tubes with 1 mm inner diameter by a highspeed automatic knitting machine (Xuzhou Hongtai Knitting Machine Technology Co., Ltd.) at a rate of 20 mm/min.
The basic fabrication process of FSCR with pure PDMS was the same as that was described above. The PDMS-184 base-to-curing agent weight ratio was 8:1. It was cured in an oven at 37°C for 24 h and then put in an oven at 80°C for 24 h after demolding.
Cytotoxicity tests. Cell survival rate was tested on HCV-29 cell line (Human bladder epithelial cells, American Type Culture Collection). HCV-29 cells were cultured in Roswell Park Memorial Institute (RPMI)-1640 medium (Boster Biological Technology Co. Ltd.) supplemented with 10% fetal bovine serum (Gibco) and penicillin/streptomycin (Boster Biotechnology Co. Ltd.). To investigate the cytotoxicity of NdFeB + PDMS composites, 3 × 103 cells per well were inoculated in 96-well plates and cultured for 24 h at 37°C and 5% CO 2 . The covering material was co-cultured with HCV-29 cells for 24 h without changing the medium. Meanwhile, untreated cells and 70% ethanol-treated cells were employed as positive and negative controls respectively. And the blank well was RPMI-1640 plus CCK-8 reagent. After removing the co-cultured material and replacing the medium, the cell survival rate was evaluated based on the CCK-8 assay according to the manufacturer's protocol (Cell Counting Kit-8, Boster Biological Technology Co. Ltd.). Briefly, 10 μl CCK-8 reagent was added to each well and incubated for 1 h at 37°C and 5% CO 2 . The absorbance of each well was measured at 450 nm by a microplate reader (Multiskan FC, Thermo Scientific). Cell survival was calculated by the following formula. where A test ; A blank ; A control is tested samples (NdFeB + PDMS composite and 70% ethanol-treated cells), blank controls (blank well), and positive controls (untreated cells), respectively.
Magnetic characterization. The magnetic moment densities of ferromagnetic particles were measured with a comprehensive physical property measurement system (Quantum Design) using the vibrating sample magnetometer option. A 7.5 mg of NdFeB powder sample was used. The temperature in the cavity was set to 310 K, the normal temperature of the human body. The maximum symmetrical magnetic field strength was set in 5000 Oe steps from 10000 Oe to 50000 Oe. The hysteresis loop was continuously measured of the same NdFeB powder sample where the magnetic field change rate was set to 200 Oe/sec with a sampling frequency of 1 Hz. The magnetic field was measured by a precision gauss meter (Multi-Dimension Magnetic Field Scanning and Imaging Test System F30, Beijing Cuihaijiacheng Magnetic Technology Co., Ltd), which was driven by a moving stage to map the spatial distribution of B in three dimensions.
Ink preparation
Biocompatible viscoelastic ink. Ecoflex (Ecoflex-0030, PART-A, Smooth On, Inc.) and PDMS-1700 (PDMS SE-1700, Dow, Inc) were mixed to form the printing material. Ecoflex-A, SE-1700 base and SE-1700 curing agent were added at a 10:10:1 weight ratio. To ensure thorough mixing of the particle dispersion, the dispersion was stirred constantly for 3 min using a glass rod. Fat-soluble dye Sudan Red III was added in an amount to aid visualization and the mixture was centrifuged at about 7155 × g for 2 min to remove air bubbles. To avoid changes in the rheological properties of the material, printing should be done promptly after preparation. Conductive silver ink. The polymer solution was prepared by dissolving 5% Alginate powder (Sigma-Aldrich) in deionized water followed by centrifuging for 2 min at a rate of about 7155 × g to remove air bubbles. The conductive ink was prepared by adding silver flakes (with an average diameter of 10 µm, Sigma-Aldrich) and ethanol into the Alginate solution in the weight ratio of 4:6:1. The square resistivity of the ink film was measured with four-point probing equipment (ST2558B-F01, Suzhou Jingge Electronic co., Ltd.). Through wireless power transmission, an alternating magnetic field was generated by an electromagnetic coil with a power of 1600 W. An LED connected to the conductive silver structures was used to demonstrate the inductive currents generated by the alternating magnetic field.
Hydrogel ink. Conductive hydrogel ink was prepared by using a physical mixing process in an aqueous solution as previously reported 56,57 . Briefly, 0.1 g Hyaluronic acid, 3 g Pluronic F127 (Energy Chemical), 1.5 g PEDOT: PSS (Clevios TM PH1000, Heraeus Electronic Materials), and 1 g Polycarbophil (Lubrizol) were dispersed in distilled water (gross weight 10 g), and stirred for 24 h in the ice-water bath to minimize foaming. The conductive hydrogel ink was used at room temperature. The two probe testing method was used to test the conductivity of the conducting hydrogel as previously reported 58,59 . Here, the gap L' between the two glass carbon electrodes, the inner diameter D' of cylindrical mold, and the diameter of electrode d' were 3 mm, 6 mm, and 3 mm, respectively. The impedance of the conducting hydrogels was recorded at 5 mV over a range of frequencies from 10 −2 to 10 5 Hz. Drug (ASA)-loaded hydrogel ink was prepared by using a physical mixing process according to refs. 42,56 . Briefly, 0.1 g Hyaluronic acid, 3 g Pluronic F127, 0.7 g ASA (Energy Chemical) were dispersed in distilled water (gross weight of 10 g) and stirred for 24 h in the ice-water bath to minimize foaming. The ASA-loaded hydrogel ink was used at room temperature. According to references 56,57,60 , all constituent materials of the hydrogel were biocompatible."
Mechanical testing
Modulus test. Samples were molded with ferromagnetic composite ink and then cut into dumbbell test specimens using a standard part cutter. The mechanical testing was subjected to standard test methods (ASTM D412) on a mechanical testing machine at a displacement rate of 4 mm/min (width: 4 mm; gauge length: 10 mm).
Mechanical testing of FSCR. The lateral load of the reinforced catheter robots and non-reinforced catheter robots (Outer diameter 4 mm, inner diameter 1 mm, length 10 mm) were tested in the same mechanical testing machine as above at a rate of 4 mm/min. Longitudinal tensile strength measurements of the reinforced catheter robots and non-reinforced catheter robots (Outer diameter 4 mm, inner 1 mm, test distance 60 mm) were tested using the same condition.
Rheology measurements. The rheological properties of the printing inks were measured via a hybrid rheometer (DISCOVERY HR-1, TA Instruments) with a 40mm diameter rotor. Complex moduli including storage modulus G′ and loss modulus G″ of inks were measured using small amplitude oscillatory shear tests over an angular frequency range of 0.1-100 rad/s with an oscillatory strain of 0.1 in the linear viscoelastic region. Apparent shear viscosity was obtained by steady-state flow tests with a logarithmic sweep of shear rate over the range of 0.1-100/s ( Supplementary Fig. 10). All rheological properties in these experiments were measured at 25°C with 120 s soak time prior to heating.
In vitro drug release study. The dyed ASA-loaded hydrogel was printed onto a piece of porcine tissue which was immersed in PBS solution (pH 7.4) 56 . The drug release testing was carried out by detecting the salicylic acid of the sampling solution using a UV-vis spectrophotometer (UV-3600 Plus, Shimadzu, Japan). The ASA solution undergoes hydrolysis and produces salicylic acid with intrinsic absorbance (peak height) at 297 nm ( Supplementary Fig. 14d).
Printing process Printing procedure. The prepared inks were loaded into the ink chamber. The chamber was then affixed to the designed printing platform that was connected to the fixed end of the soft catheter robot. The CAD pattern was converted into a printing path in modified G-code to adapt our platform to our control algorithm, in which the established functions T tip ¼ 0:63T mag , U tip ¼ 0:0028T 2 mag À 0:007T mag were included. The designed pattern was only related to T mag . The external superimposed magnetic field was applied to the ferromagnetic soft catheter robot to reorient the soft robot tip during printing. See Supplementary Video for the noncontact printing process.
Printer configuration. The hardware printer was assembled by ourselves. The extrusion of printing material was controlled by a digital pneumatic system (Nordson EFD) that was connected to the motherboard Raspberry Pi 2B via the RS232 protocol. All control programs were home written corresponding to the general G-code.
Construct a printing path on a curved surface. Cloud of surface points of the porcine tissue (the fresh porcine tissue purchased from the local slaughterhouse) and the rat liver were acquired by a laser line scanner (the scanning service was kindly provided by SCANTECH TM ) and X-ray microtomography (Trans-PET Discoverist 180), respectively. The random data was used to fit the curved surface and reconstruct the surface. For better printing, the X-ray microtomography model has been smoothed, and all the smoothed data was converted into the coordinate system-based data on our platform using commercial software (XPrograma 4.3, also provided by SCANTECH TM ). Then the fitted curve surface was sampled with raster at equal intervals of 0.1 mm. The sampled data was made into a datasheet for inquiring the height of the z-axis during printing.
In vivo animal experiments. In all animal experiments, Sprague Dawley rats, 8-10 weeks of age (Vital River Laboratories) were anesthetized with an intraperitoneal injection (2% pentobarbital, 40 ml/kg). The rat abdomen was insufflated with carbon dioxide using a needle to detach the peritoneum from the abdominal organ before performing X-ray microtomography. For the surgery, a digital laparoscope (diameter 0.5 cm) was inserted into the abdominal cavity via a minimal cut (diameter~0.8 cm) in the left abdomen, and a purse-string suture was tightened around the laparoscope to avoid leaks in the next pneumoperitoneum procedure. A small incision (diameter~0.3 cm) above the target printing location was made to insert the ferromagnetic soft catheter robot. After that, a needle connected with an adjustable carbon dioxide pump was punctured into the abdominal cavity to create a stable operating space. Then we performed the printing process on the surface of the rat liver. All the animal experiments were approved by the Animal Care and Use Committee of Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Hubei, China. The approval number was IACUC number: 2389.
Analysis and simulation. Finite element analysis was conducted by a commercial package Abaqus/Standard 2017. To account for the interaction between magnetic composite with embedded hard-magnetic particle and the external non-uniform magnetic field, we developed a user element (UEL) subroutine based on the continuum framework 37 . The magnetic field around the cubic magnet can be analytically expressed according to reference 38 . The magnetic soft catheter robot was meshed with a sufficiently large number of UEL such that during each iteration of computation, the position-dependent magnetic field B and its gradient ∇B at each element can be accurately calculated. Thereafter, the magnetic torque τ ¼ M B and force f ¼ M Á ∇ ð Þ B can be implemented by computing the magnetic Cauchy stress σ magnetic ¼ ÀB N FM where F is the deformation gradient and operator N represents a dyadic product that takes two vectors to yield a second-order tensor. All simulations are checked with convergence.
Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
The data generated in this study have been deposited in the github database under accession code https://github.com/softnano501 or can be requested from the corresponding author. | 9,041.2 | 2021-08-20T00:00:00.000 | [
"Materials Science",
"Biology"
] |
Dual disorder-driven magnetic dynamics in GdCu2 superantiferromagnetic nanoparticles
The spin dynamics in magnetically disordered GdCu2 nanoparticles, varying the nanoparticle size in the range 53 to 7 nm, has been scrutinized. Dynamic χAC susceptibility measurements have revealed the existence of dissipation at Tg = 18 K, which is associated to the spin freezing transition, for all the ensembles. Besides, the superantiferromagnetic ensembles (〈D〉≥ 24 nm) also showcase a dissipation contribution close to the vicinity of the Néel transition, TN = 40.2 K. This dissipation, which takes the form of two humps located at Td1 = 33.5(5) K and Td2 = 40.0(5) K, is associated to uncompensated antiferromagnetic moments. Time-dependent phenomena (ageing and memory effects) are only evidenced below the spin freezing transition, evidencing that solely this low-temperature disordered phase is driven by the frustration of RKKY exchange interactions. Consequently, GdCu2 nanoparticles display a dual disorder-driven magnetic dynamics, which are the one ascribed to the magnetically frustrated moments located at the nanoparticle surface; and that of uncompensated antiferromagnetic moments located within the nanoparticle core.
Introduction
Over the last years, the spin dynamics of magnetic ensembles, including mesoscopic ones, has received increasing interest. Their technological transfer, specially focused on the spintronics fields, has been the driving force motivating the experimental and theoretical efforts to clearly understand how the size reduction to the nanoscale affects the magnetic properties (Golosovsky et al. 2021;Baczewski et al. 1989;Jefremovas et al. 2020aJefremovas et al. , 2021Zhao et al. 2019).
Abstract The spin dynamics in magnetically disordered GdCu 2 nanoparticles, varying the nanoparticle size in the range 53 to 7 nm, has been scrutinized. Dynamic χ AC susceptibility measurements have revealed the existence of dissipation at T g = 18 K, which is associated to the spin freezing transition, for all the ensembles. Besides, the superantiferromagnetic ensembles (〈D〉≥ 24 nm) also showcase a dissipation contribution close to the vicinity of the Néel 1 3 Vol:. (1234567890) Within this context, antiferromagnetic (AF) nanoensembles have attracted a lot of interest in recent years, owing to their inherent assets (e.g., anomalous Hall effect, anisotropic magnetoresistance, longdistance spin-wave propagation (Marti et al. 2014;Nakatsuji et al. 2015;Lebrun et al. 2018). It is also worth mentioning that the complex magnetic arrangements found in AF materials makes them potential candidates for hosting exotic spin textures, as it is the case of skyrmions (Zhang et al. 2016;Fukami et al. 2020;Jungwirth et al. 2016;Jungfleisch et al. 2018;Everschor-Sitte et al. 2018;Park and Kim 2021). It is within this context that the determination of the spin dynamics governed by disordered and frustrated magnetic interactions is key, since the magnetic frustration arising from RKKY interactions have shown to be helpful in stabilizing these topologically protected spin textures (Greedan 2006;Zvyagin 2013;Tokura and Kanazawa 2020;von Malottki et al. 2017;Yuan et al. 2017).
Among the Rare-Earth binary alloys, the GdCu 2 bulk alloy displays an AF structure (Rotter et al. 2000b), which is maintained down to 24-nm-sized magnetic nanoparticles (MNPs), all along with the onset of a frustrated Spin Glass (SG) phase at low temperature (Jefremovas et al. 2020b). Among the R elements, Gd 3+ -ions display a half-filled 4f shell, implying absence of spin-orbit interaction. This leads Gd 3+ to be in an intermediate situation between that of 3d and 4f magnetism. The S-state of Gd 3+ also implies the lack of magnetocrystalline anisotropy for this ion, which certainly eases theoretical modeling and calculations. Thus, Gd 3+ has been more and more included in compounds to understand their magnetism at the basis, as it has been the case of the recent study reported by P.G. Welch et al., where the spin dynamics of the Heisenberg pyrochlore antiferromagnet Gd 2 Pt 2 O was unraveled (Welch et al. 2022). Furthermore, Gd 3+ displays, along with Tb 3+ , one of the greatest effective magnetic moments (μ eff = 7.94μ B ), which constitutes another asset to determine subtle modifications in the interaction among the magnetic moments. This has been useful, for instance, to reveal the existence of two-length scales reported for nanocrystalline Gd (Döbrich et al. 2009), or, more recently, to reveal how the RKKY magnetic interactions were altered by the size reduction (Jefremovas et al. 2020b). In this latter work, Jefremovas et al. have determined 24 nm to be the limit size for the bulk AF state to survive within the nanoparticle core, together with a surface SG phase, building the so-called superantiferromagnetic (SAF) state. The RKKY interactions of smaller nanoparticles are unable to establish a collective AF ordered state, and a Super Spin Glass (SSG) arrangement is settled instead. This lead GdCu 2 to display a particular magnetic order/disorder configuration depending upon the nanoparticle size. Indeed, a detailed study of the underlying spin dynamics is key to understand the modifications of the RKKY exchange interactions at the nanoscale.
Hence, in the present work, we carried out dynamic χ AC susceptibility measurements, both in the temperature and frequency domains (T,f), and in the time domain (t), to unravel the nature behind the magnetic order/disorder phases found in GdCu 2 MNPs. Complementary, specific heat c P measurements were analyzed to better determine the nature of the magnetic transitions. The present characterization is not so commonly found in the literature, and supplements the static picture of the magnetic properties of GdCu 2 MNPs reported in Jefremovas et al. (2020b).
The magnetic characterisation was performed by means of dynamic χ AC (t,f,T) (time, frequency and temperature, respectively) measurements. These were carried out in two QD-MPMS (SQUID) magnetometers, one of which is located at the Uppsala University (Sweden), and the second one, at the Universidad de Cantabria (Spain). The ensembles of MNPs were measured in different temperature ranges between T = 5-300 K. Oscillating fields μ 0 h = 0.1 and 0.313 mT, and frequencies ranging from 0.17 Hz to 2 Hz were employed. To probe memory effects and ageing phenomena, several protocols can be found in the literature (Nordblad and Svedlindh 1998;Jönsson et al. 2000;Jonason et al. 1998;Joshi et al. 2020;Svedlindh et al. 1992). Here, we have probed both time-dependent phenomena by tracing the out-of-phase ′′ component of the dynamic χ AC susceptibility, as it allows to detect in more detail the subtleties concerning the spin dynamics (Nordblad and Svedlindh 1998;Svedlindh et al. 1992;Jefremovas et al. 2022). Shortly, memory effects have been probed from the difference between the out-of-phase ′′ component measured (i) during cooling ( ′′ cooling ), and (ii) upon warming ( ′′ warming ). There is a difference between the measuring protocol used while cooling and warming. This way, during the cooling, a stop is made at T ag = 15 and 30 K for t > 10 3 s. During this time, the system gets aged, and a particular spin disorder configuration (domain) can be settled. Then, the warming curve is measured without making any stop. The memory effect is then evidenced by the occurrence of a drop in �� warming − �� cooling at temperatures slightly below T g . Coming to the ageing phenomena, these can be easily detected by the inspection of the ′′ vs. t dependency. Moreover, the robustness of the SG-like frustrated interactions has been further investigated by applying a temperature cycling protocol. This consists of measuring the ′′ vs. t dependency at a certain temperature T ag within the SG phase (in our case, T ag ∼ 0.8T g ) for a sufficiently long period of time ( t ∼ 10 3 s). After the waiting time, the temperature is raised to T ag + ΔT. In the case of the present work, ΔT was selected such that T ag + ΔT was 0.85T g and 1.1T g . Then, the temperature is lowered back to T ag , and �� (t) is measured again for t ∼ 10 4 -10 5 s. This cycling protocols mimics the one reported in ?svedlindh1992time,ageing_Eli ().
Finally, heat capacity (c p ) measurements were performed using a QD-PPMS instrument (Universidad de Cantabria) in the temperature range T = 2-300 K. The measurements were collected in the absence of magnetic field. Measurements were performed following the relaxation method (Bachmann et al. 1972), and the data analyses follows the surface-core separation already detailed in Jefremovas et al. (2021Jefremovas et al. ( , 2022.
Results and discussion
Dynamic magnetic susceptibility vs. temperature Figure 1a and b showcase the χ AC (T) components (inphase ′ and out-of-phase ′′ , respectively) measured at the frequency of f = 0.17 Hz for the 6 ensembles of MNPs. There, the occurrence of a cusp in both in-phase ( Fig. 1a) and out-of-phase components ( Fig. 1b) is clearly noticeable in the low-temperature region (T < 25 K). This has been interpreted as the onset of a SG-like phase, whose freezing dynamics has been characterized in great detail in Jefremovas et al. (2020b). Therefore, from herein, we will focus on the high temperature region, i.e., the one close to the Néel transition.
Therefore, in Fig. 1a, a peak associated with the AF transition is found at T N = 40.2 K for MNPs larger than 24 nm, being absent for the smallest ones (13 and 9 nm). The temperature value of this peak does not evidence a size-dependence, as expected (Coey 2010). Paying attention now to the out-of-phase component displayed in Fig.1b, additionally to the onset of a low-temperature peak (T g ≈ 18 K), linked to the SG phase, a dissipation contribution is clearly detectable in the temperature range between 30 and 40 K, but only for the case of 〈D〉≥ 24 nm MNPs. More precisely, this dissipation takes the form of two humps, located at T d1 = 33.5(5) K and T d2 = 40.0(5) K for all these superantiferrromagnetic (SAF) MNPs. The peaks broaden and reduce in magnitude with the size reduction, being completely wiped out when the limit size for AF order to remain is overcome (i.e., below 24 nm). The observation of dissipation connected to AF order is totally unexpected, as these should not exhibit, in principle, any contribution to the out-of-phase component (Coey 2010). There is, however, one scenario where dissipation can be expected for AF MNPs, and it is found within the context of uncompensated magnetic moments associated to antiferromagnetic domains. This interpretation follows the same ideas previously proposed for NiO AF grain boundaries (Takano et al. 1997), and it is congruent with the static M DC (H) characterization described in Jefremovas et al. (2020b), where the evolution of the coercive field H C with MNP size 〈D〉 reached a maximum for 24-nm-sized MNPs. A simple estimation based on the AF unit cell, which comprises 3 crystallographic-ones along b direction (Rotter et al. 2000a), leads to an AF correlation length of 3 b ∼ 2.1 nm. Hence, the 24-nm-sized MNPs, for which a core size of ⟨D⟩ ∼ 20 nm can be estimated (Jefremovas et al. 2021(Jefremovas et al. , 2022Rojas et al. 2007), are large enough to host up to 11 complete AF unit cells and the AF grain boundaries within. The fact that this dissipation takes the form of two well-defined and separated cusps is in further agreement with the AF structure of GdCu 2 .
According to neutron diffraction and muon spectroscopy resonance analyses (Rotter et al. 2000a;Gygax et al. 2002;Rotter et al. 2000b), GdCu 2 arranges into a commensurate AF structure carrying non-collinear cycloidal propagation. There are two different domains possible, one with a left-handed and another with a right-handed cycloid. Each of these two domains should carry frequency-dependent dissipation, a fact that can be observed in Fig. 1c for the 33-nm-sized MNPs, as there is a progressive decrease of the ′′ component at T d1 and T d2 with increasing f. Furthermore, the trace of a high-temperature double-peak signature is also traced by the nonlinear ′ 3 response, included in Fig. 1d. The lack of these two peaks also in the non-linear response corresponding to the SSG MNPs (13-and 9-nm-sized) necessarily connects this high-temperature dissipation to the AF structure.
Time-dependent phenomena: memory effects and ageing
Ageing and memory effect phenomena are intimately connected to the out-of-equilibrium dynamics of nonergodic systems (SGs), thus proving the existence of highly correlated RKKY-frustrated spins (Svedlindh et al. 1992;Nordblad and Svedlindh 1998 Binder and Young 1984;Jonsson et al. 1995;Mydosh 2014). Figure 2a depicts the temperature dependence of out-of-phase component, measured while cooling down and letting the system stay at T ag = 15 and 30 K for t > 10 3 s; and upon warming without making any stop. The inspection of the �� warming − �� cooling difference showcased at the bottom inset allows to clearly detect memory effects below T < 15 K, as a drop in the difference is observable at temperatures below T ag = 0.83 T g = 18 K. This effect, which is triggered by the out-of-equilibrium dynamics ascribed to the freezing transition, shows up for all MNPs except for the 53-and 43-nm-sized ones. The lack of memory effects at the larger MNPs reveals that the AF-coupled core is still robust towards the magnetically frustrated surface phase. By comparing the drop in the �� warming − �� cooling difference, two dif-ferent trends can be deduced: (i) for the SAF MNPs (33 and 24 nm), the size reduction seems to reduce the memory effects, as the �� warming − �� cooling difference is broader for the 18 nm ensemble as compared to the 33-nm-sized one. This is congruent with the stated more robust SG phase of the 33-nm-sized MNPs (Jefremovas et al. 2020b), as the RKKY exchange interactions are less affected by the microstrain. Then, once the AF order is destroyed, and a SSG state is settled, (ii) memory effects get more intense as the nanoparticle size is reduced. Accordingly, the SSG MNPs (9-nm-sized) display stronger memory effects than those showcased by the 13-nmsized ones. Both SSG ensembles display stronger memory effects compared to the SAF ensembles. This can be deduced from the sharpness of the This drop is displayed in the bottom inset, which also includes the difference corresponding to 24-, 13-and 9-nm-sized MNPs. The �� warming − �� cooling is divided by the respective ′′ Tw for each nanoparticle size to compare the drop accounting for the memory effects. The top inset displays the ′′ vs. t curves recorded at T ag = 30 K. b) and c) display the cycling protocol results of 33-nm-sized and 9-nm-sized MNPs, respectively. The relaxation is measured with f = 0.2 Hz at T ag = 15 K before and after the temperature cycling. It can be seen that the SG state is completely reborn when the rise is of T ag + ΔT = 1.06T g in both MNP ensembles. Inset in c) compares the relaxation measured at T ag = 12, 15 and 18 K for the 9-nm-sized MNPs the top-right inset in Fig. 2a evidences the lack of a clear time-dependence associated to the high temperature dissipation (compare with Fig. 2b and c, where the time-dependence obeys the expected decay for spin glasses at several T ag < T g ). This backs up the idea of a non-frustrated disordered phase, which is congruent with uncompensated AF moments, as it has been explained in the previous section.
In order to test the robustness of the SG phase, temperature cycling have been performed. To this aim, the MNPs were aged for t ∼ 10 3 s at T ag = 15 K. Then, the temperature was rose to T ag + ΔT ≈ 0.88, 0.97 and 1.06 T g and cycled back to T ag . Immediately, the magnetization was recorded for t ∼ 10 4 s. Figure 2b and c show results for these measurements performed in 33 nm (SAF) and 9 nm (SSG) sized MNPs, respectively. As it can be seen, both MNP ensembles achieve a completely reborn SG landscape (rejuvenation) when the cycling step is performed above T g (T + ΔT ≥ 1.06T g ), as the ′′ post-cycle (t > 10 3 s) is equal to the former ′′ pre-cycle (t = 0s). Also, in both MNP ensembles, the smaller the ΔT, the slower the relaxation towards equilibrium, which indicates that larger free-energy barriers are built. This reveals that the domains of correlated spins are larger (Jonason et al. 1998).
If we stick close to the rejuvenation limit by paying attention to the cycle performed at T + ΔT = 0.97T g , it can be observed how, despite the different global states (SAF and SSG), the SG freezing dynamics corresponding to 33-and 9-nm-sized MNPs behave in a very similar fashion. In this way, the �� ∕ �� (t = 0) value at both 33 nm and 9 nm MNPs after the temperature cycle is around �� ∕ �� (t = 0) = 0.98, whereas it is already �� ∕ �� (t = 0) = 1 (fully recovered) for 24and 13-nm-sized ones (not shown). This implies that the domains built for the former (33 and 9 nm) are larger, as the particular SG configuration is not completely reborn after the cycling. The reason beneath this feature further supports the static picture, from which it was deduced that the most interacting SG phase is settled for 33-nm-sized MNPs (Jefremovas et al. 2020b). Once the SSG state is established, the smaller the MNPs, the more robust the frustrated interactions among the spins.
Specific heat measurements
Specific heat analyses have been carried out separating the contributions stemming from the core and the surface, following the same procedure as the one published in Jefremovas et al. (2021Jefremovas et al. ( , 2022. To this aim, the experimental specific heat c P is assumed to be the result of the lattice contribution c lattice (formed by the addition of the electronic and the phononic ones, i.e., c el + c ph ), plus the magnetic c mag . In the case of GdCu 2 , since Gd 3+ are S-state ions, no crystalline electric field (CEF) contribution is present in the GdCu 2 specific heat. Each core and surface contributions are weighted by the geometrical core and surface-to-volume ratios, respectively, which have been estimated in the same way as in precedent works (Jefremovas et al. 2021(Jefremovas et al. , 2022. Accordingly, the core and surface contributions are N c = 2.0 (33 nm), 1.0 (13 nm) and 0.9 (9 nm), and N s = 3 − N c , respectively, as the number of atoms is N = 3). Therefore, the specific heat is modeled following: Following this procedure, the green line in Fig. 3a represents the obtained c lattice for 33-nm-sized MNPs. Values of γ c = 6.7 (2) mJ (molK 2 ) -1 and c D = 277(3) K have been obtained for this particular size, which agree well with the ones obtained for bulk alloy (not shown), and reported for polycrystalline bulk GdCu 2 (Podgornykh and Kourov 2007). The obtained γ s and s D values for all the MNP sizes are listed in Table 1. There, it can be seen that both parameters increase with the size reduction, a fact that can be understood according to the increasing surface disorder and surface-to-core ratio.
The inset of this figure shows the magnetic entropy S mag against the temperature. This S mag has been obtained according to: The experimental magnetic entropy (18 J/mol⋅K) and the calculated S theo mag (300K) = R[ln(2J + 1)] = 17.29 J/mol⋅K are in good agreement. The value of S exp mag is already ≈ 18 J/mol⋅K at T = 100 K. This indicates that the energy levels are fully populated at a lower temperature than expected. This trend, which holds for all the GdCu 2 MNPs, may be indicative of the existence of quadrupolar and/or higher order interactions (Luong and Franse 1995;Morin and Schmitt 1990).
(1) Figure 3b depicts the c mag vs. T dependency for bulk, 33-, 13-and 9-nm-sized MNPs. It is worth noting the two sources of c mag evidenced by the bulk and 33-nm-sized alloys. Accordingly, the AF-coupled magnetic moments give rise to a λ-like peak anomaly, located at T ∼ 40 K. This contribution should show a slight left-shift with the size reduction, owing to the reduction of the amount of AF-coupled moments (García-Saiz et al. 2014;Chevalier et al. 2006). Nevertheless, no shift is found in our GdCu 2 alloys, as it was also the case of the M DC (T) curves shown by Jefremovas et al. in (2020b). This underlines, in agreement with the dynamic χ AC (t,f,T) characterization, the robustness of the AF order. Additionally to the AF λ-anomaly, at temperatures below T N , a broad hump, which can be ascribed to a Schottky-like contribution (Gopal 2012), emerges for these bulk and 33-nm-sized MNPs. In the present case of GdCu 2 , the occurrence of this Schottky contribution should be ascribed to the Zeeman splitting of the eightfold degenerate energy level, rather than to CEF effects (as Gd 3+ is L = 0) or spin wave propagation (Luong and Franse 1995;Luong et al. 1985). Furthermore, the fact that the excess of c mag drops to zero for T > T N rules out the possibility of a CEF-motivated contribution to c P (Gopal 2012), and it is congruent with the results reported for GdCu 2 single-crystal (Koyanagi et al. 1998) and for GdCu x bulk antiferromagnets (Podgornykh and Kourov 2007). Both contributions (AF λ-anomaly and Schottky contributions) are so close to each other that they overlap, resulting in a single broad cusp, rather than in two separated signatures.
On the other hand, the c mag (T) dependency of the SSG 13-and 9-nm-sized MNPs evidences a broad cusp at T ∼ 31 (13 nm) and T ∼ 27 K (9 nm), with a tail (asymptotic decrease to zero of the c mag ) that extends up to T ∼ 100 K. The cusp intensity is reduced with respect to the one of bulk and 33 nm MNPs, and it is also found at lower temperatures. The interpretation of the c mag of these SSG MNPs is very alike to the one already commented for bulk and 33-nm-sized MNPs. Even though the AF interactions are not strong enough to give rise to a collective well-defined ordered state within these SSG MNPs, they still exist within the sample, as they are a basic requirement for magnetic frustration (Mydosh 2014 of the MNPs give rise to an effective local field, which splits the multiplets (Zeeman splitting), resulting in a contribution to c mag . Of course, as the AF order interactions are further diminished by reducing the MNP size (increasing disorder), this splitting gets smaller, thus, the hump shifts towards lower temperature value. Finally, the c mag asymptotically decreases to zero, a fact that contrasts with the drop observed for bulk and 33-nm-sized MNPs. This tail shall be ascribed to the SG-frustrated moments, which may give rise to (tiny) contributions to the specific heat at higher temperatures (Martin 1979;Mydosh 2014;Gopal 2012;Arons et al. 1994).
Conclusions
The dual magnetic disorder dynamics in ensembles of GdCu 2 magnetic nanoparticles (53-to 7-nm-sized) has been unraveled. On the one hand, at low temperature ( T g ≲ 18 K), an interacting SG phase is well-established, evidencing rejuvenation and memory effect phenomena. This frustrated phase comes as a result of the size reduction to the nanoscale, driven by (i) the surface frustrated moments for the case of SAF nanoparticles (53-, 43-, 33-and 24-nm-sized), and (ii) the whole nanoparticle, for the case of SSG-ones (13 and 9 nm). On the other hand, a high temperature ( T d1 ∼ 33 K and T d2 ∼ 40 K) non-frustrated disordered phase has been found only for the SAF nanoparticles. This dissipation, which does not evidence a time dependency of the AC susceptibility, and therefore magnetic ageing, is not present for the SSG nanoparticles. This necessarily connects the high-temperature dissipation to a disorder driven by the uncompensated AF moments, found at the AF grain boundaries. The observation of two cusps for this hightemperature disorder is intimately related to the cycloidal propagation of the GdCu 2 magnetic structure, which includes left and right-handed domains. Our results demonstrate the importance of ageing and specific heat as powerful tools to unravel the subtleties concerning the spin dynamics in magnetic GdCu 2 nanoparticles.
Funding Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. This work has been financially supported by Spanish MCIU MAT2017-83631-C3-R project. EMJ work was supported by 'Beca Concepción Arenal' BDNS: 406333 (Gobierno de Cantabria-Universidad de Cantabria).
Data availability
The raw/processed data required to reproduce these findings cannot be shared at this time due to technical or time limitations.
Conflict of interest
The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/. | 5,955.6 | 2022-09-21T00:00:00.000 | [
"Physics"
] |
Markovian loop soups: permanental processes and isomorphism theorems
We construct loop soups for general Markov processes without transition densities and show that the associated permanental process is equal in distribution to the loop soup local time. This is used to establish isomorphism theorems connecting the local time of the original process with the associated permanental process. Further properties of the loop measure are studied.
Introduction
A Markovian loop soup is a particular Poisson point process L on paths associated to a Markov process X. It is determined by its intensity measure µ which we refer to as loop measure. Loop measure for Brownian motion was introduced by Symanzik in his seminal paper [24] on Euclidean quantum field theory, where it is referred to as 'blob measure', and is a basic building block in his construction of quantum fields. Brownian loop soup was introduced by Lawler and Werner [20], in their work on SLE and conformally invariant processes in the plane. Le Jan extended the notion of loop soups to other Markov processes [12], and this has been generalized further in [14,15]. In all this work the loop measure is constructed using bridge measures for X. This requires that X have transition densities. The main point of this paper is to show how to construct loop measures and hence loop soups for Markov processes which have potential densities but not transition densities.
Our motivation in studying Markovian loop soups is to better understand the wonderful and mysterious Isomorphism Theorem of Dynkin, [7,8], which connects the family of total local times L = {L x ∞ , x ∈ S} of a symmetric Markov process X in S with the Gaussian process G = {G x , x ∈ S} of covariance u(x, y). (When X is symmetric, u(x, y) is positive definite.) Actually, in the Isomorphism Theorem it is the family of squares of G, that is x , x ∈ S}, which is connected with L. This theorem is not an isomorphism in the usual sense, but the connection between L and G 2 is sufficiently tight that it has been used to derive many new properties of the local times, as described in [22]. This is why we consider the Isomorphism Theorem to be wonderful. We call it mysterious because it is hard to see intuitively why there should be any connection between Markov local times and Gaussian processes.
As noted by Le Jan,[13,Theorem 9], loop soups offer a deep understanding of this connection. Recall that each realization of L is a countable collection of paths ω. Set We call L x the loop soup local time at x. A simple application of the Palm formula for Poisson point processes provides a connection, an Isomorphism Theorem, between L = {L x ∞ , x ∈ S} and L = { L x , x ∈ S}. Since L is defined in terms of local times of X this should not be surprising. What may be surprising is that when X is symmetric then L = { L x , x ∈ S} has the distribution of G 2 = {G 2 x , x ∈ S}! Furthermore, the definition of (1.1) of L does not require the symmetry of X, so we obtain an Isomorphism Theorem for non-symmetric X.
In 1997, D. Vere-Jones, [26], introduced the α-permanental process θ := {θ x , x ∈ S} with kernel u(x, y), which is a real valued positive stochastic process with joint distributions that satisfy u(x j , x π(j) ), (1.2) for any x 1 , . . . , x n ∈ S, where c(π) is the number of cycles in the permutation π of [1, n]. In addition, by [26, p. 128], the joint moment generating function of (θ x 1 , . . . , θ xn ) has a non-zero radius of convergence. Consequently, an αpermanental process is determined by its moments. It is not hard to show that in the symmetric case G 2 /2 = {G 2 x /2, x ∈ S} is a 1/2-permanental process with kernel u(x, y), the covariance of G.
In [9], Eisenbaum and Kaspi were able to show the existence of an αpermanental process with kernel u(x, y) whenever u(x, y) is the potential density of a transient Markov process X, and use this to obtain an Isomorphism Theorem for non-symmetric X, where the role played in the symmetric case by the Gaussian squares G 2 is now played by a permanental process. In this paper we will see that the loop soup local time L is an α-permanental process with kernel u(x, y).
The advantage of using loop soups to construct permanental processes and obtain Isomorphism Theorems is two-fold. First, as mentioned, loop soups provide an intuitive understanding of the connection between permanental processes and local times. Second, this approach is capable of great generalization. Recent work, [14,15], uses loop soups for Markov processes with potential densities u(x, y) which may be infinite on the diagonal. In this case there are no local times and no permanental processes. Rather, loop soups are used to prove the existence of permanental fields (indexed by measures rather than points in S) with which to establish Isomorphism Theorems: for continuous additive functionals in [14], and for intersection local times in [15]. We know of no way other than using loop soups to prove the existence of permanental fields associated with not necessarily symmetric X, and the Isomorphism Theorems contain constructs which seem inaccessible without the loop soup. For example, in the symmetric case the Isomorphism Theorems contain random variables which are not in the associated Gaussian sigma field.
Here is an outline of this paper. The loop measure is constructed and studied in Section 2. In the short sub-section 2.1 we show that when transition densities exist, our definition of loop measure agrees with the standard definition using bridge measures. In Section 3 we introduce the loop soup and quickly show that the loop soup local timeL is an α-permanental process with kernel u(x, y). In the short Section 4 we use the Palm formula to prove our Isomorphism Theorem. Further properties of the loop measure are derived in Sections 5-7. These include invariance under loop rotation, and the behavior of the loop measure under restriction and space-time transformations. Here again the novelty is in deriving these properties in great generality and without the assumption of transition densities.
where d is a metric for the topology of S, and f ǫ is a continuous function supported on [0, ǫ], and define L y t (ω) = lim inf n→∞ t 0 f n −1 ,y (X s ) ds. (This is used to show measurability in y).
Under our assumption that u(x, y) is continuous, it follows as in the proof of [22,Lemma 3.4.3], that uniformly in x, u(x, y) as a function of y is locally bounded and continuous. This implies that for any β > 0, the same is true for and it follows from the resolvent equation that for each x, u β (x, y) is a density for U β (x, dy) with respect to m(dy). It then follows from the resolvent equation that for any α, β > 0 and all x, y (2.3) Using (2.2) and the resolvent equation for additive functionals we see that We now show that m(K) < ∞ for each compact K ⊆ S. To see this note first that from (2.4) and our assumption that u(x, y) > 0 for all x, y ∈ S, that also u 1 (x, y) > 0 for all x, y ∈ S. It then follows from the last paragraph that y → u 1 (x, y) is bounded below for y ∈ K by a constant C = C(x) > 0. Consequently We may take the canonical representation of X in which Ω is the set of right continuous paths ω in S ∆ = S ∪∆ with ∆ / ∈ S, and is such that ω(t) = ∆ for all t ≥ ζ = inf{t > 0 |ω(t) = ∆}. Then X t (ω) = ω(t). We define a σ-finite measure µ on (Ω, F) by for all F-measurable functions F on Ω. Here k t is the killing operator defined by k t ω(s) = ω(s) if s < t and k t ω(s) = ∆ if s ≥ t. We call µ the loop measure of X because, when X has continuous paths, µ is concentrated on the set of continuous loops. See also Lemma 2.4 below. Even if X is not assumed to have continuous paths we can verify that µ is concentrated on To see this, note first of all that since 1 {ζ=∞} • k t = 0 for each t, it is clear from (2.6) that µ(ζ = ∞) = 0. Then, since L x t is constant for t ≥ ζ, while on But by right-continuity of paths, the set of times for which X t− (ω) either fails to exist or exists but is different from X t (ω) is at most countable, for each ω ∈ Ω, [4, IV, Theorem 88D], while L x t is continuous in t so that d t L x t has no atoms. Hence (2.7) where the last equality used the fact that d t L x t is supported on {X t = x}. As usual, if F is a function, we often write µ(F ) for F dµ. u(y k , y π(1) ) · · · u(y π(k−2) , y π(k−1) )u(y π(k−1) , y k ), (2.9) where P k−1 denotes the set of permutations of [1, k − 1]. When k = 1 this means µ (L y 1 ∞ ) = u(y 1 , y 1 ).
Let Q x,y denote the measure defined on (Ω, F) by We remark that under the measures P x/h = 1 u(x,y) Q x,y , the paths of X are conditioned to hit y and die on their last exit from y. P x/h is the h-transform of P x , with h(x) = u(x, y)/u(y, y) = P x (T y < ∞).
In the proof of the Isomorphism Theorem we will need the following Lemma.
Comparing (2.26) with y = x and (2.9) we see that (2.24) holds for all polynomial H. But it is easily seen from (2.9) and (2.26) that the random variables L z ∞ are exponentially integrable both under Q x,x and µ (L x ∞ ·), hence finite dimensional distributions are determined by their moments.
Since ζ • k t = ζ ∧ t, we note for future reference that In the sequel we will use the fact that (t, x) → L x t (ω) is an occupation density with respect to m: for all t ≥ 0 and all non-negative Borel functions f , almost surely. It suffices to prove this for f ≥ 0 which are continuous and compactly supported. This case follows from the proof of [22, Theorem 3.7.1], with one change. That theorem assumed the joint continuity of L x t in order to show that the right hand side of (2.28), which we denote by A t , is a CAF. But this can be seen directly. A t is obviously monotone increasing in t and constant for t ≥ ζ. Also, using (2.5) s. Hence the a.s. continuity of A t follows from the dominated convergence theorem after applying Fubini to the fact for each for each x ∈ S, a.s. in ω. Hence by Fubini this holds a.s. in ω for a.e. x ∈ S. From the right hand side of (2.28) we then see that a.s. in ω, A s+t (ω) = A s (ω) + A t • θ s (ω), which completes the proof that A t is a CAF, and hence the proof of (2.28).
Transition densities
For this subsection only, we assume that P t (x, dy) ≪ dm(y) for each t > 0 and x ∈ S; in other words, P t (x, dy) has transition densities with respect to m. Under this assumption we give an alternate description of the loop measure. This is the description found in the literature. Using this description we give a simple proof of the fact that the loop measure is invariant under loop rotation. A proof of this fact without assuming transition densities is given in Section 5. The material in this sub-section will not be used in the following sections of the paper.
Under our assumption that P t (x, dy) has transition densities with respect to m, it follows from [6] that we can find jointly measurable transition densities p t (x, y) with respect to m which satisfy the Chapman-Kolmogorov equation Assume that p t (x, y) < ∞, for all 0 < t < ∞ and x, y ∈ S. It then follows as in [10] that for all 0 < t < ∞ and x, y ∈ S, there exists a finite measure Q x,y for all F s ∈ F s with s < t. In particular, for any 0 < t 1 ≤ · · · ≤ t k−1 ≤ t k < t and bounded Borel measurable functions for all F measurable functions F on Ω.
Proof of Lemma 2.3 Let us temporarily use the notation µ(F ) to denote the right hand side of (2.33). It suffices to show that µ Note that this last condition implies that Similarly, using the Markov property For y, x ∈ S, we define the measure Γ y,x (·) on [0, ∞) with cdf: We claim that for each y, we can find a set S y ⊂ S with m(S y ) = 0 such that for all t and x ∈ S c y . Then by (2.38), for any t k and all x ∈ S c Since the right hand side of (2.36) involves a dm(x) integral, by Fubini we can replace the term E y k ∞ t k 1 t d t L x t−t k which appears there with the left hand side of (2.40). Thus we will obtain Comparing with (2.35) then shows that µ(F ) = µ(F ). It only remains to verify our claim concerning (2.39). Note that since the left hand side of (2.39) is continuous in t and the right hand side is monotone, it suffices to find a set S y which works for all rational t, hence for each fixed t. By the occupation density formula (2.28) Since this holds for all bounded measurable f , our claim for fixed t is established.
For later use we note that applying the Chapman-Kolmogorov equation The next result justifies our calling µ the loop measure even for a process with discontinuous paths. This result will be proved in full generality in section 5. Define the loop rotation ρ u by Here, for two positive numbers a, b we define a mod b = a− mb for the unique . . , f k be bounded Borel measurable functions on S ∪ ∆ with f j (∆) = 0, j = 1, . . . , k. Fix some t and u. Since Integrating both sides with respect to dm(x) and applying the Chapman-Kolmogorov equation (2.30) we obtain where in the last line we used Comparing with (2.43) we obtain our Lemma.
The loop soup
Let Ω be the path space for X described after (2.5). For any α > 0, let L α be a Poisson point process on Ω with intensity measure αµ. Note that L α is a random variable; each realization of L α is countable subset of Ω. To be more specific, let Then for any disjoint measurable subsets A 1 , . . . , A n of Ω, the random variables N (A 1 ), . . . , N (A n ), are independent, and N (A) is a Poisson random variable with parameter αµ(A), i.e.
The Poisson point process L α is called the 'loop soup' of the Markov process X. The term 'loop soup' is used in [20], [19] and [17,Chapter 9]. In [12] L α is referred to, less colorfully albeit more descriptively, as Poissonian ensembles of Markov loops. See also [25] and [21]. We define the 'loop soup local time', L x , of X, by The next theorem is given for associated Gaussian squares in [13,Theorem 9].
Theorem 3.1 Let X be a transient Borel right process with state space S, as described in the beginning of this section, and let u(x, y), x, y ∈ S denote its potential density. Let { L x , x ∈ S} be the loop soup local time of X. Then { L x , x ∈ S}, is an α-permanental process with kernel u(x, y).
Proof By the master formula for Poisson processes, [16, (3.6)], Differentiating each side of (3.4) with respect to z 1 , . . . , z n and then setting z 1 , . . . , z n equal to zero, we get where the second sum is over all partitions B 1 , . . . , B l of [1, n]. Using (2.9) it is easily seen that this is u(x j , x π(j) ). (3.6) Let θ = {θ x , x ∈ S} be an α-permanental process with kernel u(x, y), x, y ∈ S, as considered in Theorem 3.1. Clearly, by our loop soup construction, θ is infinitely divisible. In [9, Corollary 3.4], Eisenbaum and Kaspi show that the Lévy measure of {θ x , x ∈ S} is given by the law of for any y ∈ S. However it follows from Theorem 3.1 that the loop measure αµ is the Lévy measure of {θ x , x ∈ S}. Therefore for any y ∈ S. This fact is also an immediate consequence of Lemma 2.2.
The Isomorphism Theorem via loop soup
For our Isomorphism Theorem we will need a special case of the Palm formula for Poisson processes L with intensity measure n on a measurable space S, see [2,Lemma 2.3]. This says that for any positive function f on S and any measurable functional G of L Proof We apply the Palm formula with intensity measure αµ, and Note that Then by (4.1) so that our Theorem follows from Lemma 2.2.
Invariance under loop rotation
In subsection 2.1, assuming the existence of transition densities, we gave a simple proof of the fact that the loop measure is invariant under loop rotation. In this section we give a proof of this fact without assuming transition densities. This proof is considerably more complicated.
Because the lifetime ζ is rotation invariant (ζ(ρ v ω) = ζ(ω) so long as ζ(ω) < ∞), the rotation invariance of the loop measure µ is equivalent to that of the measure ν defined by ν(F ) := µ(ζF ). By (2.27) and (2.22) we have The measure ν is more convenient for the calculations that follow, because of the following formula, where Γ x,y is defined in (2.37): as measures on the product space S k × (t k , ∞). Furthermore, with t 1 = 0, (5.2) holds for all 0 < t 2 < t 3 < · · · < t k .
Proof of Lemma 5.1: Using (5.1) we see that Using the Markov property and (2.4) we see that Here P β t (x, dy) = e −βt P t (x, dy). Using (2.38) and then the Markov property as in the previous display We claim that for a.e. t 1 , as measures in y 1 , To see this, it suffices to integrate both sides with respect to e −αt 1 dt 1 , use (2.3) with α replaced by α + β, and the fact that S has a countable base. (It is important to note that we allow y k = y 1 ).
Combining (5.3)-(5.6) we obtain for a.e. t 1 This agrees with what we obtain from the right hand side of (5.2): This completes the proof of our Lemma when t 1 > 0. When t 1 = 0, it follows from (5.3)-(5.4), and then (2.4) and (2.38) that This agrees with (5.7) for t 1 = 0, and the rest of the proof follows as in (5.8).
As a byproduct of our proof we now show that for any continuous compactly supported f 1 . To see this, note that by (5.3)- By (5.6), for a.e. t 1 this equals S S P β t 1 (y 1 , dz) u β (z, y 1 ) f 1 (y 1 ) dm(y 1 ). (5.12) But as noted in the paragraph containing (2.2), u β (z, y 1 ) is bounded, uniformly in z for y 1 in the compact support of f 1 (y 1 ). Hence (5.12) is bounded by Thus we have shown that for some dense D ⊆ R 1 and the right hand side is finite by (2.5). (5.10) then follows using right continuity. We will also need the following.
Let us define the process X to be the periodic extension of X; that is, It will be convenient to write The key observation is that for all α > 0. This follows from Hence for any continuous compactly supported f The rotation invariance of µ or ν is equivalent to the statement that for all 0 < t 1 < · · · < t k and r > 0 and all f j ≥ 0 continuous with compact support. This will follow once we show that the joint distribution of (X, ζ) is invariant under time shifts. That is, ((X t+v ) t≥0 , ζ) has the same distribution (under ν) as ((X t ) t≥0 , ζ) for all v > 0. To prove this we will first show that for all k and all α 1 , . . . , α k , for all g of the form g(ζ) = (1 − e −αζ )e −βζ , and where By (5.20) the left hand side of (5.23) is finite for all α 1 ≥ α, while the right hand side is finite since By uniqueness of Laplace transforms, it then follows that for Lebesgue a.e. k-tuple (t 1 , . . . , t k ), and in particular, for any r > 0, It follows that for any r > 0, But it is easily seen that F k (t 1 +r, . . . , t k +r) = F k (t 1 , . . . , t k ) so that, canceling the common constant factor e − k j=1 α j r , we obtain and thus comparing with (5.23) we have that for each r > 0 It follows that This holds for any k, in particular for k = 1, so that using (5.21) we have Thus by Fubini we can find a set T ⊆ R + with T c of Lebesgue measure 0 such that for all t 1 ∈ T we have (5.32), and (5.31) holds for a.e. t 2 , . . . , t k .
Using the boundedness and continuity of the f j and the right continuity of X t it follows from the Dominated Convergence Theorem that (5.31) holds for all (t 1 , t 2 , . . . , t k ) ∈ T × R k−1 + . Let now f 1,n be a sequence of continuous functions with compact support with the property that f 1,n ↑ 1. By the above, (5.31) with f 1 replaced by f 1,n holds for all (t 1 , t 2 , . . . , t k ) ∈ T n × R k−1 + for an appropriate T n ⊆ R + with T c n of Lebesgue measure 0. In particular T * = ∩ n T n = ∅, and we can apply the Monotone Convergence Theorem with t 1 ∈ T * to conclude, spelling out g(ζ), that for all t 2 , . . . , t k . Applying once again the Monotone Convergence Theorem for α → ∞ we obtain for all t 2 , . . . , t k . Fix a compact K ⊆ S. If we replace f 2 by a sequence f 2,n ↑ 1 K and then set t 2 = 0, we can conclude from (5.34) and (5.25) that the finite measures 1 K (X 0 ) · ν and 1 K (X 0 ) · ρ r * ν agree on the σ-algebra generated byX t , t ≥ 0 and ζ. Since this holds for any compact K ⊆ S, so do ν and ρ r * ν. Here and below we use the notation f * ν(A) = ν(f −1 (A)). It remains to prove (5.23). Using (5.19) and using (5.2) We now make the change of variables r = t 1 , s j = t j − t j−1 (j = 2, . . . , k), s 1 = t − t k + t 1 (with accompanying limits of integration 0 < r < s 1 , s j > 0) and then integrate out r. Writingŝ j := s 2 + · · · + s j ands := k j=1 s j , the expression in (5.36) is thereby transformed to e −α σ(j)ŝj 1 − e −α σ(j)s P s j (y j−1 , dy j ) f σ(j) (y j )Γ y k ,y 1 (ds 1 ) ds 2 · · · ds k .
We now turn to the right hand side of (5.23) and try to rewrite it in terms which are similar to our last expression for the J k (σ)'s. Using k j=1 α j t j = k j=1 α σ(j) t σ(j) we have Let us now fix σ ∈ P k and consider the corresponding term in (5.39) Changing variables (r 1 = t 1 , r j = t j − t 1 for j = 2, . . . , k) and integrating out r 1 , this can be rewritten as (5.41) Summing first over all permutations σ ∈ P k with σ(1) = i and then over i we obtain (5.42) Using (5.19) we can express this as Using Lemma 5.1 we then see that with the convention that t 1 = 0. Once more making the change of variables Γ y k ,y 1 (ds 1 ) ds 2 · · · ds k .
The restriction property
Let B ⊆ S be open and set Clearly, t → X t is right continuous. With and we show in [22,Section 4.5] that X = (Ω, G t , G, X t , θ t , P x ) is a Borel right process with state space B and potential densities with respect to the measure m(dx) restricted to B. Here we have used the convention that u(∆, y) = 0 and that X t (ω) = ∆ when t = +∞. It follows as before that uniformly in x, u(x, y) is locally bounded and continuous in y.
Let {L x t , (x, t) ∈ S × R + } be the family of local times for X used in the construction of µ.
It is easy to see that L x t is a CAF for X and It follows that { L x t , (x, t) ∈ B × R + } are local times for X. We can then define the loop measure µ for X by the formula (In our definition (2.6) of µ we assumed that X had continuous potential densities. We do not know if u(x, y) is continuous in x. However, the continuity of u(x, y) was only used to guarantee a nice family of local times for X, and by the above this is inherited by { L x t , (x, t) ∈ B × R + }). Note that B c = S − B does not contain ∆. Proof of Theorem 6.1: It suffices to prove this for F of the form But this is clearly which is precisely what we obtain from µ k j=1 f j (X t j ) by proceeding as in (6.9).
7 Transformations of the loop measure 7.1 Mappings of the state space LetS be another locally compact topological space with a countable base, and let f : S →S be a topological isomorphism. Then forms a Borel transition semigroup onS. LetΩ be the set of right continuous paths ω inS ∆ =S ∪ ∆ with ∆ / ∈S, and such that ω(t) = ∆ for all t ≥ ζ = inf{t > 0 |ω(t) = ∆}. Then withX t (ω) = ω(t) it follows from [23, Section 13] thatX = (Ω,F t ,X t , θ t ,P x ) is a Borel right process. Furthermore, ThusX has continuous potential densitiesū(x, y) = u(f −1 (x), f −1 (y)) with respect to the measure f * m.
Unit Weights
We say that a random variable T ≥ 0 is a unit weight if Of course, since ζ is invariant under loop rotation, 1/ζ is an example of a unit weight. (7.20) will provide another example, which is be used in the proof of Theorem 7.2 to determine how the loop measure transforms under a time change.
Let I ρ (X) be the collection of measurable functions on Ω which are invariant under loop rotation.
Lemma 7.1 If T is a unit weight then for all F ∈ I ρ (X) (7.8) Proof: By invariance of µ we have that for each u > 0 and F ∈ I ρ (X) Since ζ is invariant under loop rotation, this implies that for any u > 0 This shows that
Time change by the inverse of a CAF
where ν A is a Borel measure on S. We suppose that P x (A t = ∞, t < ζ) = 0 for all x ∈ S and t > 0. (This is the case, for instance, if ν A (K) < ∞ for each compact K ⊂ S.) By the argument at the beginning of the proof of Lemma 2.1, (7.13) defines a CAF of X. Let S A denote the fine support of A; that is, the set of x ∈ S such that P x (R = 0) = 1 where R = inf{t > 0 | A t > 0}, see [23,Section 64]. Because m is a reference measure and v(x) := E x (exp(−R)) is a 1-excessive function, Let τ t be the right continuous inverse of A t , and let Y t = X τt . Then Y = (Ω, G t , Y t , θ t , P x ) is a Borel right process with state space S A and lifetime A ζ , see [23,Theorem 65.9] for details, noting that [23, (60.4)] applied to exp(−A t ) allows us to assume that A is a perfect CAF. Here θ t (ω) = θ τt(ω) (ω). Using the change of variables formula, [5, Chapter 6, (55.1)], we see that so that Y t has continuous potential densities u(x, y) with respect to the measure ν A (dy) on S A . (In the last step we used the fact that for any measurable function h s , we have ∞ 0 h s dA s = ∞ 0 h s dL y s ν A (dy). It suffices to verify this for functions of the form h s = 1 [0,t] (s), in which case it is obvious). Furthermore, since S A is the fine support of A, L x τt is continuous in t for each x ∈ S A , see [11, p. 1659], and of course E y L x τ∞ = u(y, x). It follows that {L x τt , (x, t) ∈ S A × R + } is the family of local times for Y . See [23, Theorem 65.6] for additivity.
It will be convenient to use the canonical notation X ′ = (Ω, , which is the same as X t (ω), but we use the notation X ′ t to emphasize that it is associated with the measures P ′x which we now define. If we set g(ω)(t) = ω(τ t (ω)) we have Y t = X t • g and put Using [23, (62.20)], compare (2.23), we see that if t 1 < · · · < t n , Let Q ′x,x be the measure in (2.22) associated with X ′ . Using (2.23) and the fact that X ′ also has potential densities u(x, y) we have that if Hence for all measurable F g * Q x,x (F ) = Q ′x,x (F ) , ∀x ∈ S A . (7.17) Before considering general ν A 's, we first study the special case where the measure ν A is equivalent to m. Thus ν A (dx) = h(x)m(dx) where h is a measurable function on S with 0 < h(x) < ∞ for all x. It follows from (2.28) that A t = t 0 h (X s ) ds, (7.18) and thus S A = S. Let µ, µ ′ be the loop measures for X, X ′ respectively. Proof of Theorem 7.2: Define the unit weight T (ω) = h(ω(0)) A ζ (ω) . (7.20) By (7.8) we have µ (F ) = S Q x,x (T F ) m(dx) for all F ∈ I ρ (X). It is easy to see that F ∈ I ρ (X ′ ) implies that F • g ∈ I ρ (X). Noting that A ζ = ζ • g, and using (7.17) The last equality used (2.27) and the fact that ν A (dx) = h(x)m(dx).
We next show how to combine Theorems 7.1 and 7.2. Let S ′ be another locally compact topological space with a countable base, and let f : S → S ′ be a topological isomorphism. With h as above, let m S ′ be the measure on S ′ defined by m S ′ (dy) := f * (h m S ) (dy). (7.22) It follows from the discussion in sub-section 7.1 and the present sub-section that if we setX ′ t := f (X τt ) = f (Y t ) and {P ′x , x ∈ S ′ } the measures induced by {P x , x ∈ S}, thenX ′ = (Ω,F t ,X ′ t , θ t ,P ′x ) is a Borel right process with continuous potential densities u ′ (x, y) = u(f −1 (x), f −1 (y)) (7.23) with respect to the measure f * (h m S ) = m S ′ on S ′ . Set f ♯ (ω)(t) = f (ω(τ t )) and let µ,μ ′ be the loop measures for X,X ′ respectively. Combining Theorems 7.1 and 7.2 we obtain Corollary 7.1 f ♯ * µ (F ) =μ ′ (F ) , ∀F ∈ I ρ X ′ . (7.24) Remark 7.1 Let D, D ′ be two simply connected domains in the complex plane and let f be a conformal map from D onto D ′ . Let X be Brownian motion in D. Since the potential density for X with respect to λ D , Lebesgue measure on D, is not continuous, (it has a logarithmic singularity on the diagonal), X does not fit into the framework of this paper. Nevertheless, we argue by analogy. Let h(x) = |f ′ (x)| 2 . ThenX ′ is a Brownian motion in D ′ , and f * (h λ D )(dy) = λ D ′ (dy). It follows formally that (7.24) would yield [18,Proposition 5.27], the conformal invariance of Brownian loop measures.
We now turn to a general CAF as in (7.13), Our results are not as complete as (7.19), but see the Remark following the proof of Theorem 7.3.
For any B ⊆ S let L B (X) be the σ-algebra generated by the total local times {L x ∞ , x ∈ B} of X, and let µ ′ be the loop measure for X ′ . Recall that L ′x ∞ = L x ∞ • g so that Consider F ∈ L S A (X). Since A ζ ∈ L S A (X) and A ζ > 0, P x a.s. for all x ∈ S A , by replacing F in (7.26) by F/A ζ and then integrating with respect to dν A (x) we can deduce immediately that , ∀F ∈ L S A (X) . (7.28) Although S A may not be locally compact, X ′ inherits from X all the properties needed to define µ ′ as in (2.6), and it then follows as in (2.22) that µ ′ (F ) = S A Q ′x,x 1 ζ F dν A (x). (7.29) By (7.17) this shows that Noting that A ζ = ζ • g, (7.28) and (7.27) then imply our Theorem. · · · dL xn s n−j+1 dL x 1 s n−j+2 dL x 2 s n−j+3 · · · dL x j−1 sn , that is, we measure n-tuples of times in which x 1 , . . . , x n are visited in cyclic order. If n = 2 and x 1 = x 2 , then L x 1 ,x 2 t = L x 1 t L x 2 t , but in general L x 1 ,...,xn t is not a product of the corresponding local times. Let M(X) denote the σalgebra generated by the multiple local times. When Supp (ν A ) = S we can show that (7.25) holds for all F ∈ M(X) = M(X ′ ). When S is finite, it is known that M(X) = I ρ (X), [13, p. 24]. For diffusions, see [21], especially Corollary 2.9, and for more general processes see [3].
We leave to the interested reader the task of combining Theorem 7.3 with spatial transformations as in Corollary 7.1. | 9,023 | 2012-11-21T00:00:00.000 | [
"Mathematics"
] |
When are negative emissions negative emissions?
Negative emission technologies (NETs) have seen a recent surge of interest in both academic and popular media and have been hailed as both a saviour and false idol of global warming mitigation. Proponents hope NETs can prevent or reverse catastrophic climate change by permanently removing greenhouse gases from the atmosphere. But there is currently limited agreement on what “negative emissions” are. This paper highlights inconsistencies in negative emission accounting in recent NET literature, focusing on the influence of system boundary selection. A quantified step-by-step example provides a clear picture of the impact of system boundary choices on the estimated emissions of a NET system. Finally, this paper proposes a checklist of minimum qualifications that a NET system and its emission accounting should be able to satisfy to determine if it could result in negative emissions.
View Article Online
Without immediate and comprehensive mitigation of anthropogenic greenhouse gas emissio D n O s I: , 1 t 0 h .1 e 039/C8EE03338B prevention of catastrophic impacts from global warming may come to depend on the deliberate removal of massive quantities of greenhouse gases from the atmosphere. This concept of "negative emissions" gained increasing attention after its initial inclusion in the 4 th IPCC assessment report in 2009 and then in the vast majority of integrated assessment models in the 5 th report in 2014. The ambitious "well below 2°C" target of the 2015 COP21 Paris climate agreement may already be unachievable without negative emissions [1][2][3][4] .
Indeed, all modelling scenarios in the 2018 IPCC special report on limiting global warming to 1.5°C rely on the removal of carbon dioxide from the atmosphere. 5 In a 2017 review 5 , all included 1.5°C scenarios depended on permanently removing an annual 3 to 30 gigatonnes of CO 2 from the atmosphere-up to 80% of current global emissions-before the end of this century.
Some of the technologies designed to achieve negative emissions are based the encouragement of natural processes that uptake and store atmospheric carbon, such as afforestation (AF) 7,8 and soil carbon sequestration (SCS). 7,9 Other negative emission technologies (NETs) rely on human engineering, such as capture and storage of CO₂ from the combustion of biomass for energy (bioenergy with carbon capture and storage, BECCS), 7,10 or the chemical removal of CO₂ directly from air 7,11 and subsequent storage (direct air capture with storage, DAC-S).
Achieving massive-scale negative emissions requires an unprecedented fast-tracking of technological development and an unprecedented level of cooperation within and between political, industrial, and consumer stakeholders. 12,13 For while negative emission strategies are based on proven technological components, such as biomass cultivation, energy use, logistics, and gas storage, each of these components have financial costs, greenhouse gas emissions, and other environmental and social impacts. NETs rely on connecting these components into complex systems, further increasing risk and uncertainty. 13 An overarching necessity is to ensure that the total effect of all components within the complex system of a NET is the permanent removal of atmospheric greenhouse gases, and thereby a net decrease in the greenhouse gas concentration in the atmosphere.
If massive-scale negative emissions are to be achieved, a clear, comprehensive, and consistent definition of when negative emissions occur is a necessary prerequisite for the effective implementation of incentives, regulations, and accounting. However, this is not currently the case. The 2018 IPCC special report 5 defines "negative emissions" explicitly only as the "removal of [atmospheric] greenhouse gases," though long-term storage is a feature of all greenhouse gas removal technologies discussed. A recent report by the European chemical industry 14 argues that CO 2 use-including in fuels and other short-lived chemicals-can be counted as "negative emissions," regardless of the origin of the CO 2 or fate of the product. A proposed EU policy 15 the emission accounting of manure-based biogas allows methane diverted from traditional waste treatment to be labeled "negative emissions." That is, even if the biogas is later combusted and the resulting CO View is Article Online released to the atmosphere, since the emissions were prevented from happening during the waste treatment process itself, they are considered "negative." The above examples each come from a document relevant to policy and industry decision makers, and each example uses the term "negative emissions to refer to a different concept, including the removal (and implicit storage of) atmospheric greenhouse gases), the utilization of greenhouse gases in products, , and the prevention or delay of greenhouse gas emissions. This paper shows that this lack of clear consensus is due to the use of different system boundaries when considering what to count as "negative emissions." This paper reviews the variations in the explicit and implicit usage of the term "negative emissions" and related terminology in studies from 2014 to 2018. To clarify the impact of system boundary selection on the perceived emission balance of a NET, a simplified example is used to illustrate the differences in emission accounting for a hypothetical NET when different system boundaries are used. Finally, we propose an operational set of minimum criteria for evaluating whether a system could result in negative emissions.
Literature review methods
Recent peer-reviewed academic literature on negative emissions was collected via a Web of Science topic search on the terms "negative emission," "negative CO 2 ," "negative greenhouse gas," "CO 2 negative," and "carbon negative" from 2014 through June 2018. This search resulted in 433 citations, of which 147 were neglected; 31 for lacking peer-review, 14 for being inaccessible, and 102 for being on unrelated topics, such as carbon electrode design or short-term natural carbon fluxes.
In the remaining 286 studies, the use of the term "negative emissions" was evaluated on whether the usage encompassed: the physical removal of greenhouse gases from the atmosphere, the storage of atmospheric greenhouse gases, and whether the storage, was specified to be permanent, were collected in a tally spreadsheet, which is provided in the supplemental information to this paper.
Overview of the usage of negative emissions terminology in recent literature
Half of the 286 papers reviewed provided an explicit definition of the term "negative emissions" (or "negative CO 2 ," "negative greenhouse gas," "CO 2 negative," and "carbon negative," if those were used additionally or instead). Table 1 shows that these explicit definitions were not always consistent. 143 (50%) of studies specified the removal of atmospheric greenhouse gas, but only 82 (29%) specified any sort of storage of the greenhouse gas. 23 papers (9%) considered negative emissions to be generated from processes that explicitly re-release the gas into the atmosphere in the short term, such as via conversion to fuel. A further 33 studies (12%) also explicitly considered negative emissions to come from processes that do not remove greenhouse gases from the atmosphere, such as carbon capture and storage (CCS) of fossil fuel emissions or emission reduction technologies. The full list of papers reviewed, tagged with usage features is available in the supplemental information as a sortable spreadsheet. 1: including the alternate terms: "negative CO2", "negative greenhouse gas", "CO2 negative", and "carbon negative" 2: Including 11 of the 27 (41%) life cycle assessments papers that are in the literature review For the full article list with usage features marked per article, please refer to the supplemental information.
If implicit usage is also considered, a further 34% (84% total) of the studies likely consider negative emissions to involve the removal of atmospheric greenhouse gases, and a further 44% (65% total) likely include the permanent storage of greenhouse gases. However, there is high variance in how clearly these terms are
Energy & Environmental Science Accepted Manuscript
used, and without an explicit definition, it is ambiguous whether these are intended as necessary or optional criteria of negative emissions. The most consistent usage feature was that 70% (199) of papers state that purpose of negative emissions is to reduce global warming or, more specifically, to reduce atmospheric concentrations of greenhouse gases.
Therefore, logically, the quantity of greenhouse gas in the atmosphere must be lower after NET use than before it. This requires not only that greenhouse gases are removed from and stored outside the atmosphere, but also ensuring that any greenhouse gases emissions that result from this process are not greater than the amount of greenhouse gases removed. Of the papers reviewed, only five 16,17,18,19,20 (2%) explicitly acknowledge that all emissions associated with the use of NETs, including those upstream and downstream of the removal process, are needed determine whether a technology actually results in in an overall decrease of atmospheric greenhouse gases. The system boundary selection example below illustrates the potential importance of these upstream and downstream emissions on the overall GHG balance of an NET system. Avoided emissions are an estimation of emissions that are assumed to be potentially prevented by switching from a system of reference to the system studied in the LCA, based on specific assumptions of future system behaviour. They are a feature of a method to account for the emission-reduction potential of co-products that are produced in a system analysed by an LCA, known as "displacement" or "system expansion." 21 As an example, in Beaudry et al (2018), 22 a palm oil biorefinery is assumed to produce -among other productsethanol and electricity. The study assumes that this ethanol and electricity directly replace gasoline and coalbased electricity, and therefore, if the biorefinery is in operation, these fossil fuels will not be used. It then follows that the greenhouse gas emissions attributable to the production and use of the gasoline and electricity from coal will also not be produced; these emissions are said to be "avoided." The study then subtracts these "avoided emissions" from the emissions of the biorefinery. As the resulting difference is a negative number, the biorefinery is said to result in negative emissions.
Avoided emissions and enhanced oil recovery
In short, the negative greenhouse gas emission numbers in these LCAs are not physical emissions. They are the potential reduction of emissions in a hypothetical scenario where a specific technology replaces another specific technology, and will change depending on the reference scenario selected. Avoided emissions refer
Energy & Environmental Science Accepted Manuscript
to the potential of adding a smaller, but still positive, amount of greenhouse gas to the atmosphere. This is in contrast to how the term negative emissions is used in the context of pathways to reach 1.5°C mit targets, which refers to greenhouse gases that are physically removed from the atmosphere. Some LCAs 23,24 further conflate these terms by lumping together physical removal and assumed avoidance of greenhouse gases while other LCAs simply use the term negative emissions to refer to avoided emissions without any removal of atmospheric greenhouse gases at all. 23,26,27 The full list of LCAs in the review that conflate the term negative emissions with avoided emissions is available in the supplemental information.
The term negative emissions is also sometimes used to refer to CCS applied to fossil fuels, particularly in papers within the field of enhanced oil recovery (EOR). 28,29,30 In EOR, CO 2 is used to extract otherwise unrecoverable oil from otherwise depleted oil fields. Some EOR studies label the balance of CO 2 (CO 2 trapped in the geological formation minus CO 2 released when oil is combusted) negative emissions, regardless of the origin of the CO 2 , which, in most cases, is either extracted from natural formations or from the flue gas from the combustion of fossil fuels. Storage of fossil CO 2 , however, does not involve any removal of CO 2 from the atmosphere, and therefore cannot result in any decrease in atmospheric greenhouse gases.
Furthermore, even when removed atmospheric CO 2 is used and permanently stored in the process of EOR, the CO 2 emissions from the use of the recovered oil can be greater than the atmospheric CO 2 removed and stored, thus leading to a net increase in atmospheric CO 2 . In at least one study, 31 the emissions from the combustion of the recovered oil -which otherwise would have remained in the ground-are excluded from the CO 2 balance, and the whole quantity of stored CO 2 is considered negative emissions.
How system boundaries selection matters for negative emissions
To illustrate the impact of system boundary selection on the estimated greenhouse gas emissions of a NET system, the following example will look at the way the emission estimate changes for a steel mill implementing BECCS based on different boundary selection. The system itself, an overview of which is shown in Figure 1, is the same in every case; it is only our perspective of it that changes, as indicated by the different system boundary lines.
Figure 1. Different technology assessments boundaries applied to a BECCS steel plant.
A "gate-to-gate" system only considers the emission within the steel plant itself. Bioenergy assessment also often includes the uptake of atmospheric carbon by the biomass without also including the biomass processing and transport in a "cradle-to-gate" or "cradle-to-grave" system, the latter also including the impacts of product use and waste processing after they leave the steel plant.
In bioenergy systems, unintended (or "indirect") land use change may also need to be included to achieve a full picture of the system impacts. Figure 1 provides an overview of system boundaries common in technology assessment. A "gate-to-gate" system considers only the processes and emissions that occur within the steel plant itself. Studies on bioenergy often use a modified gate-to-gate boundary, that additionally includes an amount CO 2 removed by biomass from the atmosphere that is assumed to be exactly equal to the CO 2 emitted from its combustion, and thus the bioenergy is considered to be "carbon neutral." A "cradle-to-gate" system includes upstream emissions and resource use, such as land use, cultivation, harvest, and transportation of biomass and the production of other inputs, but nothing downstream of the factory gate, such as product use or waste treatment. The inclusion of both upstream and downstream emissions is a "cradle-to-grave" system. Since bioenergy systems often involve changes in land use that many not be temporally or geographically immediate to the cultivation or harvest or biomass, a further expansion of the boundaries to encompass indirect land use change (ILUC) is also used. The below example illustrates that without a "cradle-to-grave" perspective, it is not possible to determine whether the use of a NET will result in an overall decrease in atmospheric greenhouse gas concentration and thereby achieve negative emissions.
This example, illustrated in Figure 2, considers a steel mill that first implements capture and geologic storage of its CO 2 emissions (CCS), and later also switches its energy source from coal to wood charcoal (BECCS). For clarity, the example assumes a heavily simplified steel mill that produces one type of steel and derives all its energy and emissions from the combustion of one type of fuel. Since the focus of this example is CO 2 emissions, the mining of iron ore and use of the steel product are excluded. The quantities used in this example, while based on real data, are heavily simplified and intended only for illustrative purposes. This example illustrates only a single possible configuration, and many other choices of technology, production methods, and transport, are available. Furthermore, a full inventory of greenhouse gas emissions from the supply chain of steel production, charcoal, and CCS would be much more extensive, but is neglected the assumption that the charcoal is "carbon neutral." (e) shows a simplified "cradle-to-grave" system, including in its boundaries the CO 2 absorbed by the wood that is lost in the charcoal production process, the CO 2 emissions from biomass harvest and transport, the CO 2 emissions of charcoal production, and the CO 2 emissions CO 2 storage. (f) is a variant where the production of biomass has significant emissions from indirect land use change (ILUC). (g) is a variant where the geologic storage of CO 2 leads to the production and combustion of fossil fuels whose CO 2 emissions outweigh the CO 2 stored. Online the atmosphere. In (b), the steel mill has installed CCS technology that captures 90% of the CO 2 produced at the mill. However, the energy required for carbon capture increases the mill's coal consumption to 0.5 t, thus increasing the total amount of CO 2 produced by combustion to 1.3 t. The CCS technology captures 1.2 t of this CO 2 , which is then sent to for storage in a geologic formation. The uncaptured 0.1 t of CO 2 is still emitted to the atmosphere. Therefore, from a gate-to-gate perspective, the addition of CCS reduces the steel mill's atmospheric CO 2 emissions from 1.0 t to 0.1 t. Figure 2(c)-(g) assume that the steel mill with CCS that has also switched its energy source from coal, a fossil fuel, to charcoal, a biogenic fuel. Fossil fuels contain carbon that has been removed from the carbon cycle for geologic time periods, and CO 2 emissions from fossil fuels increase the level of CO 2 into the atmosphere.
In contrast, CO 2 emitted via the combustion of biogenic fuels contains carbon that was recently removed from the atmosphere via photosynthesis of growing biomass. Theoretically, if the biomass harvested for combustion is replaced by an equivalent amount of new planting, the replacement biomass will eventually absorb an equivalent amount of CO 2 from the atmosphere, resulting in a net zero addition of CO 2 to the atmosphere. In a system emitting fossil CO 2 , the maximum impact of CCS is that emissions can be reduced to near-zero. If a system emits biogenic CO 2 , it is possible to generate a flow of CO 2 from the atmosphere to some form of permanent storage, thus generating negative emissions.
In this example, the charcoal has a lower energy content than coal, therefore 0.7 t is necessary to provide the same amount of power as the 0.5 t of coal in (b). In Figure 2 In Figure 2(d), the system is extended to include the assumption that the charcoal used is "carbon neutral." That is, since the combustion of the charcoal resulted in generation of 1.4 t of CO 2 emissions, the charcoal is assumed to have been produced from biomass that removed exactly 1.4 t of CO 2 from the atmosphere.
Therefore, from the perspective of a "gate-to-gate with carbon neutral biomass" system, a net 1.2 t of CO 2 is estimated to be permanently removed from the atmosphere via BECCS.
Energy & Environmental Science Accepted Manuscript
transport and storage. In (d), it was assumed that biomass absorption of CO 2 was equal to the CO 2 it produces when it is combusted, neglecting any losses between photosynthesis and combustion. The emission accounting for the cradle-to-grave system includes these losses, which encompass an additional 0.4 t of CO 2 absorbed from the atmosphere that is re-emitted during charcoal production. Furthermore, biomass harvest and transport here use energy from fossil fuels, emitting 0.1 t of CO 2 . For CO 2 transport and storage, 0.1 t of fossil CO 2 is emitted while providing the energy needed to transport, inject, store, and monitor the CO 2 . Leakage of CO 2 from storage is assumed to be negligible. In total, the cradle-to-grave boundaries encompass 1.8 t of CO 2 removed from the atmosphere via photosynthesis, of which 1.2 t is captured after combustion for energy and stored in a geologic formation, and 0.6 t is emitted to the atmosphere during charcoal production and from CO 2 capture losses. Additionally, 0.2 t of fossil CO 2 is emitted to the atmosphere during the upstream processing of biomass and the downstream processing of CO 2 . Overall, the cradle-to-grave perspective accounts for an additional 0.4 t of CO 2 removal and 0.6 t of CO 2 emissions than is estimated by using the gate-to-gate system boundaries of (d). Overall, a net 1.0 t CO 2 is estimated to be permanently removed from the atmosphere via BECCS. Nothing in the system has changed, but more of the supply chain is now included in the boundaries used to estimate the emission balance.
Online
Quantified estimates of negative emissions should take into account, as fully as possible, all greenhouse gas removals and emissions in the cradle-to-grave system, including indirect emissions when pertinent (e.g. from indirect land use change or the combustion of system coproducts such as EOR oil). While any emissions estimate is limited by the available data, the use of as broad a system boundary as possible minimized the possibility of inconsistent or short-sighted system boundary selection leading to emission estimates that are misleading, contradictory, and possibly very wrong.
Further consideration for biomass-based NETs
As several NETs rely on the large-scale cultivation of biomass, it is relevant to briefly highlight the limitations of the above example with regard to biomass production and use, particularly as it only describes a single possible system configuration. In the above example, the bioenergy system of cultivation, harvest, processing, and combustion, by itself (excluding CCS) resulted in a positive balance of CO 2 emitted to the atmosphere. However, depending on the method of cultivation and processing, bioenergy can be carbon positive, carbon negative, or carbon neutral. 35,36 Factors that influence the emission balance of bioenergy systems include the growth rate and harvest frequency of the biomass, the preparation of the land for biomass cultivation (direct land use change), the energy intensity and energy origin of biomass harvest, transport, and processing, and the management of soil and biomass residues, among others. 36 Furthermore, while significant emissions from ILUC were included in the example for illustrative purposes, whether and how much land use change occurs, direct or indirect, is highly specific to the geographic considerations, such as existing available land and land use patterns, of each bioenergy system. 37 Besides the physical considerations of the biomass system, the accounting method can significantly influence the estimated emissions of a bioenergy system, particularly for slow-growth biomass such as forestry. In particular, as highlighted in Daystar et al (2015), 35 negative emissions. While cradle-to-grave system analysis is not within the scope of all research on NETs, it is vital for researchers and decision-makers to be aware of the system boundaries they explicitly or imp As shown in the simplified example above, emission negativity cannot be determined without accounting as fully as possible for all emissions and removals of greenhouse gases in the cradle-to-grave system. Based on the most common defining elements seen in explicit and implicit usage of the term "negative emissions," and keeping in mind the goal of negative emissions-reducing atmospheric level of greenhouse gases-four key criteria can be considered "minimum qualifications" for determining whether a technology results in negative emissions: 1. Physical greenhouse gases are removed from the atmosphere.
2. The removed gases are stored out of the atmosphere in a manner intended to be permanent.
3. Upstream and downstream greenhouse gas emissions associated with the removal and storage process, such as biomass origin, energy use, gas fate, and co-product fate, are comprehensively estimated and included in the emission balance.
4. The total quantity of atmospheric greenhouse gases removed and permanently stored is greater than the total quantity of greenhouse gases emitted to the atmosphere.
While the above criteria require a cradle-to-grave system perspective for emissions accounting, they do not endorse a specific methodology for emission accounting, as evaluating the merits and limitations of the different accounting practices is outside the scope of this paper. However, a clear distinction should always be made between physical negative emissions, as defined above, and the emission reduction potential of one technology in comparison to another (avoided emissions), that can appear as negative numbers in LCAs.
The use of the term "negative emissions" for both physical removals and assumed avoidance has a particular risk for counterproductive misunderstanding in decision-making and incentive design.
Furthermore, the impact on atmospheric greenhouse gas concentrations is just one of several impacts that a negative emission technology could have that may affect global warming. Others include changes in albedo 41 the response of natural carbon sinks 42 or a rebound effect of increased consumption 43 . Additionally, other environmental impacts, such as biodiversity loss, acidification, and water use, also require consideration when evaluating the utility of a specific NET. 41,44 It is also important to leave space for impacts that are currently beyond our knowledge-the unknown unknowns-and to adapt analysis as understanding of the impacts of negative emissions increases.
Finally, it should be emphasised that negative emission technologies are nascent and the scale on which they could be effectively implemented is uncertain. Preventing catastrophic climate change is a race against the
Conflicts of Interest
There are no conflicts to declare. | 5,748.4 | 2019-04-10T00:00:00.000 | [
"Philosophy"
] |
FinQA: A Dataset of Numerical Reasoning over Financial Data
The sheer volume of financial statements makes it difficult for humans to access and analyze a business’s financials. Robust numerical reasoning likewise faces unique challenges in this domain. In this work, we focus on answering deep questions over financial data, aiming to automate the analysis of a large corpus of financial documents. In contrast to existing tasks on general domain, the finance domain includes complex numerical reasoning and understanding of heterogeneous representations. To facilitate analytical progress, we propose a new large-scale dataset, FinQA, with Question-Answering pairs over Financial reports, written by financial experts. We also annotate the gold reasoning programs to ensure full explainability. We further introduce baselines and conduct comprehensive experiments in our dataset. The results demonstrate that popular, large, pre-trained models fall far short of expert humans in acquiring finance knowledge and in complex multi-step numerical reasoning on that knowledge. Our dataset – the first of its kind – should therefore enable significant, new community research into complex application domains. The dataset and code are publicly available at https://github.com/czyssrs/FinQA.
Introduction
Financial analysis is a critical means of assessing business performance, and the consequences of poor analysis can involve costs of billions of dollars (Jerven, 2013;MacKenzie, 2008). To facilitate high quality, timely decision making, professionals -such as analysts or investors -perform complex quantitative analysis to select information from financial reports. Such analysis demands advanced expertise in reasoning among heterogeneous (structured and unstructured) data sources and performing complex numerical reasoning, for example, comparing financial ratios of profitability or growth. These challenges are compounded 1 https://github.com/czyssrs/FinQA by an exponentially expanding collection of company financial documents (MacKenzie et al., 2012;Lange et al., 2016) such that it is genuinely unclear whether dedicated human effort can produce fiscal analysis of sufficient quality for current decision making. This poses an interesting question: can we automate such deep analysis of financial data?
A few NLP studies in Question Answering (QA) explored the numerical reasoning capabilities needed to answer questions correctly. For example, the DROP dataset (Dua et al., 2019) focused on Wikipedia-based questions that require numerical reasoning, e.g., "Where did Charles travel to first, Castile or Barcelona?" needs a comparison between the times of two events. However, most prior work only targeted the general domain, where the questions involve much less calculation (mostly one-step calculation) than that of the financial domain. Financial QA is more challenging than classic QA (Rajpurkar et al., 2018; because it requires the system to spot relevant information across heterogeneous sources, such as tables and unstructured texts, and then create a numerical reasoning path to connect all the information. It also takes substantial knowledge to ask meaningful financial questions. It is not clear how well the large language models, which performed well for general-domain QA, can be adapted to answer realistic, complex financial questions. This paper introduces FINQA, a expertannotated dataset that contains 8,281 financial QA pairs, along with their numerical reasoning processes. Eleven finance professionals collectively constructed FINQA based on the earnings reports of S&P 500 companies (Zheng et al., 2021). ing processes answering these questions are made of many common calculations in financial analysis, such as addition, comparison, and table aggregation. To the best of our knowledge, FINQA is the first dataset of its kind to tackle complicated QA tasks based on the real-world financial documents.
We propose a retriever-generator QA framework to first retrieve supporting facts from financial reports, then to generate executable reasoning programs to answer the questions. Equipped with pretrained language models, such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), our proposed approach outperforms all other baselines and achieves an execution accuracy of 65.05%. Although our system outperforms the non-expert crowd (50.68%), the significant accuracy gap between the model and human experts (91.16%) motivates the need for future research.
The main contribution of this work is three-fold: • We propose the task of QA over financial data to assist financial analysis. The task emphasizes an important phenomenon for the NLP community to study and analyze how the current pre-trained models perform on complex and specialized domains.
• We construct a new large-scale dataset, FINQA, with 8,281 examples written by financial experts, with fully annotated numerical reasoning programs.
• We experiment on various baselines and find that the models are still far behind expert performance, strongly motivating future research. supat and Liang, 2015), Spider (Yu et al., 2018), TabFact (Chen et al., 2020b), etc. For reading comprehension, the dataset most related to ours is the DROP dataset (Dua et al., 2019), which applies simple calculations over texts. The top methods on DROP typically use specific prediction heads for each kind of calculation. HybridQA (Chen et al., 2020c) targets QA over both the table and the text, but not with the focus of numerical reasoning. All these existing datasets are built upon the general domain (mostly based on Wikipedia). In contrast, our dataset focus on the finance domain, which demonstrates much more complex nature in numerical reasoning questions, combining both the structured tables and unstructured texts. Another kind of QA datasets related to ours is the math word problem datasets, like MaWPS (Koncel-Kedziorski et al., 2016), MathQA (Amini et al., 2019). The task is to generate the solution programs given a short input math problem. Existing models include (Kim et al., 2020;Chen et al., 2020a,d), etc.
Financial NLP. Financial NLP has become one of the major application domains attracting growing attentions. Previous works in finance domain include risk management to detect fraud (Han et al., 2018;Nourbakhsh and Bang, 2019), sentiment analysis to assist market prediction (Day and Lee, 2016;Wang et al., 2013;Akhtar et al., 2017), opinionated Question Answering (Liu et al., 2020), such as the FiQA 2 dataset built from forums and social media. Recent works attempt to develop pre-trained models specialized for finance domain Araci, 2019), and the downstream tasks are mostly sentiment classifications. To the best of our knowledge, there is no previous work and dataset on building QA systems of numerical reasoning on financial reports.
Task Definition
Problem Formulation. Presented with a financial report consisting of textual contents E and structured table T , given a question Q, the task is to generate the reasoning program G = {w 0 , w 1 , ...w n }, where w i is the program tokens defined by domain specific language (DSL), then it is executed to get the answer A: Where {G i } is all the correct programs to evaluate to the answer. For financial tables, there is typically a description header (blue header in Figure 1), which often gives the timing information; and each row has its name on the left. Some of the financial tables may demonstrate more complicated layouts, e.g., nested structures. As a first step for this direction, in this paper we only focus on the regular layout cases for simplicity.
Each operation takes a list of arguments args n . On consulting with financial experts, as most of the accounting and financial valuation theory primarily include linear algebra, we include 10 common types of operations in our dataset. There are 6 mathematical operations: add, subtract, multiply, divide, greater, exp, and 4 table aggregation operations The table operations take arguments of table row names. We use the special token #n to denote the result from the nth step. For example, in Figure 1, the program consists of 3 steps; The first and the second division steps take arguments from the table and the text, respectively, then the third step subtracts the results from the two previous steps. Refer to Appendix A for more details of the operations and the grammars.
Evaluations. Previous studies on QA with numerical reasoning only evaluate the execution accuracy, i.e., the final results from the generated programs, such as DROP (Dua et al., 2019) and MathQA (Amini et al., 2019). However, the applications for the finance domain generally pose much higher requirements of explainability and transparency. Therefore, we also provide the gold programs for our dataset. Besides execution accuracy, we also propose to evaluate the accuracy of the generated programs. Specifically, we replace all the arguments in a program with symbols, and then we evaluate if two symbolic programs are mathematically equivalent. For example, the following two programs are equivalent programs: add(a 1 , a 2 ), add(a 3 , a 4 ), subtract(#0, #1) add(a 4 , a 3 ), add(a 1 , a 2 ), subtract(#1, #0) Note that execution accuracy tends to overestimate the performance because sometimes the model just hit the correct answer by chance; While program accuracy tends to produce false negatives since some questions may have multiple correct programs.
Data Preparation
Data Source. We develop FINQA based on the publicly available earnings reports of S&P 500 companies from 1999 to 2019, collected in the FinTabNet dataset (Zheng et al., 2021). An earnings report is a set of pages in a PDF file that outlines the financials of a company, which usually contains tables and texts. The FinTabNet dataset has annotated the tables in each report.
Data Filtering. Realistic earnings reports contain many tables not suitable for numerical reasoning tasks. Equipped with the table annotations in FinTabNet, we filter the data as follows: First, we extract the pages in earnings reports with at most one table. Second, we exclude the tables with over 20 rows, over 2 description headers, or with other complex nested structures. We also exclude the tables with tedious contents, such as catalogs, which is common in FinTabNet. As stated in §3, these over-complicated tables are out of the scope of this work. Finally, for the tables with 2 description headers, we merge them into a single header to simplify the representations. As a result, a total of 12,719 pages were selected for further annotation.
Annotation Procedure
Recruiting Expert Annotators. We post job ads on UpWork 3 and hire eleven US-based experts with professional finance backgrounds (CPAs, MBAs, etc.) Each hire is interviewed using four example report pages and asked to compose example Q&A pairs. After hiring, each annotator first goes through a training session to learn the task and the annotation interface (Appendix D). When the workers fully master the annotation process, we launch the official batches for them to work on.
An annotator can compose up to two questions for each given report page or skip if it is hard to compose any meaningful question. We pay around $2.0 for each question, which leads to an average hourly wage of $35.0. The whole data collection took around eight weeks.
We do not use popular micro-task platforms, such as Amazon Mechanical Turk (MTurk), because our preliminary studies show that many MTurk workers can not perform this task effectively. Our experiment with MTurk workers in § 4.3 further echo this observation. As most existing QA datasets were constructed by MTurk workers Dua et al., 2019;Chen et al., 2020c), it requires substantial domain-specific knowledge to compose meaningful questions that are hard for computers to answer.
Annotation Task Design. For each page selected in §4.1, the annotators are asked to (i) write a meaningful financial question, (ii) compose a reasoning program to answer the question, and (iii) to annotate the supporting fact. Each page is assigned to one or two experts for annotation. We detail each part as follows. (I) Financial question: For a given page of earnings reports, the annotators are asked first to compose a question that is "meaningful for financial analysis or learning insights of the company financial reports" and require numerical calculations to answer. We encourage the experts to write questions that require the information from both the text and the table to answer. (II) Reasoning program: After providing the question, the annotators are then asked to elaborate the operation steps to answer the question. Specifically, they compose a maximum of 5 steps of operation, where each operation has four slots: "operation", "argument1", "argument2", and "result". The "operation" is one of the ten predefined operations described in §3. An "argument" is a number or a table's row name, either from the report or a previous step's result. For operations that only use one argument, such as table aggregation, workers can leave argument2 blank. The annotation interface (see Appendix D) automatically validates the inputs to ensure correctness. (III) Supporting fact: We also ask the annotators to mark all the sentences in the text and the table rows that contain the information needed to answer the question.
Data Quality Assessment
External experts answer FINQA questions with a high accuracy and a high inter-annotator agreement. To validate the quality of the annotations, as well as to set up human expert performance upper bound, we hire another two financial professionals on UpWork. We randomly sample 200 examples from our dataset, and ask the professionals to answer the questions as well as write the operation steps, following the same procedure as in the dataset construction. The payment is $2.0 per question. For execution accuracy, they reach 92.25% and 90.06%, respectively (mean = 91.16%). For program accuracy, they reach 89.44% and 85.53% (mean = 87.49%). The agreements between the two annotators are 92.65% for execution accuracy, and 86.76% for program accuracy.
Non-expert crowd workers answer FINQA questions with a low accuracy. We also test how well non-expert MTurk workers can answer FINQA questions. We distribute the samples to MTurk 4 and take the similar process to distribute each example to two workers. We end up with an average execution accuracy of 50.68% and a program accuracy of 48.17%, which is far below the expert performance; the agreement rate is only around 60%. These results echo our preliminary study's observations for MTurk workers in §4.2. has two pieces of facts; and 11.07% has more than two pieces of facts. For the examples with more than one piece of fact, we also calculate the maximum distances between all the same example's facts. 55.48% has a maximum distance of 3 or less sentences 5 ; 24.35% has a maximum distance of 4-6 sentences; and 20.17% has over 6 sentences.
Statistics of Reasoning Programs. In the programs, the most frequent operations, add, subtract, multiply, and divide, have the distributions of 14.98%, 28.20%, 5.82%, and 45.29%, respectively. The operation division has the highest frequency, as calculating ratios is common in financial analysis. In FINQA, 59.10% of the programs have 1 step, 32.71% have 2 steps, and the rest 8.19% have 3 or more steps.
Baseline Systems
In this section, we first describe our main baseline framework FinQANet in §5.1, and then we introduce other baselines in §5.2.
5 For tables, we consider one row as one "sentence".
Financial Report
Retrieved Facts Figure 2: The retriever retrieves supporting facts (text sentences or table rows) from the input financial report.
The FinQANet Framework
As a preliminary attempt on FINQA, we propose FinQANet, with a retriever to first retrieve the supporting facts from the input financial report, then a generator to generate the program to get the answer.
Retriever The full page of the financial report can go beyond 2,000 tokens, which cannot be coped with the current popular QA models (Devlin et al., 2019). Therefore we first retrieve the supporting facts from the input report. For the tables, we use templates to turn each row into sentences. For example, the last row of the table in Figure 1 is represented as 'the risk-free interest rate of 2006 is 5%; ...'. We concatenate each supporting fact with the question and train a classifier using pre-trained LMs like BERT (Devlin et al., 2019). Then we take the top n retrieved facts, reordered as they appear in the input report. This set of retriever results will serve as the input to the second phase. Figure 2 illustrates the retrieving procedure. Another common strategy is sliding window (Alberti et al., 2019). We take the sliding window of a fixed size with a stride to go through the report, then the windows containing all the supporting facts are marked as positive. However, we observe in the experiments that the length of the input to the program generator in the second phase greatly influences the performance. The performance of using sliding window falls far behind the previous method.
Program Generator Given the retrieved supporting facts from the retriever, the program generator aims to generate the executable program to answer the question. Figure 3 gives an overview of the program generator. The generated tokens come from 3 sources: 1) The input passage (retriever output) and the question tokens {e i }, like the numbers or the table row names.
2) The special tokens {s i } from the DSL, like the function names, predefined
Input encoder
Step memory embeddings 9413 add( ) 8249 #0 divide( Step memory embeddings ) 8249 Step ]. An LSTM is used for decoding. At each decoding step T , the program token embeddings H are fed as the input; The decoder output h T is used to calculate the attention vector att p and att h over the input and the decoding history. Then a context vector c T combines all the contextual information: Meanwhile, another attention vector att p over the input is applied to all the token embeddings: Different from other program tokens, the step memory tokens {m i } imply the reasoning path of the program. To make use of such structure information, at each decoding step indicating the end of one operation[args] unit, i.e., the step to generate the ending parentheses in our DSL, we compute another context vector a T : Then the step memory token embedding corresponding to the current step is updated as a T . The final prediction is calculated with: During inference time, based on the grammar of the DSL, we use masks at each decoding step to ensure the structural correctness of the generated programs. In the retriever phase, we take the top n retrieved results as the input to the program generator. Therefore, for the training of the program generator, we use the retriever result on the training set (combined with the gold facts if there is any wrong prediction) as the input.
Other Baselines
TF-IDF + Single Op. We use TF-IDF to retrieve the top 2 sentences from the input report. Since the most common case in our dataset is one-step program and the most common operation is division, we take the first number from each sentence and apply the division operation.
Retriever + Direct Generation. To demonstrate the necessity of generating the reasoning programs, we keep the architecture the same as our model, but directly generating the final results.
Retriever + Seq2seq. We use a Seq2seq architecture for the generator, similar to the Seq2seq baseline in the MathQA dataset (Amini et al., 2019). A bi-LSTM is used for encoding the input, and then an LSTM is used for decoding with attention.
Retriever + NeRd. The Neural Symbolic Reader(NeRd) (Chen et al., 2020d) is also a pointergenerator based model for program generation, with the state of the art results on the MathQA dataset (Amini et al., 2019). Different from ours, it directly learns the program with nested format as a sequence, i.e., without the step memory tokens. This way the model is able to learn the program structures as patterns from very large-scale data (~40k for MathQA), but may fail on learning the reasoning paths. We keep the retriever part the same and compare with the generator part to demonstrate the usefulness of structure learning.
Pre-Trained Longformer. There are also works on modeling very long documents with thousands of characters, with the attention mechanism that scales linearly with sequence length, like the Longformer (Beltagy et al., 2020). To demonstrate the necessity of breaking up into the pipeline of retriever and program generator, we remove the retriever and directly use the pre-trained Longformer as the input encoder in the program generator, and encode the whole report. The table rows are linearized similar as in §5.1.
Experimental Results
Experiment Setups. For the retriever, we use BERT-base as the classifier (other pre-trained models perform similarly). Since most of the examples in our dataset have 1 or 2 facts, and we find that longer inputs lower the performance of the program generator, we take the top 3 ranked facts as the retriever results. For the program generator, we experiment on using BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), and FinBert (Araci, 2019) as the encoder, to test the performances of popular large pre-trained models. For all models, we use the Adam optimizer (Kingma and Ba, 2015). Check Appendix B for more details of training and parameter settings. Table 2 presents the results for all the baseline systems. We evaluate the execution accuracy (exe acc) and program accuracy (prog acc) as explained in §3. For the BERT-based retriever, we have 89.66% recall for the top 3 retrieved facts and 93.63% recall for the top 5. Using TF-IDF results in 82.91% recall for the top 5 facts. We use the same retriever results for all retriever-generator based models. Directly generating the execution results gives nearzero scores, which indicates the necessity of generating the reasoning programs. If without using the retriever-generator pipeline, but directly applying an end-to-end pre-trained Longformer model, the performance falls far behind. Because longer inputs have more numbers which put more confusions on the program generator and thus make it harder to learn. Generally, the program generators using pre-trained models perform much better than the Seq2seq baseline, as there is language modeling knowledge that can also be used for the finance domain. And larger pre-trained models give better performance, as they tend to see more financial corpus during their pre-training. FinBert (Araci, 2019) is a pre-trained model for the finance domain; its main downstream tasks are sentiment analysis. The performance of using FinBert is no better than BERT-large, mostly because its pre-training corpus is limited (~30M words from news articles).
QA Model Performance
Comparing FinQANet with the retriever + NeRd baseline (Chen et al., 2020d), it shows the improvements from learning the logical structure of the programs. We also run the program generator using the gold retriever result, shown as FinQANet-Gold. Another interesting observation is the comparisons with human performances. While there is still a large gap from the human expert upper bound, the best performing model already surpasses the general crowd performance.
Performance Breakdown
We conduct a set of performance breakdowns using the FinQANet (RoBERTa-large) model. Table 3 shows all the results.
Necessity of using both table and text. We run inferences taking facts only from a single source from the retriever. Inferences on individual source ( what is the amount of credit lines that has been drawn in millions as of year-end 2016? [1] additionally , we have other committed and uncommitted credit lines of $ 746 million with major international banks and financial institutions to support our general global funding needs , including with respect to bank supported letters of credit, performance bonds and guarantees . [2] approximately $ 554 million of these credit lines were available for use as of year-end 2016 . [1] we maintained a $ 1.4 billion senior credit facility with various financial institutions , including the $ 420.5 million term loan and a $ 945.5 million revolving credit facility . Figure 4: Error cases. In these examples, the retriever results all correctly cover the gold facts; thus we only present the gold facts, gold program, and the predicted program to study the errors of the program generator. We give more error cases in Appendix C, including the cases for the retriever errors. Questions that need more than two steps to answer are challenging. The model has a low accuracy (22.78%) on the questions that need three or more steps. Meanwhile, not surprisingly, the questions that require only one step are the easiest.
Constants in programs. Many programs in FINQA contain constants as arguments. A constant is often used to convert an English number word to another. For example, we need first to use the constant "1,000" to convert "1.5 billion" to "1,500 million" so that it can be added with "50 million". A constant is also used to explicate the implicit numbers hidden in the language. For example, to calculate "the average for the year 2012, 2013, and 2014", the program needs to use the constant "3" as the denominator, which is not mentioned explicitly in the text. As shown in Table 3, the programs with constants yield great challenges for our model, as the performance (43.88%) is much lower than that of the whole set (61.24%).
Error Analysis
We sample 50 error cases from the results of the FinQANet (RoBERTa-large) model and analyze them manually. 15% of the errors are caused by the retriever, e.g., missing facts. Half of the rest are due to the lack of financial knowledge, such as the meaning of some terminology. And the rest half are primarily numerical reasoning errors, including complex programs with multiple steps, numerical unit conversions, or resolving the ordering and matching of the numbers and the years. Many error cases involve both the numerical reasoning problems and misunderstandings of financial knowledge. We show three representative error cases in Figure 4.
Conclusion and Future Work
This paper introduces FINQA, a new expertannotated QA dataset that aims to tackle numerical reasoning over real-world financial data. The questions in FINQA pose great challenge for existing models to resolve domain-specific knowledge, as well as to acquire complex numerical reasoning abilities. We propose baseline frameworks and conduct comprehensive experiments and analysis. The results show that current large pre-trained models still fall far behind the human expert performance. This encourages potential future work on developing pre-training tasks for such realistic, complex application domains. We believe FINQA should serve as a valuable resource for the research community.
Ethical Considerations
Data Access and Licensing. We develop FINQA based on the publicly available earnings reports of S&P 500 companies from 1999 to 2019, collected in the FinTabNet dataset (Zheng et al., 2021). The FinTabNet dataset is publicly available under the CDLA-Permissive 6 license, which permits us to create additional annotations on top of the data ("Enhanced Data", §1.5 of CDLA) and publish the annotations ("Publish", §1.9 of CDLA).
Dataset Collection Process and Conditions.
For the annotation of our FINQA dataset on Upwork, we first launch interviews of the task introduction with 4 example questions, which is paid as $30, for them to try a few examples to get informed and familiar with the task. Then based on their consents to continue working on the large-scale job, we discuss with the workers to reach agreements on the compensation before starting the large-scale job. We pay around $2.0 per question, and the hourly rates are discussed and agreed upon with both sides based on the working speed of different workers. Among all eleven US-based hires, the average hourly rate is $35.0, and the minimum and maximum hourly rates are $20 and $50, respectively. The evaluation tasks follow the similar procedure, and each question is paid as $2.0.
IRB (Institutional Review Board) Approval.
This project is approved by our Institutional Review Board (IRB). The systems trained using our dataset are primarily intended to be used as augmenting human decision-making in financial analysis, but not as a replacement of human experts. Input Report AWK/2014/page_121.pdf … (abbreviate 20 sentences)... the ppaca effectively changes the tax treatment of federal subsidies paid to sponsors of retiree health benefit plans that provide a benefit that is at least actuarially equivalent to the benefits under medicare part d . the acts effectively make the subsidy payments taxable in tax years beginning after december 31 , 2012 and as a result , the company followed its original accounting for the underfunded status of the other postretirement benefits for the medicare part d adjustment and recorded a reduction in deferred tax assets and an increase in its regulatory assets amounting to $ 6348 and $ 6241 at december 31 , 2014 and 2013 , respectively . the following table summarizes the changes in the company 2019s gross liability , excluding interest and penalties , for unrecognized tax benefits: . Input Report K/2013/page_23.pdf-1 … (abbreviate 12 sentences)... underlying gross margin declined by 180 basis points in 2012 as a result of cost inflation , net of cost savings , and the lower margin structure of the pringles business . underlying sga% ( sga % ) was consistent with 2011 . our underlying gross profit , underlying sga , and underlying operating profit measures are reconciled to the most comparable gaap measure as follows: | 6,652.8 | 2021-09-01T00:00:00.000 | [
"Computer Science"
] |
Optimized Batch Process for Organic MEMS Devices †
Recently, organic electromechanical transducers have attracted intense scientific and technological interest due to their unique mechanical flexibility and their piezoelectric properties. However, the fabrication of organic MEMS devices is challenging. For example, a lift-off process cannot be used on polymers, because of the solvent in photoresists. Here, we present a straightforward and low-cost batch process for organic MEMS devices using standard micromachining techniques. As organic material we used the ferroelectric (co-)polymer poly(vinylidene fluoride-trifluorethylene) (P(VDF-TrFE)). The integration of the polymer in a CMOS-compatible process was optimized in terms of deposition and patterning of the polymer and the corresponding metal layers. Micromachined devices, such as capacitors and cantilevers, were fabricated and analysed. The ferroelectric perfomance was evaluated by electrical and electromechanical measurements. Our first results indicate that the proposed fabrication process is reliable resulting in well-functioning organic MEMS devices. We measured as piezoelectric constant a d33 of −32 pm/V with our organic P(VDF-TrFE) capacitors.
Introduction
The number of different piezoelectric materials used in micromachined sensors, actuators and energy harvesters is quite large [1][2][3][4].The most common used materials are ceramics like lead zirconate-titanate (PZT) or aluminium nitride (AlN).Recently, piezoelectric organic materials gain increasing importance in MEMS devices, especially in the field of flexible and soft electronics.Due to the low-cost and their remarkable electromechanical properties, organic MEMS have the potential to enlarge or even replace standard sensor and actuator solutions made of inorganic MEMS.However, the integration of functional organic materials in a standard microfabrication process is challenging, because typical chemicals, such as the solvent acetone, attack or even destroy most of the polymers.
To overcome these problems and to exploit the full potential of this material class for MEMS/NEMS devices, we present a simple and low-cost batch process.
Within the most well known electroactive polymers falls the class of fluoropolymers with poly(vinylidene fluoride) (P(VDF)) and its copolymers, such as poly(vinylidene fluoride70-trifluorethylene30) (P(VDF70-TrFE30)).This family of functional materials is characterized by its unique ferroelectric and piezoelectric properties.P(VDF70-TrFE30) has a piezoelectric constant d33 of around −32 pm/V and is therefore significantly above the piezoelectric constant of, e.g., AlN with a d33 of only 4-6 pm/V [3,4].In addition, the mechanical flexibility of the polymer P(VDF70-TrFE30) with a Young's Modulus of 2 GPa is superior to that of ceramics, such as AlN with a value of 310 GPa [5].Thus, electroactive polymers are predestined for energy harvesters [1].Further characteristic material parameters of P(VDF70-TrFE30) are presented in Table 1.There are several approaches for the fabrication of organic MEMS devices in literature [6][7][8].However, some process steps lead to problems.For example, in most cases the electrode is patterned on the polymer by a lift-off process [6,7].Or the design of the bottom-and top-electrode to each other is not considered [6][7][8].These errors will be explained and solved in more detail here.By tailored process steps, it is possible to fabricate organic MEMS devices in a CMOS-compatible process flow.
Fabrication Process
In the following, we present a process flow for the fabrication of organic MEMS devices.We focus on the realization of a cantilever-type MEMS device with the polymer P(VDF70-TrFE30) as electromechanical transducer.The most important process steps used to fabricate the cantilevers are presented in Figure 1a.In order to investigate the electroactive performance of the polymer thin films, standard capacitors were fabricated in parallel.
In the first step, we start with a 4″ silicon on insulator (SOI) wafer coated with silicon-rich LPCVD silicon nitrid (SiN) and PECVD silicon dioxide (SiO2).Then, the bottom electrodes were formed by a conventional lift-off process.For this purpose, 50 nm of chromium (Cr) was deposited by electron-beam evaporation as an adhesion promoter for a 200 nm gold (Au) layer to increase the electrical conductivity of the bi-layered metallization.Subsequently, a 1 µm thin polymer film was spin-coated at 3000 rpm from solutions of P(VDF70-TrFE30) powder in methyl ethyl ketone (MEK).The polymer powder was purchased from Piezotech/Arkema Group.The weight ratio of P(VDF70-TrFE30) and MEK in the solutions were 8%.For a slow evaporation of the solvent, the polymer film was kept for 10 min at 80 °C being the evaporation temperature of MEK.Followed by an annealing process at 130 °C for 2 h in a vacuum oven to increase the crystallinity of the material.To form the top electrodes, 200 nm Au was evaporated over the entire surface of the polymer and patterned by a wet-chemical etching process.An adhesion promoter was not necessary, since the adhesion between Au and P(VDF70-TrFE30) was sufficient.In literature lift-off processes are commonly used to pattern the top electrodes [7,8].However, standard solvents for photoresists, such as propylene glycol monomethyl acetate (PGMEA), attack the polymer such, that large cracks result all over the polymer layer (see Figure 1b).Therefore a wet chemical etching process was developed to protect the polymer by the Au layer.Another critical step is the anisotropic etching of the polymer.To ensure low underetching and steep edge characteristics a reactive ion etching (RIE) process with 50 sccm O2 gas flow, 150 mTorr chamber pressure and a plasma power of 150 W was used, whereas the top electrode serves as hard mask for the etching process.Furthermore, the top electrode must be designed larger than the bottom electrode.This minimize any electrical short cut between the top and the bottom electrode if underetching occurs.A SEM image of the cut through an organic cantilever is shown in Figure 1b.It can be clearly seen that the top electrode is about 1 µm larger than the bottom electrode.In addition, the image shows that the chosen etching parameters provide low underetching and steep etching edges.The polymer film was poled only once after the fabrication with an electric field of 100 V/µm by applying a DC voltage.
Results and Discussion
In order to study the functionality of the organic MEMS devices, we measured their piezoelectric perfomance, thus focusing on the ferroelectric properties of the thin polymer films.For this purpose, the electrical and electromechanical properties of the organic material P(VDF70-TrFE30) were determined.
For the electrical characterization the relative permittivity εr was measured using micromachined fabricated capacitors having different electrode areas.A linear fit curve was used to extract the unit area capacitance in order to minimize the error due to parasitic capacitances arising from the setup.We obtained a relative permittivity εr = 14 of P(VDF70-TrFE30) at room temperature and at 1 kHz, respectively.In addition, the electric polarization (P) was measured as a function of the applied electric field (E) via a Sawyer-Tower bridge.The resulting P-E hysteresis is presented in Figure 2a.Higher remanent polarization (Pr) implies stronger piezoelectric response and the coercive field (Ec) indicates the required poling conditions.We obtained a remanent polarization of 6.2 µC/cm 2 and a coercive field of ±50 V/µm.Basically, our organic P(VDF70-TrFE30) capacitors show the typical characteristics of this polymer.The hysteresis remained unchanged over several cycles up to 10 4 at 10 Hz.
The electromechanical behaviour of the polymer films was determined by AFM measurements.For this purpose, a silicon nitride cantilever was placed on the top electrode of a capacitor.To measure the piezoelectric displacement, an AC signal was applied.The vertical displacement of the polymer deflects the probe and hence the strain (S) is measured as a function of the electric field.The resulting S-E butterfly curve is shown in Figure 2b.The capacitor was attached to the sample holder by vacuum, so that clamping effects did not occur.To calculate the piezoelectric constant, we used the relation (1) and we obtained a piezoelectric constant d33 = −32 pm/V for P(VDF70-TrFE30).Besides that, the coercive field of ±50 V/µm of the material can be clearly recognized here as well.
Both electrical and electromechanical measurements demonstrate that even after several cycles up to 10 4 at 10 Hz the organic MEMS devices exhibit a stable piezoelectric activity, giving confident in our device technology.
Conclusions
We have presented a straightforward and low-cost batch process for organic MEMS devices.The resulting electroactive performance has been evaluated demonstrating excellent piezoelectric performance.We have identified and solved the challenges in the fabrication of organic MEMS devices.For P(VDF70-TrFE30), we measured a piezoelectric constant of −32 pm/V with our organic capacitors.The P-E hysteresis and S-E butterfly curve remained stable at room temperature even after 10 4 cycles at 10 Hz.
Figure 2 .
Figure 2. Ferroelectric performance of the polymer P(VDF70-TrFE30) at room temperature and at 10 Hz: (a) Polarization as a function of applied electric field measured with a Sawyer-Tower bridge (P-E hysteresis, •); (b) Strain as a function of applied electric field measured with an AFM (S-E butterfly curve, • measurement 1, • measurement 2). | 1,961.4 | 2018-11-28T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
Fragileness of Exact I-ball/Oscillon
I-ball/oscillon is a soliton-like oscillating configuration of a real scalar field which lasts for a long time. I-ball/oscillon is a minimum energy state for a given adiabatic invariant, and its approximate conservation guarantees the longevity. In this paper, we examine the stability of a special type of I-ball/oscillon, the"exact"I-ball/oscillon, whose adiabatic invariant is exactly conserved. We show that the exact I-ball/oscillon is stable in classical field theory, but not stable against small perturbations depending on the value of its adiabatic invariant. Accordingly, the exact I-ball/oscillon breaks up in the presence of the fluctuations with corresponding instability modes. We also confirm the fragileness of the exact I-ball/oscillon by the classical lattice simulation.
I. INTRODUCTION
I-ball/oscillon is a non-topological soliton-like solution in real scalar field theory [1][2][3] whose formation process is nonlinear. This lump of a real scalar field is a minimum energy state and understood as the coherent oscillation around the (local) minimum of the potential. A lot of studies on the I-ball/oscillon indicate that the lifetime of I-ball/oscillon is extremely long. The longevity of the I-ball/oscillon could leave an imprint on cosmology/astrophysics which can be tested by some experiments. (See e.g. [28][29][30] for gravitational waves from the I-ball/oscillon. See also [31].) The longevity of I-ball/oscillon is guaranteed by the approximate conservation of adiabatic invariant I [13,32]. The adiabatic invariance is the analog to the invariance of the phase space volume for a periodic motion in classical mechanics. It is also known that the adiabatic invariant corresponds to the particle number [33] in the non-relativistic limit, and hence, the approximate conservation of I is regarded as the particle number conservation in this limit.
In this paper, we study the stability of a special type of the I-ball/oscillon, the "exact" I-ball/oscillon, which appears in a real scalar theory with a particular type of potential [13].
In this particular case, the adiabatic invariant of I-ball/oscillon is exactly conserved. We show that the exact I-ball/oscillon is stable in classical field theory. We also find that the exact I-ball/oscillon is not stable against small perturbations depending on the value of the adiabatic invariant. Accordingly, the exact I-ball/oscillon breaks up in the presence of the fluctuations corresponding to the instability modes. We also confirm the fragileness of the exact I-ball/oscillon by the classical lattice simulation.
Organization of this paper is as follows. In section II, we introduce the exact Iball/oscillon, which conserves the adiabatic invariant exactly. In section III, we show that the exact I-ball/oscillon is stable, but the perturbation around it has resonance bands and it breaks the exact I-ball/oscillon depending on its parameters. In section IV, we show the setup and the result of our lattice simulation to confirm fragileness of the exact I-ball/oscillon. Finally, in section V, we conclude the results.
II. EXACT I-BALL/OSCILLON
A. Exact conservation of adiabatic invariant I-ball/oscillon is a localized oscillating scalar field configuration which minimizes the energy for a given value of the adiabatic invariant I. The adiabatic invariant of a real scalar field φ is defined by where ω is the angular frequency of the oscillating field and the overbar denotes the average over one period of the oscillation.
The adiabatic invariant Eq. (1) is approximately conserved when the scalar field oscillates in the potential dominated by the quadratic term (∼ φ 2 ). In other words, the I-ball/oscillon is not an exact periodic motion in time, and hence, the time average over one period of the oscillation in Eq. (1) is not exact. Due to this approximate conservation of I, I-ball/oscillon is generally quasi-stable and eventually decays by emitting scalar waves [25].
However, the adiabatic invariant is exactly conserved when the solution of the equation of motion is completely separable into the time and the spatial dependent parts [13].
Let us assume an I-ball/oscillon solution with the separated form as where ψ(x) is the theoretical oscillon profile and f (t) is a time periodic function normalized as max{f (t)} = 1. The fact that the time and spatial dependencies are determined separately is crucial when we consider the I-ball/oscillon stability. In this case, the adiabatic invariant I is evaluated as where overbar denotes the time average over the one period of oscillation. Because f (t) is exactly periodic, the adiabatic invariant I is constant in time, and hence, is conserved exactly.
Such a separated solution like Eq. (2) is possible only when the scalar potential of φ takes the form of where κ < 0 is a dimensionless constant and m and M are mass parameters 1 [13].
For a later purpose, we redefine the parameters bỹ with which In what follows, we use the latter expression withm 2 → m 2 andκ → κ. Eventually, the potential depends only on two parameters, m and κ.
Substituting the solution Eq. (2) into the equation of motion we obtain This leads to the following two equations: where ζ is a constant. As we will see in the next section, ζ determines the adiabatic invariant of the I-ball/oscillon for given potential parameters.
Therefore, because the solutions of equations of motions under the potential Eq. (4) are independently determined by Eqs. (10) and (11) the adiabatic invariant is exactly conserved 2 as shown in Ref. [13].
B. The exact I-ball/oscillon solution
As the I-ball/oscillon corresponds to the minimum energy states for a given value of I, the I-ball/oscillon profile ψ is expected to be spherical, ψ( x) = ψ(r). Then, Eq. (11) has an 1 The parameter M corresponds to the renormalization scale.
exact solution, where ψ c and R is the central value and the radius of the I-ball/oscillon. It should be noted that the spatial size of R does not depend on ζ, and hence, does not depend on I for given potential parameters.
The period of the oscillation can be obtained as follow. By multiplyingḟ to Eq. (10) and integrating over time t, we obtain with C f being a constant. Because the potential of f is a even function of f , f oscillates in As we defined f = ±1 at the turning points of the motion, C f is represented by with whichḟ = 0 at the turning points. As a result, we find a period of the oscillation to be which is determined by ζ for given potential parameters. We plot the frequencies for the potential parameter κ = −0.3 and κ = −0.1 in Fig. 1. The upper bound for the range of ζ is determined from Eq. (18) with T = finite, Besides, the region of ω ≥ m is not physically attractive 3 , and hence, we consider only closed range of ζ. 3 More precise lower bound on ζ is determined by dE/dI < m. Similarly, the energy of the I-ball/oscillon is given by where Here, we used Eqs. (10), (3), and (12). Notice thatḟ 2 , ω and ψ c depend on ζ through Eqs. (12), (15) and (18). Thus, the adiabatic invariant is determined only by ζ for given model parameters κ and m.
A. Stability
As shown in [25], the non-exact I-ball/oscillon decays by emitting relativistic radiation of the scalar field. Here, let us discuss whether the exact I-ball/oscillon obeys this decay process.
To see whether a relativistic radiation is emitted from the exact I-ball/oscillon, we consider a small perturbation around the I-ball/oscillon, φ(x) = φ I (x) + ξ(x). Here, φ I (x) is the I-ball/oscillon solution obtained in the previous section. Then, the equation of motion of the perturbation ξ(x) is given by where Eq. (23) shows that the perturbation does not have source terms, and the I-ball/oscillon solution is stable if ξ = 0 initially. Thus, unlike the case of the non-exact I-ball/oscillon, the exact I-ball/oscillon does not decay by emitting relativistic radiations.
The absence of the source term of ξ stems from the fact that the equation for the Iball/oscillon profile is the same as the equation of motion Eqs. (10) and (11)
B. Fragileness
In this subsection, we discuss the instability of the exact I-ball/oscillon against small perturbations and derive the growth index (the Floquet exponent) of the instability.
As κ < 0, the perturbation around the vacuum, i.e. φ = 0, has an infinite mass, i.e. m 2 ξ = ∞ > 0. 4 Thus, the perturbation around the vacuum is never excited. Around the I-ball/oscillon solution, on the other hand, the perturbation has a finite non-derivative kernel, where F (t) = κm 2 log f (t) 2 . and we neglected O(ξ 2 ) term in Eq. (23). Thus, there could be instability modes around the I-ball/oscillon solution. 4 The mass at the origin of this potential is divergent because has the eigenfunctions and eigenvalues, 2E n = 2ω ξ (n 1 + n 2 + n 3 + 3/2) .
where H n denotes the Hermite polynomial of order n, and the eigenfunctions satisfy λ n (|x| → ∞) = 0.
By expanding ξ(x) by, we find that each q n (t) satisfies Here, we defined where we used Eq. (15). The exponential factor in Eq. (28) is identical to that of the Iball/oscillon, i.e.
If Eq. (31) has growing modes with non-trivial Floquet exponents, the I-ball/oscillon solution can be unstable.
It should be noted that the perturbation of the zero mode, n = 0, is redundant. As the radius of the I-ball/oscillon does not depend on ζ but is determined by κ and m, the addition of the zero mode perturbation just enhances the value of I, which leads to an I-ball/oscillon with a slightly larger I.
Because we confine ourselves to the spherical configuration in our numerical simulation, we consider the spherical perturbations to analyze the result of our simulation.
Let us rewrite Eq. (27) in the spherical coordinate, where the eigenfunction is given by R nr, (r)Y m (Ω) with Y m (Ω) being the spherical harmonics.
In Fig. 2, we show the instability mode, n r = 3, for κ = −0.3 and ζ = 0.4. The figure shows that the perturbation grows for O(100)/m. We also show the corresponding wave function R nr=3,0 (r). Once the perturbation grows around the I-ball/oscillon, it is expected to be dissociated and breaks up into smaller configurations.
In Fig. 3, we also show the Floquet exponent µ the exact I-ball/oscillon cannot keep its configuration anymore and is expected to be broken up. We will confirm these behaviors by the classical lattice simulation in the next section.
A. Setup
In this subsection we briefly explain the setup of the simulation. The procedure of the simulation is similar to that of Ref. [25]. Because the lowest energy configuration of the scalar field φ is spherically symmetric in three-dimensional space, the equation of motion of φ is represented by where Here, we introduced a small parameter to avoid the numerical instability caused by the singularity of the effective mass at φ = 0. We have confirmed that the simulation results are independent of this small regularization term .
For the boundary condition, we use the two following conditions.
• At the other boundary r = L ( R), we impose the absorbing boundary condition (see Appendix A for details). Under this condition, the radiation of the real scalar field emitted from the I-ball/oscillon is absorbed at the boundary so that we correctly calculate the time evolution of I-ball/oscillon.
As the initial condition of φ, we use the theoretical I-ball/oscillon configuration Eqs. (13)-(15) for a given ζ ini with 1% random fluctuations. We also seṫ as an initial condition ofφ. We have confirmed that the exact I-ball/oscillon is completely stable in the absence of the random fluctuations.
The other simulation parameters are shown in Table I. Here, the units of the field, time, space, etc. are taken to be m −1 , that is, We utilize the same lattice simulation code in [25], in which the time evolution is calculated by the fourth-order symplectic integration scheme and the spatial derivatives are calculated by the fourth-order central difference scheme.
B. Result
We numerically calculate the time evolution of the exact I-ball/oscillon φ and its energy E from to see how it behaves in the presence of the instability modes derived in Sec. III.
Stability
First, we show the result of the stable exact I-ball/oscillon which has no strong instability resonance bands. The result is given in Fig. 4, which shows that the exact I-ball/oscillation does not break up even in the presence of the tiny fluctuation. This result is consistent with the fact the exact I-ball/oscillon for ζ ini = 0.3, 0.2 does not have the instability modes. The sudden change of the energy at mt 60 is for the same reason as the case of Fig. 4. We find that the exact I-ball/oscillon energy strikingly decreases around the instability bands.
Fragileness
Next, we show the result of the unstable exact I-ball/oscillon. The results of the simulations are shown in Fig. 5. 5 Comparing the result with our analytical calculation (see Fig. 3), we find that the energy of exact I-ball/oscillon strikingly decreases at around instability bands showed as green color bands in Fig. 5. This can be interpreted that the initial fluctuations appended to the exact I-ball/oscillon grow exponentially and deform the exact I-ball/oscillon profile 6 .
Our results also suggest that the exact I-ball/oscillon ends up with another exact I- 5 We have confirmed that the exact I-ball/oscillon for the given ζ ini without initial fluctuations is stable within simulation time. 6 Instability bands should be wide and strong enough for the oscillon decay.
ball/oscillon profile with a smaller I after the decay process. In fact, both the cases in Fig. 5 converge to the smaller but finite energy at mt 10 6 . The case with ζ ini = 0.4 is particularly suggestive. In this case, the I-ball/oscillon first decays when the instability modes in the narrower band around ζ = 0.4 and n r = 3 grows, and ends up with the I-ball/oscillon with ζ 0.15. However, there is a broader instability band for ζ 0.15 and n r = 2, with which the the exact I-ball/oscillon further decays very quickly. As a result, the exact I-ball/oscillon profile for ζ ini = 0.4 converges to the smaller I-ball/oscillon solution.
In our analysis, we only consider the radial modes of the fluctuation. The exact Iball/oscillon can have more instability bands in the three dimensional case (see Eqs. (31) and (41)).
V. CONCLUSIONS
In this paper, we examine the stability of the exact I-ball/oscillon. The exact Iball/oscillon has been considered to be stable since it has the exactly conserved adiabatic invariant. Its stability is also expected because the perturbations around the exact I-ball/oscillon obey the field equation without the source terms.
However, we have found that the exact I-ball/oscillon is not always a stable solution and the perturbation around it has growth modes depending on the size of the adiabatic invariant. Thus, in the presence of the fluctuations with the corresponding instability modes, the exact I-ball/oscillon cannot keep its configuration anymore and breaks up eventually.
The mechanism of the exact I-ball/oscillon decay in this paper is completely different from the previous study [25], in which I-ball/oscillon decays by emitting relativistic radiations of the scalar field. Our results in this paper suggest that it is necessary to consider both decay processes, the decay by radiation and the decay by the instability modes, when we estimate the lifetime of generic I-ball/oscillon. | 3,603.6 | 2019-08-29T00:00:00.000 | [
"Physics"
] |
Gauge theories and quantum gravity in a finite interval of time, on a compact space manifold
We study gauge theories and quantum gravity in a finite interval of time $\tau $, on a compact space manifold $\Omega $. The initial, final and boundary conditions are formulated in gauge invariant and general covariant ways by means of purely virtual extensions of the theories, which allow us to"trivialize"the local symmetries and switch to invariant fields (the invariant metric tensor, invariant quark and gluon fields, etc.). The evolution operator $U(t_{\text{f}},t_{\text{i}})$ is worked out diagrammatically for arbitrary initial and final states, as well as boundary conditions on $\partial \Omega $, and shown to be well defined and unitary for arbitrary $\tau =t_{\text{f}}-t_{\text{i}}<\infty $. We illustrate the basic properties in Yang-Mills theory on the cylinder.
Introduction
Perturbative quantum field theory mainly focuses on the calculation of S matrix amplitudes, which describe scattering processes among asymptotic states, where the incoming and outgoing particles are separated by an infinite amount of time. This approximation is good for most practical purposes, especially in collider physics. However, it is just an approximation. From a theoretical point of view, it does not provide a completely satisfactory understanding. A more powerful and general approach is required, where the key issues (such as locality, renormalizability and unitarity, among the main ones, and then symmetries, anomalies, the anomaly cancellation, etc.) are understood without making this simplification.
It is possible [1] to formulate perturbative quantum field theory diagrammatically in a finite interval of time τ = t f − t i , and on a compact space manifold Ω, so as to move all the details about the restrictions to finite τ and compact Ω away from the internal sectors of the diagrams (apart from the discretizations of the loop momenta), and code them into external sources. The usual diagrammatic properties apply, or can be generalized with little effort. This way, the evolution operator U(t f , t i ) can be calculated perturbatively between arbitrary initial and final states, with arbitrary boundary conditions on ∂Ω. Unitarity, that is to say, the equality U † (t f , t i )U(t f , t i ) = 1, can be studied diagrammatically by means of the spectral optical identities [2]. The theory is renormalizable whenever it is so at τ = ∞, Ω = R D−1 , where D denotes the spacetime dimension. Purely virtual particles are introduced by removing the on-shell contributions of some physical particles, and all the ghosts, from the core diagrams, as explained in [2], and trivializing their initial and final conditions.
In this paper we consider the cases of gauge theories and gravity in detail, because certain issues that are specific to local symmetries deserve attention, when τ is finite and the space manifold Ω is compact. For example, we must specify the initial, final and boundary conditions without breaking the local symmetries. We cannot just use the gauge potential A a µ and the metric tensor g µν , for this purpose. Nor can we use the field strength F a µν , and the curvature tensors R, R µν , R µνρσ , because they are not invariant. What comes to the rescue is the purely virtual extension of gauge theories and gravity formulated in ref.s [3,4], which is based on the introduction of extra bosonic fields, together with their anticommuting partners. The extra fields can be used to perturbatively "dress" the non invariant fields and make them invariant: we can build invariant gauge fields A µ d , invariant quark fields ψ d , and an invariant metric tensor g µνd . The ordinary physical quantities, such as the S matrix amplitudes and the correlation functions of the usual it, like rearranging the diagrammatics, and making a projection on the space of states, to define the physical space. The projection defines the final, physical theory.
The new diagrammatics is built by removing the on-shell contributions of all the ghosts χ gh , and possibly some physical particles χ ph , from the diagrams of the extended theory, at every order of the perturbative expansion. This is done in one of the following equivalent ways: i) a certain nonanalytic Wick rotation [8,9], ii) dropping the spectral optical identities associated with the unwanted on-shell contributions [2] from the Cutkosky-Veltman identities [10,11] (which are the diagrammatic versions of the unitarity equation S † S = 1), or iii) replacing the standard diagrams with suitable combinations of non-time-ordered diagrams, as shown in ref. [12].
In addition, one has to make the projection mentioned above. At τ = ∞, the projection amounts to ignore the diagrams that have χ gh and χ ph on the external legs. When τ < ∞, it amounts to choose trivial initial and final conditions for the coherent states of χ gh and χ ph . The final theory is unitary, provided all the ghosts of the extended theory are rendered purely virtual.
Certain aspects of the construction of theories with purely virtual particles resemble what we normally do to gauge-fix a gauge theory. We first extend the theory by including unphysical excitations, such as the Faddeev-Popov ghosts, and project the extension away at the end. The crucial difference is that, in the case of purely virtual particles, no symmetry is there to help us. This is why we need to switch to a different diagrammatics, before making the projection.
It is worth to stress that the extended theory is just a mathematical tool to get to the correct, final theory. It is not possible to solve the problem of ghosts by just changing the viewpoint on a theory, or focusing on different quantities (e.g., "in-in" correlation functions, instead of "in-out" ones, or different prescriptions for the propagators, such as the retarded potentials, instead of the Feynman one, and so on), or moving back and forth among negative norms, unbounded Hamiltonians, non-Hermitian Hamiltonians, negative probabilities, etc. None of these operations really changes the theory: they just change the reference frame, so to speak, within the same theory. Even the Lee-Wick idea of making "abnormal particles" decay [13] cannot solve the problem 3 , because a theory with unstable ghosts is still a theory with ghosts. Necessarily, it must be abandoned at some point, in favor of a different theory, and the switch from one to the other must be a radical operation that cuts out the sick portion, like a guillotine: this is the projection we are talking about.
The main application of the idea of purely virtual particle is the formulation of a theory 3 For Lee-Wick ghosts in quantum gravity, see [14].
of quantum gravity [9], which provides testable predictions [15] in inflationary cosmology [16]. In phenomenology, purely virtual particles open new possibilities, by evading many constraints that are typical of normal particles (see [17] and references therein). The diagrammatic calculations are not more difficult than those based on physical particles. It is possible to implement them in softwares like FeynCalc, FormCalc, LoopTools and Package-X [18].
Purely virtual particles can also be used as mere mathematical tools, to study uncommon aspects of common theories, as shown in [3,4] and here. In this paper, we are using them to deal with the local symmetries at finite τ and on a compact Ω, to express the initial, final and boundary conditions in invariant ways.
The results of this paper and [1] make us less dependent on the paradigms that have dominated the scene in quantum field theory since its birth. For example, we can study unitarity without being tied to the S matrix. This is important in quantum gravity, where proper definitions of asymptotic states and S matrix amplitudes are not available, if the cosmological constant Λ C is nonvanishing [6]: when Λ C = 0, we cannot claim that the S matrix is unitary in a strict sense. Nonetheless, the evolution operator U(t f , t i ) we build in quantum gravity is unitary for arbitrary τ < ∞. This means that the problems of the S matrix with a nonvanishing Λ C are not inherent to the issue of unitarity per se.
The paper is organized as follows. In section 2 we consider a simple warm-up toy model to illustrate some of the issues we need to face when we want to find the right eigenfunctions for the expansions of the gauge fields. In section 3 we work out the general formalism for coherent states in gauge theories. In section 4 we rearrange the Lagrangian in Yang-Mills theories to make it ready for the restriction to finite τ and compact Ω. In section 5 we introduce coherent states in Yang-Mills theories at the quadratic level. In section 6 we include the interactions. In sections 7 and 8 we illustrate the formalism in two relatively simple cases: Yang-Mills theory on the semi-infinite cylinder, and on the finite cylinder. In section 9 we formulate Einstein gravity at finite τ and compact Ω. In section 10 we extend the formulation to quantum gravity with purely virtual particles, and discuss the problems that occur in the limit τ → ∞, Ω → R D−1 , in the presence of a cosmological constant. Section 11 contains the conclusions.
A warm-up toy model
The first difficulty we meet when we want to formulate gauge theories and gravity on a compact manifold Ω, is that we do not know the eigenfunctions we should use for the expansions of the fields. In a generic setting, the eigenfunctions of the Laplacian are not the right ones. In this section we study a toy model that illustrates the main issue, as well as its solution.
Specifically, we consider the simple quadratic Lagrangian with Dirichlet boundary conditions φ = 0 on ∂Ω. The dot denotes the time derivative, while the prime denotes the space derivative. What is not clear is how to deal with the termφφ ′ . We could eliminate it by means of a redefinition of space and time, but this would complicate the investigation in another way, by mixing the boundary conditions with the initial and final conditions. Moreover, we can apply the redefinition only once (i.e., for a single field), which makes it useless in the presence of more fields with kinetic Lagrangians of the same form. It is necessary to work out a general approach that can be easily exported to the cases treated in the next sections.
We begin by working out the momentum π φ and the Hamiltonian H, which are Note that H is positive definite for every real α. Then we extend the Lagrangian to which is convenient because it contains φ and π φ as independent variables. The equations of motion must be solved with the Dirichlet boundary conditions φ = 0 on ∂Ω. There is no boundary condition on π φ , because, as we are going to see, the coherent states are not built with φ and π φ , but with φ andφ. Note that φ = 0 on ∂Ω implieṡ φ = 0 on ∂Ω. This way, the coherent states automatically vanish on ∂Ω as well. For these reasons, it is convenient to introduce the shifted momentā and addπ φ | ∂Ω = 0 to the boundary conditions. The integrated Lagrangian (2.2) can be written as The boundary conditions allow us to freely integrate by parts. The field equations can be read from (2.4). The eigenfunctions with energy ω (2.5) having normalized them as explained below. We have φ * n (x) = φ −n (x), ω −n = −ω n . The expansions of the fields in terms of these eigenfunctions read The functional integral is the integral on the variables a n (or, equivalently, the coherent states, see below). It is important to stress that the expansions (2.6) define the space of functions on which the functional integral is calculated. In this spirit, we do not need to prove, or require, that the expansions converge. The orthogonality relations obeyed by the eigenfunctions can be worked out as follows.
Multiplying by the row π φm φ m and integrating on Ω, we obtain Transposing this expression, exchanging n with m, integrating by parts where necessary, and subtracting the result to (2.7), we find Dividing by ω n + ω m , we obtain the orthogonality relations for m = −n. Choosing the normalization as in (2.5), the orthonormality relations read Now we work out the expansion of the integrated Lagrangian (2.4). Consider the righthand side of the identity (2.7). Multiplying it by a m a n /2, summing on m and n, and adding the result to (2.4), we get Formula (2.8) ensures that all the terms with m = −n drop out, and we remain with iω n (a * nȧ n −ȧ * n a n ) − 2 n>0 ω 2 n a * n a n , having halved the sum by using a −n = a * n . At this point, we define the coherent states z n = a n andz n = a * n , and proceed as usual (see [1] for a derivation in the notation we are using here). Once we include the right endpoint corrections, to have the correct variational problem, the complete action is (2.9) where z ni = z n (t i ),z nf =z n (t f ) are the initial and final conditions.
Coherent states in gauge theories and gravity
A nontrivial issue is to introduce coherent states in gauge theories and gravity, and set invariant initial, final and boundary conditions. The goal is to work in a general setting, which means without shortcuts (like choosing particular gauge-fixings), because we want to have gauge independence under control, and be able to make computations with arbitrary gauge-fixing parameters, as we normally do at τ = ∞, in Ω = R D−1 .
The properties we lay out in this section are useful for both gauge theories and gravity, because they do not rely of the particular form of the local symmetry. This is possible because, by means of the formalisms of refs. [3,4], which we review in the next sections, we can rephrase the local symmetries in a universal form, which amounts to arbitrary shifts δ Λ ϕ = Λ of certain (purely virtual) extra fields ϕ. This is precisely the trick we need to specify invariant conditions on the fields.
We start from a Lagrangian L(φ,φ) that depends on a certain number of fields φ I and their first derivatives. We assume that it can be decomposed as where L free is quadratic, and L int is the part to be treated perturbatively (which may also include certain linear and quadratic terms), to which we refer as "interaction Lagrangian". For the moment, we assume that the boundary conditions on the fields φ I are φ I ∂Ω = 0. Nontrivial boundary conditions are studied at the end of this section.
We assume that no Lagrangian term contains more than two derivatives. Higherderivative theories must be first turned into two-derivative theories (by introducing extra fields, for example). Moreover, at finite τ , on a compact space manifold Ω, we assume that terms like φ 1 · · · φ n−1 ∂∂φ n have been eliminated in favor of terms like φ 1 · · · φ n−2 ∂φ n−1 ∂φ n , by adding total derivatives. In the next sections we show how to do these and other operations while preserving gauge invariance and general covariance.
Next, we assume that L it is "orthodoxically symmetric" with respect to certain infinitesimal transformations δ Λ φ I . By this we mean that i) the functions δ Λ φ I depend only on the fields φ I , but not on their derivatives, ii) the Lagrangian satisfies What is important, in point ii), is that not only the action is symmetric, but also the Lagrangian is, i.e., the right-hand side of (3.2) is exactly zero, not just a total derivative. Next, we introduce the momenta and the Hamiltonian as usual 4 : We can work out the symmetry transformations of the momenta π I φ by means of the identities (3.2) and (3.3). We find Since δ Λφ J is linear inφ I , δ Λ π I φ depends only on φ and π φ , but not onφ.
We want to prove that the equivalent, extended Lagrangian is orthodoxically symmetric, the transformations being δ Λ φ I and (3.4).
Since the transformations δ Λ φ I and δ Λ π I φ do not depend on the derivatives of the fields, point i) is satisfied. It remains to prove the equation ( 3.5) For this purpose, note that formula (3.2) with the replacementφ I →φ I gives using (3.3). Then it is easy to check that the right-hand side of the identity (3.5) is equal to which vanishes by (3.4). We need to make a further step, because the extended Lagrangian we must start from, in the coherent-state approach, is not L ′′ , but We will also need to add certain endpoint corrections to the action, in order to have the right variational problem. This part can be ignored for the moment, because it will be easy to deal with it at the very end. It is not obvious that the total derivative L ′ − L ′′ is invariant under the transformation δ Λ . Actually, in general it is not, since (3.4) gives which is invariant only if the transformations are linear: Summarizing, if the symmetry is linear, the Lagrangian (3.6) is orthodoxically invariant.
It may seem that the requirement of having linear symmetry transformations is very restrictive. Actually, it is not, if we take advantage of the formalism developed in refs. [3,4]. Indeed, it is always possible to convert Abelian and non-Abelian gauge symmetries, as well as general covariance, into a universal linear form, by introducing purely virtual fields that do not change the S matrix amplitudes.
It is easy to check that the momenta π I φ are not guaranteed to vanish on ∂Ω. The structure of the Lagrangian ensures that π I φ (φ,φ) has the form for certain functions A IJ (φ), B IJi (φ) and C I (φ). Thus, φ I ∂Ω = 0 implies We can assume C I (0) = 0. First, note that a nonvanishing C I (0) means that the Lagrangian includes a term C I (0)φ I . This is not going to happen in the cases of Yang-Mills theories and gravity. Besides, a term like C I (0)φ I can be removed at no cost. Since we are assuming that the symmetry transformations are linear and do not involve derivatives, C I (0)φ I must be gauge invariant by itself. Besides, it is a total derivative. Thus, we can always switch to an alternative Lagrangian with the same properties, but no such term. Instead, the matrix B IJi (0) is in general nontrivial and cannot be removed, so the right-hand side of (3.9) may be nonzero. As in (2.3), it is useful to define new "momenta" because then it makes sense to add the boundary conditions As we show below, these conditions turn straightforwardly into the right boundary conditions for the coherent states. The gauge transformations ofπ I φ follow from those of π I φ and φ I . This is enough, for the moment, but in subsection 3.2 we prove π I φ andπ I φ transform in exactly the same way. By assumption (3.1) and the absence of higher derivatives, the general form of the Lagrangian L ′ is 12) where L ′ free is quadratic, and the interaction part L ′ int is independent of the time derivatives. Note that the redefinitions (3.10) do not generate time derivatives in the interaction sector. The quadratic Lagrangian, integrated on Ω, has the form and K 2 being matrices, T denoting the transpose), N = N ij N 4 symmetric matrices and N i 3 antisymmetric. Observe that, by (3.11), we can freely integrate the space derivatives by parts.
Frequencies and eigenfunctions
The eigenfunctionsπ I n (x), φ I n (x) are the solutions of the problem (3.14) with the boundary conditionsπ I n ∂Ω = φ I n ∂Ω = 0, where n is some label. We assume that the frequencies are real, because they are so in the applications we have in mind. A quick proof is as follows. The frequencies are real for τ = ∞, Ω = R D−1 , in both Yang-Mills theory and gravity. Let us denote them by ω ∞ . We can work out the frequencies ω n and the eigenfunctions at finite τ , compact Ω, by considering linear combinations of the τ = ∞, Ω = R D−1 eigenfunctions with identical frequencies ω ∞ , and fixing the coefficients by means of the boundary conditions. Eventually, the frequencies become discrete, to have solutions, but remain real.
In case of need, it is not difficult to generalize the formulas of this paper to complex frequencies. We just remark that they must appear in complex conjugate pairs, since the Lagrangian is assumed to be Hermitian.
Taking the complex conjugate of (3.14), we find thatπ I * n (x) and φ I * n (x) are also eigenfunctions, and their frequency is −ω n . In analogy with the previous section, we use n * to label them, and writē (3.15) If V denotes the range of the label n, we write V = U ∪ U * , by splitting each pair n, n * between U and U * .
The orthogonality relations can be worked out as in section 2: i) we multiply (3.14) by (π m , φ m ) and integrate the product on Ω; ii) we transpose the result of i), exchange n with m, and integrate by parts where necessary; finally, iii) we subtract the results of i) and ii).
Normalizing the eigenfunctions appropriately, we have the orthonormality relations 16) where τ n = ±1 = τ n * . The value τ n = −1 signals the presence of ghosts (fields with kinetic terms multiplied by the wrong signs). Indeed, going through the toy model of the previous section, it is easy to check that, if we change the overall sign of the starting Lagrangian (2.1), the right-hand side of (2.8) turns out to be equal to −2iω n δ mn . We then expandπ n and φ n in the basis we have just worked out: with a n * = a * n . By means of (3.16), we can invert the expansion and find the coefficients: We insert (3.17) into (3.13), and then subtract (3.14), multiplied by (π m , φ m )a m /2, summed on m, n ∈ V and integrated on Ω. Then, we use (3.16), and mirror the sum on U * into a sum on U. The result is the integrated free Lagrangian L ′ free = n∈U iτ n ω n (a * nȧ n −ȧ * n a n ) − 2 n∈U τ n ω 2 n a * n a n . (3.19) If the fields φ I have, say, r independent components, I = 1, · · · , r, the solutions of the eigenvalue problem can be arranged into r independent, complete sets of eigenfunctions, each of which can be assigned to a specific component φ I . We can split the set U into a union ∪ r I=1 U I , where U I refers to the I-th complete set. For convenience, we relabel the indices n so that their range is the same for each I, to be denoted byÛ.
Letπ IJ n and φ IJ n denote the I-th components of the n-th eigenfunction of the J-th set. Let z n (t) = a n (t) denote the column made by z I n (t) = a I n (t), I = 1, · · · , r. We have, from (3.17), 20) whereπ n , φ n , denote the block matrices made byπ IJ n and φ IJ n , whileπ * n and φ * n are the conjugate matrices. The coefficientsz I n , z I n of the expansion are the variables we call coherent "states". The inverse formula reads, from (3.18), We can rearrange (3.19) as Typically, the τ I n factor we see here does not depend on n, but just on I. At this point, it is straightforward to add the interacting Lagrangian L ′ int (π φ , φ). We recall that L ′ int does not contain time derivatives ofπ φ and φ, by construction, although it can contain space derivatives. Expanding the fields and the momenta in the basis (3.20) of coherent states, and integrating by parts when needed, we obtain an integrated interacting Lagrangian that just dependsz I n , z I n (no time derivatives). Finally, the total action is where z I ni = z I n (t i ) parametrize the initial conditions in the coherent-state approach, whilē z I nf =z I n (t f ) parametrize the final conditions. The sums appearing in (3.23), which we call "endpoint corrections", are there to have the correct variational problem. This means that the variations δz I n , δz I n , subject to the initial and final conditions δz I n (t f ) = δz I n (t i ) = 0, must give thez I n and z I n equations of motion, and no further restrictions. Note that the time derivatives ofz I n and z I n appear only inside L ′ free . This is the reason why the partial integrations that take care of the terms proportional to δż I n and δż I n are compensated by endpoint corrections as simple as those of (3.23).
Gauge transformations of coherent states
Now we study the gauge transformations of the coherent states, and the conditions to have gauge invariant amplitudes. As usual, the parameters Λ of the gauge transformations are written as Λ = θC, where θ is a constant, anticommuting parameter and C are the Faddeev-Popov ghosts. The fields φ I include C and the other fields that are necessary to gauge-fix the theory, which are the antighostsC and certain Lagrange multipliers B for the gauge-fixing (see below).
Since we are assuming the linearity conditions (3.8), we can write the gauge transformations as δφ I = θΣ IJ φ J , for some constants Σ IJ . By means of linear field redefinitions, we can always split the set of fields φ I into three subsets φ I + , φ I − and φ I 0 , where: i) the fields φ I + transform into other fields; ii) the fields φ I − parametrize the transformations of other fields; and iii) the fields φ I 0 are invariant and cannot be obtained from the transformations of other fields: The transformation law can be written as where δ l denotes the left functional derivative.
The operator ∆ has a standard "descent" structure. A well-known theorem (see appendix A for a direct proof) says that the most general solution of the problem δX = 0, where X is a local function, is X = X 0 + ∆Y, (3.25) where X 0 is a φ I ± independent local function, and Y is a local function. Consider the invariant quadratic terms that we can build with the fields φ I ± . At some point, we may need to diagonalize them. It is easy to see that we cannot build enough invariant terms, unless the diagonalization organizes the field φ I ± in "pairs of pairs". Consider a single pair φ I ± , and observe that φ I + and φ I − have opposite statistics. By (3.25), the quadratic terms in question must be contained in ∆Y . However, the expressions ∆(φ I + φ I + ), ∆(φ I + φ I − ) and ∆(φ I − φ I − ) generate just one independent quadratic term, while we need two. This means that for each pair φ I ± there must be another pair φ I ± ′ , out of which the required invariants can be built.
We can organize the fields φ I ± , φ I ± ′ into doublets. Using a notation that is ready for the applications to Yang-Mills theories and gravity (adapting the meaning of the index a), we write the doublets as where φ a and B a have bosonic statistics, whileC a and C a have fermionic statistics. In all the applications that we have in mind, this is the structure we need. We can write the transformation law as where σ 1 is the first Pauli matrix and the superscript "T " means "transpose". The φ a + expansions (3.20), Since the equations (3.14) are invariant under the symmetry, the eigenfunctions appearing in the φ a − expansions must match eigenfunctions appearing in the φ a + expansions. That is to say, we must have φ aJ + +n = σ 1 φ aJ − −n for some pairings of indices J + , J − . Then, the transformations of the coherent states read δz J + n = θz J − n , δz J − n = 0. Moreover, the z J n with such indices must also be organized in doublets, for the reasons explained above. Finally, the φ I 0 expansions identify the invariant coherent states z J 0 n . We illustrate these facts in section 5, formulas (5.12) and (5.13).
Summarizing, we can split the set of coherent states z I n into three subsets w α n , u a n and v a n (with indices α, a spanning appropriate ranges). Both u a n and v a n are doublets, with bosonic first components and fermionic second components. Their gauge transformations are δw α n = δw α n = 0, δu a n = θσ 1 v a n , δū a n = θσ 1v a n , δv a n = δv a n = 0. (3.28) We can obtain results that agree with the ones just found by repeating the analysis for the "momenta"π I φ . The redefinition (3.10) is due to the presence of the terms ∼φ I ∂ i φ J in the Lagrangian. Since the symmetry is orthodox and linear, the sum of these terms must be gauge invariant by itself. Taking into account the conventions we adopted for the fields ψ, ψ with fermionic statistics, we can write such a sum aṡ whereB IJi = B IJi if the indices I, J refer to bosonic fields, or I refers toψ, whileB IJi = −B JIi if I refers to ψ. This way, the redefinitions match (3.10) precisely. Writing the transformations δ Λ φ I = θΣ IJ φ J , as above, gauge invariance gives the condition where ǫ I is the statistics of φ I . Analyzing all the situations one by one, we can easily see that this condition is equivalent to which also gives the implication from (3.4). Thus, the old and new momenta π I φ andπ I φ transform the same way. Theπ I φ expansions and their transformations can be studied as we did for the fields φ I . Matching the eigenfunctions, we find agreement with (3.28). Alternatively, we can study the expansions of φ I andπ I φ at the same time by working directly on (3.20).
Since the Lagrangian (3.6) in gauge invariant under the transformations δ Λ φ I and (3.4), and we are assuming the linearity conditions (3.8), the Lagrangian (3.12) is invariant under δ Λ φ I and (3.29). The integrated Lagrangian L ′ free + L ′ int of formula (3.23) is invariant under (3.28), once it is written in the variables w α n , u a n and v a n and their conjugates. The action (3.23) is gauge invariant if the endpoint corrections are invariant, which occurs if they do not contain u a n andū a n . In addition, we require that they do not contain gauge trivial modes, which are v a n andv a n (which can be obtained as transformations of u a n andū a n ). Thus, the physical amplitudes are those that havē in which case the endpoint corrections, which read are manifestly gauge invariant. The restrictions (3.30) on the endpoint corrections are analogous to the restrictions we commonly apply to the S matrix amplitudes: we do not consider scattering processes involving Faddeev-Popov ghosts, or the temporal and longitudinal components of the gauge fields, among the incoming and outgoing states. Yet, sometimes it may be useful to relax these requirements, and consider diagrams with all sorts of external legs, including the ones just mentioned, to study renormalization, for example, or the gauge independence of the physical quantities, or the diagrammatic versions of the unitarity equations.
We conclude this subsection by writing down the universal structure of the kinetic terms of the coherent states, inside L ′ free . The ones of the gauge invariant sector are clearly by (3.22). Using the theorem (3.25), the universal kinetic terms of the gauge sector can be written in the form a n∈Û τ a n iω a n ∆(ū aT n σ 1u a n −u aT n σ 1 u a n ) = a n∈Û τ a n iω a n (v aT nu a n − v aT nu a n +v aT nū a n −v aT n u a n ), (3.32) the right-hand side being obtained using the properties (A.2) of appendix A.
Nontrivial boundary conditions
So far, we have been working with trivial boundary conditions φ I ∂Ω = 0. Now we treat the case of general Dirichlet boundary conditions (3.33) where f I are given functions, and x ∂Ω denotes the space variables restricted to ∂Ω. We want to show that we can reduce this situation to the previous one, with few minor modifications. In particular, the eigenfunctions, the frequencies and the orthonormality relations remain the same. First, we shift the fields φ I by some functions so that the shifted fields vanish on Ω: After the shift, we are free to integrate the space integrals by parts, to move the space derivatives that act on any ϕ I somewhere else.
By the assumptions we have made on the structure of the Lagrangian L(φ,φ), its expansion can be written as (3.36) where L 0 is ϕ-independent and L ϕ (ϕ,φ) = L free (ϕ,φ)+ interactions, by (3.1). We can ignore the C-term, since it disappears once we integrate on the space manifold Ω, by (3.35). Were it just for L ϕ (ϕ,φ) (and L 0 ) we could apply the formulation developed so far with no modifications. We want to explain how to treat the corrections proportional to A and B (which need not be perturbative). Let us define for certain functions F I andF I . We have the Hamiltonians
39) and the extended Lagrangians
Equating the two expressions ofφ I in (3.38) and using the last identity of (3.37), we get Using (3.36), (3.40) and (3.39), it is easy to work out the difference ( 3.41) If we switch the interactions off, we have L ϕ (ϕ,φ) = L free (ϕ,φ), and the functions F I (ϕ, π ϕ ) become linear. Then formula (3.41) tells us that ∆L ′ ϕ is made of linear terms, plus interactions. In particular, the quadratic part ofL ′ ϕ coincides with the quadratic part of L ′ ϕ πϕ→πϕ . At this point, we make the analogues of the shifts (3.10), They do not change the structure of ∆L ′ ϕ , because they do not involve time derivatives, and send linear terms into linear terms, interaction terms into interaction terms. As far as the quadratic part ofL ′ ϕ is concerned, it is equal to the ones of (3.12) and (3.13) with the replacements φ I → ϕ I ,π I φ →π I ϕ . Hence, if we expand the pairπ I ϕ , ϕ I exactly as we expandedπ I φ , φ I before, we obtain the same quadratic part we had before, (3.19) and (3.22), plus interactions, plus linear terms (due to ∆L ′ ϕ ). Note thatπ I ϕ vanishes on the boundary ∂Ω by construction, so to speak, since it is expanded in a basis of functions that vanish there. Yet, we recall that no convergence requirements are imposed on the expansion: the expansion itself must be taken as the very definition of whatπ I ϕ ∂Ω = 0 truly means. The same can be said of ϕ and ϕ| ∂Ω = 0. As we have already noted, the functional integral is defined by the very same expansions.
As a result, we obtain a Lagrangian that has the same structure as before, apart from including extra terms that are linear in the coherent states (and terms that are independent of them). The endpoint corrections are unmodified, because ∆L ′ ϕ does not contain time derivatives. The complete action has the form (3.23), plus the corrections due to ∆L ′ ϕ : for some, possibly time-dependent, functions h I n and k. As far as the local symmetries are concerned, the shift (3.34) does change the expressions of the transformations, unless the functions φ I 0 are gauge invariant, which they must be, because we cannot build physical quantities with unphysical boundary conditions. Referring to the splitting of φ I into the three subsets φ I + , φ I − and φ I 0 , we must require Although we may sometimes relax the requirements (3.30) on the initial and final conditions, we are definitely not going to relax the requirements (3.44) on the boundary conditions, because there is no reason to do so. Having specified this, we have proved that the situation of general Dirichlet boundary conditions (3.33) reduces to the one of vanishing boundary conditions, apart from some extra terms that are linear in the coherent states, which are no source of worry.
In the case of gravity, we also need to extend the results to interaction Lagrangians that contain arbitrarily many derivatives of the fields (as long as their number grows together with the power of some coupling constant), and show that we can rearrange the Lagrangian to have a final action with the form and the properties of (3.43). We deal with this aspect in appendix B.
In conclusion, we have developed the general theory of coherent states for local symmetries. It remains to use the results of [3,4] to arrange gauge invariance and general covariance in the way we need. We do this in the next sections. Once that goal is achieved, the results of this section, combined with those of [1], allow us to build the unitary evolution operator U(t f , t i ).
Gauge theories: rearranging the Lagrangian
In gauge theories, we need to face a nontrivial issue: how can we specify gauge invariant initial, final and boundary conditions? Giving the field strength F µν is a possibility, but only in QED, because in non-Abelian theories it is not gauge invariant. And even in QED, there remains to give gauge invariant conditions for electrons.
These problems can be solved by introducing gauge invariant fields as explained in ref. [3,4]. The goal is achieved by means of a particular purely virtual extension of the theory. The physical particles, the S matrix amplitudes and the correlation functions of common (nonlinear) composite fields (such as F a µν F µνa ,ψψ,ψγ µ ψ, etc.) do not change 5 . Nevertheless, the extension provides tools to define new, physical correlation functions, such as the ones that contain insertions of gauge invariant fields, and calculate them perturbatively. As we are going to show, it also allows us to specify gauge invariant initial, final and boundary conditions at finite τ on a compact Ω.
The extension consists of a certain set of purely virtual extra fields. In gauge theories [3] we have scalar fields φ a , together with their anticommuting partnersH a and H a , where a is the Lie-algebra index. In addition, it may be convenient to include certain Lagrange multipliers E a . The extension preserves renormalizability and unitarity. Unitarity is also the reason why the extra fields must be purely virtual: if not, the extension would propagate ghosts, and unitarity would be lost.
The crucial property, for our purposes, is that the extension allows us to switch to gauge invariant variables, and trivialize the gauge symmetry, to fulfill the conditions (3.8). The coherent states are then introduced as explained in the previous section, and the rest follows from there.
We focus on pure gauge theories, for simplicity, since there is no difficulty to add the matter fields, when needed. We separate the time and space components of the gauge fields by writing A µ = (A 0 , A). The dot on a field denotes its time derivative.
Instead of the common Lorenz gauge-fixing, given by the function ∂ µ A µa , we use the more general function ξȦ 0a + ∇ · A a , where ξ is an unspecified constant, which allows us to interpolate between different gauge choices. Then, the gauge-fixed Lagrangian is where D µ = (D 0 , D) is the covariant derivative, and F a =Ȧ a + ▽A 0a + gf abc A 0b A c are the 0i components of the field strength. The so-called special gauge [19], which we use in the examples of sections 7 and 8, is ξ = λ. The Feynman gauge is ξ = λ = 1. Like the Feynman gauge, the special gauge allows us to simplify many formulas. In addition, it allows us to keep a gauge-fixing parameter free, which is useful to study the gauge independence of the physical quantities. First, we rearrangeL gf , since no fields should be differentiated twice. For reasons that will become clear later, we also turn the derivatives contained in the gauge-fixing function onto B. We thus obtain Next, we introduce extra scalar fields φ a , and their anticommuting partners H a , transforming as [3] where φ = φ a T a , Λ = Λ a T a , Λ a (x) are the parameters of the gauge transformation, ad φ X ≡ [φ, X], and T a are the Lie algebra generators. We also introduce gauge invariant antipartnersH a and Lagrange multipliers E a . The gauge invariant fields A µd = A a µd T a are then where the subscript "d" stands for "dressed". The extension is a sort of mirror of the gauge-fixing sector. However, it must be gauge invariant. In its most convenient (and manifestly power counting renormalizable) form, it is specified by a functionξȦ 0a d + ∇ · A a d , whereξ is a free constant. It reads whereλ is another free constant. This expression of L ext is already rearranged (with respect to the expression appearing in [3]) to eliminate the double derivatives. It is easy to check that L ext is invariant under the local transformations (4.1) (for details on this, see [3]). The total action is L tot = L gf + L ext . The parametersξ andλ are part of the large arbitrariness we have, when we want to dress the elementary fields and make them gauge invariant. They are unique, however, in a power counting renormalizable context (preserving invariance under space rotations). Physically, they may parametrize different interplays between the physical process and the external environment, or the experimental apparatus.
The extension is equal to "1" on standard gauge invariant correlation functions (where "standard" means: independent of φ,H, H and E), as well as on the S matrix amplitudes, at τ = ∞, Ω = R 3 . We can prove this fact as follows. Focus on the E-dependent terms
Insert "1" in the form of the Gaussian integral with Lagrangian
Next: i) integrate on E a , which gives a functional delta function δ; ii) integrate onH a and H a , which gives a functional determinant J; iii) integrate on φ a , which appears only in δ and J; this integral gives 1, because J is there precisely for this purpose; finally, iv) integrate on Q a , which also gives 1, since the only Q a dependence that survives the first three operations is the one contained in the last term of (4.3). This chain of operations cannot be repeated as is when the insertions are φ dependent, as are those made of the invariant fields A µd . Thus, the gauge invariant insertions built with φ provide new, physical correlation functions and amplitudes. What we want to show is that these properties also allow us to study amplitudes between arbitrary gauge invariant initial and final states, with arbitrary gauge invariant boundary conditions, in a finite interval of time τ and on a compact space manifold Ω.
One might object that the fields φ a become propagating, as well asH a and H a . What are these fields, physically? They might even be ghosts, on general grounds. On top of that, we do not want to change the theory. We just want to study less common features of a standard theory.
These are the reasons why the whole extension has to be purely virtual. The extra fields φ a ,H a and H a propagate ghosts if they are treated as ordinary fields. They do not, if they are purely virtual. If the whole extension is purely virtual, it does not inject new degrees of freedom into the theory, and can be used as a mere mathematical tool to study uncommon quantities of a common theory.
Another great advantage of the extension is that it allows us to "trivialize" the gauge symmetry, by switching to appropriate dual variables. For example, we can abandon the original gauge potential A µ in favor of the gauge invariant one A µd . We can also abandon the parameters Λ of the gauge transformation in favor of If we express the gauge symmetry this way, it becomes trivial: δφ = Λ d , δA µd = 0. Then, we introduce new Faddeev-Popov ghosts C a d , by means of the identification Λ a d = θC a d , where θ is a constant, anticommuting parameter. Since the gauge symmetry is just an arbitrary shift of φ a , its closure is trivial, so we can take δC a d = 0. We can define new, gauge invariant anticommuting partners H d by means of the relations H = R(−φ, H d ).
Inverting (4.2), we can use the relations as a change of variables in the functional integral, to switch from the original variables A µ , C, φ and H to the dual variables A µd , C d , φ and H d . The switch has a trivial Jacobian determinant (if we use the dimensional regularization [21]). We do not changeC, B,H and E. It is much easier to specify gauge invariant initial, final and boundary conditions by means of the dual variables. To make the notation lighter, we put a tilde on A a µ , C a and H a , to emphasize that they are functions of A a µd , C a d , φ a and H a d , now, and then drop the "d" in A a µd , C a d , H a d . It will be sufficient to recall that, in the new notation, A a µ is inert under the gauge transformations (δ Λ A a µ = 0), and so are C a and H a . After the switch (4.5), the multipliers E a remain non derivative (differently from B), so we integrate them out. At the end (check [3] for details), we obtain the total action The trivialized local symmetry is δφ = Λ = θC, δC = θB, δB = δC = δA µ = δH = δH = 0. (4.7) Note that L tot is invariant without adding total derivatives. Thus, we are in the conditions of section 3. We can define the coherent states as explained there, and from there build the unitary operator U(t f , t i ) as explained in [1].
Gauge theories: quadratic sector
In this section we explain how to introduce coherent states in the free-field limit of gauge theories, which is the key part of the problem. In the next section it will be relatively straightforward to include the interactions. The quadratic part of the Lagrangian is practically the same as if we were working in QED. Thus, we suppress the index a and write, from (4.6), With the variables we have chosen, gauge invariance simply means invariance under the transformations δ Λ φ = Λ = θC, δ ΛC = θB, all the other fields being inert. Note that the first line of (5.1) is manifestly invariant, to the lowest order, while the terms appearing in the second line transform into one another, apart from the H-dependent ones, which are also invariant. Thus, we are in the conditions of section 3.
The field variables areΦ ≡ (φ, B, A 0 , A, C,C, H,H). From the moment, we ignore H andH, and restrict toΦ ≡ (φ, B, A 0 , A, C,C), because it is straightforward to treat H andH along the lines of ref. [1]. We discuss them anyway in the next section, when we include the interactions. For the time being, we also drop O(g).
The boundary conditions read where the list on the right-hand side collects given functions on ∂Ω. We can turn to vanishing boundary conditions by means of shifts whereΦ 0 are functions defined on the whole of Ω, which coincide with the right-hand side of (5.2) on ∂Ω. This way, the newΦ vanish on ∂Ω. Since the shift does not change the quadratic sector of the free Lagrangian, on which we are concentrating in the present section, we takeΦ 0 = 0 for the moment, and leave the rest of the discussion to the next section. Note thatΦ ∂Ω = 0 allows us to freely integrate the space integrals by parts. The momenta, which are are either gauge invariant, or transform into one another: The Hamiltonian is H = H bos + H gh , where so the extended Lagrangian L ′ of formula (3.6) is As explained in the previous two sections, it is convenient to introduce the shifted momenta (3.10), or (3.42), which arē while the other momenta are unchanged. DefiningΠ = (π φ , π B ,π A 0 ,π A ), the general form of the Lagrangian L ′ bos is where K 2 is a constant matrix and K T 2 is its transpose, while N = N ij and N 4 are other constant matrices. We do not specify them here (and, besides, most of them are just filled with zeros, like M), since they can be read directly from (5.6). It is sufficient to note that N ij 1 , N i 2 and N 4 are symmetric, while N i 3 are antisymmetric. Ultimately, we are in the situation described in general terms in subsection 3.1. We have eigenfunctionsΠ n , Φ n , with (real) frequencies ω n , where n is some label ranging in some set V. The complex conjugate eigenfunctions are those with some "conjugate" label n * , i.e.,Π * We then expandΠ and Φ in such a basis: with a n * = a * n . As before, we write V = U ∪ U * , so that each pair n, n * is split between U and U * . The orthonormality relations are (3.16). Using them, we can invert (5.8) as in (3.18), and obtain the expansion of the integrated bosonic Lagrangian, which reads iτ n ω n (a * nȧ n −ȧ * n a n ) − 2 n∈U τ n ω 2 n a * n a n . (5.9) Since we have six independent fields (for every value of the Lie algebra index a), which are the components φ, B and A µ of Φ, we can distinguish six classes of frequencies ω n . Two of them, which we denote by ω g n and ω g′ n , may depend on the gauge-fixing parameters ξ and λ, while the other four may depend on the parametersξ andλ, but not on ξ and λ.
Out of the four gauge independent frequencies, two are physical, denoted by ω ph n and ω ph′ n , and two must be quantized as purely virtual, denoted by ω d n and ω d′ n . The distinction between the two classes of gauge independent frequencies is somewhat flexible. In the absence of data (which require to make experiments about scattering processes where the restrictions to finite τ and compact Ω play crucial roles), the only theoretical constraints we have are that: a) the eigenfunctionsΠ n , Φ n , associated with each set of frequencies ω g n , ω g′ n , ω ph n , ω ph′ n , ω d n and ω d′ n , make a complete set for some component ofΠ, Φ; b) altogether, they are a complete set forΠ, Φ; c) the eigenfunctions have the right limits for Ω → R 3 . Such limits areξ andλ independent for ω ph n , ω ph′ n ,ξ and λ dependent for ω d n and ω d′ n . As an example of the flexibility we are referring to, we can consider linear combinations of solutions whose frequencies have the same limits for Ω → R 3 . As we show in the examples of sections 7 and 8, if the relative coefficients are appropriately oscillating, the mixing disappears when Ω → R 3 . This ambiguity reflects the large arbitrariness we have, when we formulate quantum field theory in a finite interval of time τ , on a compact space manifold Ω. Like the parametersξ andλ, different choices of the basis (7.2) may parametrize, in a way that remains to be clarified, different interplays between the physical process we want to study and the external environment where it is placed, or the apparatus we use to make the measurements.
In several cases, it may be helpful to first setξ =λ = 1, where the frequencies and eigenfunctions simplify and can often be written explicitly, make the choices of basis there, and then extend the choices toξ,λ = 1 by expanding in powers of δξ = (ξ − 2)/2 and δλ = (λ − 2)/2.
The gauge dependent frequencies ω g n and ω g′ n can be quantized as purely virtual or not, provided we implement this choice consistently everywhere. The physical quantities are unaffected by the choice, because they are gauge independent.
The coefficients are the coherent states Z n (t) = (z g n , z g ′ n , z d n , z d ′ n , z ph n , z ph ′ n ) = (z I n ) = (a n ). (5.11) The bosonic Lagrangian L ′ bos can be split accordingly. The two gauge dependent frequencies ω g n and ω g′ n are easy to calculate, since they must correspond, by the gauge symmetry, to those of the ghost Lagrangian L ′ gh . Repeating the procedure described above for L ′ gh , we find that the eigenfunctions we are talking about solve the standard problem △C n (x) = −ξω 2 n C n (x) in Ω, C n | ∂Ω = 0, and come in two copies (ghosts and antighosts).
The gauge transformations of the coherent states can be derived from the ones of the fields and the momenta, combined with the expansion (5.10), as explained in subsection 3.2. Since δφ = θC, δC = θB, there must be φ modes that transform into the ghost ones, and antighost modes that transform into the B ones. This means that the φ, B, C andC expansions have the structures (5.12) where the sums on n are understood, the dots collect the contributions of the A 0 and A modes, and ψ n and ψ ′ n are unspecified functions. The coherent states denoted by z φn and z φn do not contribute to the expansions of A 0 and A; the A 0 , A modes may contribute to the expansion of φ, but not to the one of B.
Gauge theories: interactions
Now that we have taken care of the quadratic part, we are ready to include the interactions. Working out the momenta π Φ from the Lagrangian (4.6), we obtain plus π φ , which we do not report here, because its expression can be read from the gauge transformations, which, by (3.4), are still (4.7) and (5.5): θπ φ = −δπC. Then, we make the redefinition (3.10). The only changes arē Then we make the shiftsΦ →Φ +Φ 0 , (6.4) whereΦ 0 are functions defined on the whole of Ω, with the sole requirement that they coincide with the right-hand side of (6.3) on ∂Ω. After the shift, the boundary conditions areΦ ∂Ω = 0, the gauge transformations are still (4.7), and we can freely integrate the space integrals by parts, to move space derivatives away from any field. It is important to stress that the conditions (6.3) apply to the Lagrangian (4.6), before even talking about momenta, so we do not have to worry about the behaviors of the momenta on ∂Ω at this stage.
Take the Lagrangian (4.6), and denote it by L tot (Φ) = L free (Φ) + L int (Φ). Once we implement the shift (6.4) on it, we obtain (6.5) where L(Φ,Φ 0 ) = L free (Φ)+ interactions. We can ignore the term ∇(ΦC(Φ 0 )), since it disappears as soon as we integrate on the space manifold Ω. The quadratic sector of L tot (Φ +Φ 0 ) coincides with L free (Φ), which is the one of L tot (Φ), up to interactions. Next, we proceed as explained in subsection 3.3. We define the momenta, redefine them according to (3.42) (that is to say, according to (6.1) with π →π,π →π), and get to the extended LagrangianL ′ ϕ . Since the quadratic sector of (6.5) is L free (Φ), the eigenfunctions coincide with those we had with trivial boundary conditions. So do the expansions in terms of coherent states (5.11). Once we integrate the Lagrangian and include endpoint corrections, to have the correct variational problem, the final action is (3.43), which just contains some linear corrections (and possibly different interactions) with respect to the action (3.23).
Once we have the action, the theory can be phrased diagrammatically. The diagrams are of the usual type, apart from the presence of external sources and the discretizations of the loop momenta [1].
When we want a transition amplitude, we must choose initial and final conditions z I n (t i ) = z I ni ,z I n (t f ) =z I nf for the coherent states. The physical degrees of freedom are the transverse components of A, which must be quantized as physical particles. Their initial and final conditions z ph ni , z ph ′ ni ,z ph nf andz ph ′ nf are free. The gauge degrees of freedom are φ, B, C andC. They can be quantized as purely virtual or not, provided the choice is implemented consistently everywhere. Their initial and final conditions are trivial, i.e., (6.6) and similarly for C andC. The purely virtual fields are A 0 , H,H and the longitudinal components of A. They are quantized as purely virtual particles, by removing their on-shell contributions to the diagrams perturbatively to all orders, according to the rules of ref.s [2,12], and setting the initial and final conditions of the coherent states associated with them to zero. This means (6.7) and similarly for H andH. The decomposition of A into "transverse" and "longitudinal" components is defined by the arrangement (5.10), after identifying the (physical vs purely virtual) eigenfunctions (5.11) and their frequencies. We illustrate these facts in the examples of the next two sections.
Note that we do not need to disentangle the physical and purely virtual degrees of freedom on ∂Ω, because purely virtual particles are not required to have trivial boundary conditions [1]. The freedom associated with their boundary conditions may describe some sort of interaction between the observer, or the environment, and the physical process we are observing.
The unitarity equation U † U = 1 holds under appropriate assumptions (such as the cancellation of the gauge anomalies at one loop). An easy way to prove the statement is to formulate the gauge sector (identified by the fields φ, B, C andC) as purely virtual, as in [22], because then we know that it does not contribute to the product in between U † and U.
Normally, instead, the fields of the gauge sector are treated as physical fields (because the gauge symmetry ensures that they mutually compensate inside the physical quantities). Then the product between U † and U is a sum over a complete set of states, which includes the gauge non invariant ones. Those states are studied by relaxing the initial and final conditions (6.6) on the gauge sector.
Gauge theories on the semi-infinite cylinder
In this section and the next one we illustrate the general theory in the cases Ω = semiinfinite cylinder and Ω = finite cylinder, concentrating on the frequencies and the eigenfunctions. We have seen that, once we have those, we can proceed straightforwardly. We choose the special gauges ξ = λ,ξ =λ, to simplify the calculations. This allows us to keep one free parameter (λ) in the gauge sector and one (λ) in the purely virtual sector.
We denote the semi-infinite cylinder by Ω = S 1 × [−ℓ, ∞), while r is the radius of the circle S 1 . Using cylindrical coordinates θ, z, we have It is convenient to reach the semi-infinite cylinder from the infinite cylinder (ℓ = ∞). We recall that the Lagrangian is (5.1) and the momenta are (5.4), while the shifted momenta are (5.7). Defining Φ = (φ, B, A 0 , A θ , A z ), we search for eigenfunctions of the form whereΦ 0 denotes a row of constants, while x = z/r, n ∈ Z,p is a rescaled momentum and ω is a rescaled frequency. Inserting (7.1) into the field equations derived from (5.1), the system has solutions when the frequencies arê Two degeneracies are present, since the gauge-dependent (i.e., λ-dependent) frequencieŝ ω g and theλ-dependent frequenciesω d appear twice. Instead, the physical frequency ω ph appears once. The independent solutions for the five components of Φ are ten: five correspond to the particles and five correspond to the antiparticles. We do not write their expressions explicitly. It is sufficient to recall that the most general solution contains 10 arbitrary constants. Now we move to the semi-infinite cylinder. Since the x dependence of the solutions (7.1) is as simple as e ipx , they cannot satisfy the boundary conditions Φ(t, θ, −ℓ) = 0 on Ω = S 1 × [−ℓ, ∞), if they are taken separately. However, if we take linear combinations of functions (7.1) with the sameω, we can impose the conditions Φ(t, θ, −ℓ) = 0 on them. This way, the number of arbitrary coefficients gets reduced to a half. Ultimately, we obtain five independent solutions, or a solution with five arbitrary coefficients.
Omitting the overall factor e inθ e −itω/r and the arbitrary constant in front, the physical solution reads . We see that λ-dependent contributions are present, but they are just pure gauge, since the field strength F = ∂ z A θ − ∂ θ (A z /r) is λ independent. It would be impossible to fulfill the boundary conditions of the semi-infinite cylinder without a pure gauge part.
To study the limit ℓ → ∞, we multiply the solution by factors such as 2e ∓iℓpα , and drop all the oscillating terms when ℓ gets large. The results are which coincide with the physical solutions at ℓ = ∞. If, instead, we multiply by 2e ∓iℓp β and repeat the same procedure, we obtain an ℓ = ∞ pure-gauge solution. The other ℓ = ∞ solutions are obtained in similar ways from the general ℓ < ∞ solution.
We can identify a solution by the integer n, a momentump (e.g.,p α in the example above) and a dispersion relation giving the frequencyω in terms of n andp. When we switch to coherent states, we can label them as The physical solutions correspond to z pĥ pn , and are quantized as physical particles. We can quantize all the other components of Zp n as purely virtual particles. This means that we give them trivial initial and final conditions, and remove the on-shell contributions due to them, inside the diagrams, perturbatively to all orders, with the procedures of [2,12].
The free action is S free = −i dp 2π n∈Z (Zp nf Θp n Ωp n Zp n (t f ) +Zp n (t i )Θp n Ωp n Zp ni ) + t f t i dt dp 2π n∈Z i(Zp n Θp n Ωp nŻpn −Żp n Θp n Ωp n Zp n ) − 2Zp n Θp n Ω 2 pn Zp n , (7.3) where Ωp n is the diagonal matrix of the frequencies, while Θp n is the diagonal matrix of the factors τ n = ±1 of (3.16).
As we have explained in the previous sections, there is a certain liberty in choosing the decomposition (7.2), since the only constraints are that: a) each set is complete for some field Φ (i.e., it can be used to expand the field, in order to functionally integrate over it); b) altogether, the eigenfunctions form a complete set for the fields Φ and the momentaπ Φ ; and c) the eigenfunctions have the right limits for ℓ → ∞.
Note that the solutions of the semi-infinite cylinder contain 5 arbitrary real constants, while those of the infinite cylinder contain twice as many. They are doubled by the sign choices in the multiplying factors e ±iℓpα , e ±iℓp β , etc., which are used for the large ℓ limit.
It may be puzzling that the number of integration variables of the functional integral "doubles" in the limit ℓ → ∞, so to speak. Actually, the number of variables is infinite, so we cannot really say that it doubles. It is convenient to explain what happens in detail, since similar instances are met frequently. Consider the Laplacian on the segment [0, ℓ] with Dirichlet boundary conditions. We have the eigenfunctions sin(πnx/ℓ), n ∈ Z, x ∈ [0, ℓ]. They "double" in the limit ℓ → ∞, because, after centering the segment by means of the shift x = y + (ℓ/2), one has to distinguish the cases n = even and n = odd, which give different eigenfunctions for ℓ → ∞ (sines and cosines, respectively). Similarly, sin(ℓp α ) = cos(ℓ(p α − π/(2ℓ))), so the doubling comes from negligible shifts ofp α , or ω, which give other eigenfunctions with the same dispersion relation for ℓ → ∞.
The experimental data we have today, which mainly concern S matrix amplitudes, are not sufficient to guide us uniquely through the wide freedom we face when τ < ∞ on a compact Ω. Probably, changing the basis of physical and purely virtual frequencies in (7.2) is equivalent to twisting the boundary conditions, or having different interplays between the experimental setup and the physical process. At any rate, once we make our choices of initial, final and boundary conditions, as well as the basis (7.2), everything else is uniquely determined.
Gauge theories on a cylinder
In this section we study gauge theories on a cylinder Ω = S 1 × [−ℓ/2, ℓ/2]. We start again from the parametrization (7.1) for the solutions of the field equations of the infinite cylinder. Then we superpose solutions with the same frequency, and impose the boundary conditions Φ(t, θ, −ℓ/2) = Φ(t, θ, ℓ/2) = 0. We find, as expected, that it is not sufficient to reduce the set of independent coefficients, as it was for the semi-infinite cylinder, but we must also discretize the frequencies.
Specifically, we insert Φ(t, θ, z) =Φ(x)e inθ e −itω/r into the equations, where x = z/r, n ∈ Z, andΦ(x) are linear combinations of We fix the coefficients of the linear combinations by means of the boundary conditions, after determining the frequenciesω that admit nontrivial solutions. TheB equation is independent of the other variables, and just reads where the prime denotes the derivative with respect to x. Moreover,φ does not enter any equation apart from its own, which reads φ ′′ = (n 2 − λω 2 )φ + ∆φ, (8.2) where ∆φ vanishes when all the other fields vanish. Theà 0 equation depends onà 0 and B, while the equations ofà θ andà z depend onà θ ,à z andB. The gauge-dependent frequencies arê They are associated with two eigenfunctions. One is whereẑ = z − (ℓ/2), and the other one is B kn = sin kπẑ ℓ , with nontrivialφ kn ,à 0 kn ,à θkn andà zkn . (8.4) We omit the expressions of the nontrivial fields here, since they are not crucial for our discussion. The solutions (8.4) are the only ones with a nontrivialB. The solutions (8.3) and (8.4) are those which, by gauge invariance, match the eigenfunctions of the ghosts C andC. Let us recall that the gauge transformations are δ Λ φ = Λ = θC and δ ΛC = θB. This means that there must exist Φ eigenfunctions that are made of φ only, and match the C eigenfunctions: these are (8.3). Moreover, there must exist Φ eigenfunctions where B matches theC eigenfunctions: these are (8.4). Said in different words, the coherent states that multiply the solution (8.3) transform into the C coherent states, while theC coherent states transform into the coherent states that multiply the solution (8.4), as explained in the last part of section 5.
and the solutions read
respectively. For the reasons we have explained before, the distinction between the physical frequenciesω ph kn and the purely virtual frequenciesω d′ kn is to some extent arbitrary. Once we have the frequencies and the eigenfunctions, we can proceed as in sections 3, 4, 5 and 6, obtain the coherent-state action (3.43), and work out the evolution operator U(t f , t i ) diagrammatically with the procedure of ref. [1].
Einstein gravity
In this section we study Einstein gravity. The Hilbert-Einstein action contains double derivatives of the metric tensor, so it cannot be used as is to study quantum field theory in a finite interval of time τ on a compact manifold Ω. The well-known "ΓΓ" action does not have this problem, but differs from (9.1) by a boundary term, which must be treated cautiously, in order to preserve general covariance. Moreover, in section 3 we have emphasized that we need an orthodox symmetry. In particular, the Lagrangian must be invariant without adding total derivatives, which is not true for the Hilbert Lagrangian of (9.1). The solution of these problems is as follows. First, we perform the purely virtual extension of ref. [4], at τ = ∞, Ω = R 3 . Then, we switch to the invariant metric tensor and trivialize the symmetry by means of a field redefinition. Third, we add (invariant) total derivatives and switch to the ΓΓ action (built with the invariant metric tensor). Fourth, we restrict to finite τ and compact Ω with the procedure of section 3, introduce the coherent states, and work out the final action (3.43). Having trivialized the symmetry, these operations are invariant.
Next, we introduce the extra vector ζ µ (x), which by definition transforms as 3) The right-hand side of (9.3) must be understood as a perturbative expansion in powers of ζ µ . As usual, the Faddeev-Popov ghosts C µ are introduced by writing ξ µ = θC µ (x), where θ is a constant anticommuting parameter. Using ζ µ , we can build the invariant metric tensor 4) where ζ ρ ,µ ≡ ∂ µ ζ ρ . The field ζ µ must be accompanied by anticommuting partnersH µ and H µ , as well as Lagrange multipliers E µ . The purely virtual extension of [4] requires ζ µ ,H µ , H µ and E µ to be purely virtual. As in the case of Yang-Mills theories, the extension amounts to introducing a certain expression in the functional integral, which is equivalent to "1" on the S matrix scattering amplitudes, and on the correlation functions of ordinary (which means ζ µ -independent) insertions of invariant composite fields. However, it allows us to build new, physical correlation functions, such as those that contain insertions of the invariant metric tensor (9.4).
Inside the functional integral, the purely virtual extension is a correction to the action, which reads (9.5) where g d is the determinant of g µνd , V µ (g, ζ) is an invariant function (δ ξ V µ = 0), andλ is a free parameter. For example, we can take V µ = ∂ ν g µν d , or a mirror of the special gauge. At this point, we make a change of field variables 7 on the total actionS gf +S ext , to switch from g µν , ζ µ , We do not change the other fields. This way, we abandon the original metric tensor g µν in favor of the invariant one, g µνd . Moreover, we trivialize the symmetry, since in the new variables the transformation of ζ µ d is just δζ µ d = ξ µ d ≡ θC µ d , while g µνd , C µ d and H µ d are invariant by construction. The trivialized symmetry thus reads 7 Note some different signs with respect to the notation of ref. [4].
all the other fields being invariant.
by construction. At this point, we eliminate the double derivatives by switching to the ΓΓ action, and restrict to a finite interval of time τ and a compact space manifold Ω: Note that the Lagrangian of this ΓΓ action, which is built with the invariant metric tensor, is manifestly invariant, so it satisfies the identity (3.2). The gauge-fixing sector must be rewritten as well, by adding total derivatives, in order to become invariant at the Lagrangian level. Taking G µ (g) = ∂ ν g µν for definiteness, we write (9.8) where g µν and C µ must be understood as functions of ζ µ d and C µ d , according to the change of variables defined by (9.6), and η µν is the flat-space metric. In (9.8) and (9.9) below, the integral symbol stands for the dtd 3 x integral restricted to the interval τ and the manifold Ω.
The extension (9.5) is rearranged as (9.9) for V µ = ∂ ν g µν d , after which we integrate E µ away, and proceed as in the case of gauge theories.
We have taken G µ (g) = ∂ ν g µν and V µ = ∂ ν g µν d , for concreteness, but it is easy to adapt the formulas to the special gauge and its mirror, or other choices.
The total action is S tot = S gf + S ext and its symmetry is (9.7). At this point, we read the Lagrangian L from S tot , and observe that it is orthodoxically symmetric, as is evident from the expression of (9.8), while the Lagrangian of (9.9) is manifestly invariant. Yet, L contains infinitely many time derivatives, due to the expansion of expressions like (9.3) in powers of ζ µ .
The expansion around flat space is defined by writing g µν d = η µν + 2κh µν d , where κ = √ 8πG and G is Newton's constant. If we make the replacements the perturbative expansion is the expansion in powers of κ.
Equations (9.6) show that ζ µ → κζ µ plus higher order corrections. The Taylor expansions of arguments such as x µ − ζ µ and x µ + ζ µ d inside (9.3), (9.4) and (9.6) raise the powers of κ by one unit for each derivative they generate on the fields. This means that we are in the situation described in appendix B. Applying the construction of section 3, with the rearrangement of appendix B, we build the correct action (3.43) for gravity restricted to a finite interval of time τ , on a compact space manifold Ω. Applying the procedure of [1], we build the evolution operator U(t f , t i ) between arbitrary initial and final states, with arbitrary boundary conditions, preserving general covariance.
Quantum gravity with purely virtual particles
The results of the previous section extend to quantum gravity with purely virtual particles, provided we replace the Hilbert-Einstein action with the appropriate action.
Since coherent states are "enemies" of higher derivatives, as we have learned repeatedly, we cannot adopt the higher-derivative formulation of ref. [9], where the Lagrangian density is made of the Hilbert-Einstein term R, plus the cosmological term, plus the quadratic terms R 2 and R µν R µν . We must start from the two-derivative formulation of ref. [20] at τ = ∞, Ω = R 3 , which we briefly recall here.
Besides the metric tensor g µν , the theory contains a scalar field φ of mass m φ (the inflaton) and a spin-2 purely virtual particle χ µν of a certain mass m χ . The action is where is the Hilbert-Einstein action with a cosmological constant Λ C , is the inflaton action, and g→g+ψ is the χ µν action, with We gauge-fix (10.1) as in (9.2), and make the purely virtual extension as in (9.5). Then we switch from the variables g µν , φ, χ µν , ζ µ , C µ , H µ to the variables g µνd , φ d , χ µνd , ζ µ d , C µ d , H µ d , by means of (9.6) and The action (10.1) is invariant under the change of variables g µν , φ, χ µν → g µνd , φ d , χ µνd , which is just a diffeomorphism. This means that we can simply view (10.1) as a function g µνd , φ d and χ µνd . Next, we add total derivatives to eliminate the terms like ϕ 1d · · · ϕ n−1d ∂∂ϕ dn in favor of terms like ϕ 1d · · · ϕ n−2d ∂ϕ n−1d ∂ϕ nd , in the quadratic sector of the Lagrangian. Moreover, we rearrange the gauge-fixing part as in (9.8) and the purely virtual extension as in (9.9). At that point, we can identify the eigenfunctions and the coherent states. As far as the interaction sector is concerned, we rearrange it as explained in appendix B. Then we use the procedure of section 3 to build the final action (3.43) for the restriction to finite τ and compact Ω. From that point onwards, we can proceed as explained in section 3 and ref. [1], and build the evolution operator U(t f , t i ) between arbitrary initial and final states, with arbitrary boundary conditions.
Unitarity in the presence of a cosmological constant
The cosmological constant Λ C is nonvanishing, because renormalization turns it on anyway, even if we start from a vanishing Λ C . A nonzero Λ C raises some issues that we must address.
First of all, flat space is not a solution of the field equations (with φ = 0, χ µν = 0), so it would be better to formulate perturbation theory by expanding the metric tensor g µν around a de-Sitter or anti-de-Sitter metric, according to the sign of Λ C , rather than the flat-space metric. However, an expansion of that type does not allow an easy switch to energy/momentum space by means of Fourier transforms, and makes the calculations of loop diagrams, and the proofs of general theorems, very hard.
Since the physical results do not depend on the expansion we make, we may insist on using the expansion around flat space, in spite of its non standard features. For example, it generates one-leg vertices and a spurious graviton mass term, which can even be of tachyonic type, depending on the sign of Λ C .
Whatever difficulties the expansion may generate, they are of a spurious nature, which means that they must compensate, and ultimately cancel out. In this spirit, the expansion around flat space is preferable, because its unusual features are simpler to deal with.
The other problem concerns the S matrix: we do not know how to define asymptotic states and S matrix amplitudes on non-flat spacetimes [6]. What about unitarity, then?
Although we cannot claim that the S matrix is unitary in a strict sense, when Λ C = 0, we can still claim that it is unitary up to effects due to the cosmological constant [22]. Those effects are small for all practical purposes: a scattering process should involve wavelengths as large as the universe to be affected by Λ C in a non negligible way.
Besides, now we have a simpler way out. Thanks to the results of this paper and [1], we are less dependent on the paradigms that have dominated the scene since the birth of quantum field theory. In particular, we can study unitarity without being tied to the S matrix, by concentrating on the evolution operator U(t f , t i ).
We have shown that we can build a unitary U(t f , t i ) in a finite interval of time τ = t f −t i , on a compact space manifold Ω, with arbitrary initial and final states, and arbitrary boundary conditions. The goal has been achieved both in Einstein gravity (which is not renormalizable, but this does not jeopardize its perturbative unitarity) and in quantum gravity with purely virtual particles (which is renormalizable). In the first case the cosmological constant can be added with no difficulty, and U(t f , t i ) remains well-defined and unitary for every τ < ∞. In the second case, the cosmological constant is already present by default.
This means that the cosmological constant does not have a problem with unitarity. It does have problems with the very notions of S matrix and asymptotic states. Given that the difficulties only appear in the τ → ∞ limit, the τ < ∞ formalism we have developed here might suggest new ways to investigate asymptotic states in gravity with a cosmological constant.
Conclusions
When we study gauge theories and gravity on a compact manifold, possibly with boundary, and on a finite interval of time, we face the nontrivial task of formulating the initial, final and boundary conditions in invariant ways. The ordinary gauge potential A µ and the metric tensor g µν are not straightforward to handle, in this respect. Nor are the field strength F µνa , in non-Abelian gauge theories, or the curvature tensors R, R µν , R µνρσ , in gravity, because none of them is invariant.
The purely virtual extensions of gauge theories and gravity formulated in ref.s [3,4] come to the rescue, because they allow us to define invariant matter and gauge fields ψ d and A µ d , and an invariant metric tensor g µνd , without changing the ordinary physical quantities, such as the S matrix amplitudes and the correlation functions of nonlinear invariant composite fields, like F a µν F µνa ,ψψ, etc. Yet, they allow us to study new correlation functions, like those of the invariant fields ψ d , A µ d and g µνd . They also provide a way of formulating invariant initial, final and boundary conditions in gauge theories and gravity on a compact manifold Ω, in a finite interval of time τ .
Switching to the invariant variables ψ d , A µ d and g µνd , it is also possible to "trivialize" the symmetries. Then it is relatively straightforward to organize the action properly, and work out the eigenfunctions and the frequencies for the expansions of the fields. The functional integral is defined as the integral on the coefficients of those expansions. Coherent states are introduced, and the evolution operator U(t f , t i ) is worked out between arbitrary initial and final states. The formalism we have developed allows us to calculate U(t f , t i ) diagrammatically, and perturbatively, for arbitrary boundary conditions on ∂Ω. In all the operations we make, the local symmetries are under control, so U(t f , t i ) is gauge invariant and invariant under general coordinate transformations.
We have illustrated the basic properties of the formalism in Yang-Mills theory on two relatively simple manifolds: the semi-infinite cylinder and the cylinder.
The limit τ → ∞, Ω → R 3 (which would give the usual S matrix) is only regular when the cosmological constant Λ C vanishes, due to the problems related to the definitions of asymptotic states and S matrix amplitudes at Λ C = 0. Yet, such problems are not problems of unitarity per se, because the evolution operator U(t f , t i ) of quantum gravity is unitary for every τ < ∞.
It might be impossible to test the S matrix predictions for a long time, in quantum gravity. Hopefully, working with U(t f , t i ) at finite τ on a compact Ω can allow us to explore more options, and figure out experimental setups that could amplify tiny effects like those of quantum gravity till they become detectable.
B Higher-derivative interactions
In this appendix we extend the results of section 3 to interaction Lagrangians that contain arbitrarily many derivatives of the fields, as long as their number grows together with the power of some coupling. This part is only needed for gravity. We show that we can rearrange the Lagrangian L ′ so as to finally have an action with the form and the properties of (3.43).
We assume that the Lagrangian L(φ,φ) is decomposed as (3.1), that the symmetry is orthodox and linear, that the quadratic sector L free (φ,φ) has the same structure as in section 3 (no more than one derivative on each field, no more that two derivatives in each term), but we allow L int (φ,φ) to contain arbitrary monomials ∂ m 1 φ 1 · · · ∂ mn φ n of the fields, differentiated an arbitrary number of times. For definiteness, we assume that L int (φ,φ) is proportional to some coupling λ, which we use to trace the interaction terms. We write them as O(λ), or O(λ n ), n > 1, when we mean higher orders.
We proceed as in section 3 up to the integrated Lagrangian L ′ , expressed in terms of coherent states. This means that: we make the shift (3.34) with the conditions (3.44); then we work out the momentaπ I ϕ , make the redefinition (3.42), and expandπ I ϕ , ϕ I in coherent states. We obtain the same quadratic part we had before, then the linear terms due to ∆L ′ ϕ , plus interactions L ′ int (z,z) = O(λ). Before the expansion in coherent states, we have a wide freedom. For example, we can change the interaction sector of the Lagrangian by adding gauge invariant total space derivatives. After the switch fromπ I ϕ , ϕ I to coherent states, these corrections give legit vertices. Moreover, the switch takes full care of the space sector, so we do not need to worry about the space derivatives any longer. What we have to do, instead, is rearrange the interaction part L ′ int (z,z), to remove the time derivatives of z andz, which are still there, and can be arbitrarily many. We achieve this goal by adding (gauge invariant) total time derivatives to L ′ int . We can arrange L ′ (z,z) into a sum L ′ (z,z) = L ′ free (z,z) + L ′ int0 (z,z) + L ′ intder (z,z), (B.1) where L ′ free (z,z) includes the quadratic terms, as well as the linear terms due to ∆L ′ ϕ , L ′ int0 (z,z) = O(λ) is free of time derivatives, while L ′ intder (z,z) = O(λ) vanishes when all the time derivatives are set to zero.
We also assume that the each term of L ′ intder has a power of λ that is equal to the number of its time derivatives, at least. We remove L ′ intder iteratively by means of field redefinitions and dropping gauge invariant total derivatives, without affecting the symmetry and the other properties of the Lagrangian L ′ . We proceed by induction. We assume that L ′ intder has N powers of λ more than one for each time derivative, and write L ′ intder = O(λ N )O(λ∂ t ) to mean this. We give a procedure to rearrange the Lagrangian so that the new L ′ intder is O(λ N +1 )O(λ∂ t ). Since we are able to do so for arbitrary N, starting from N = 0, we remove L ′ intder entirely. Replacing the functional derivatives of (A.1) with ordinary derivatives, we can write the operator ∆ as ∆ ≡ ∞ j=0 a n∈Û v aT jn σ 1 ∂ l ∂u a jn + c.c., (B.2) where u a jn and v a jn denote the j-th time derivatives of u a n and v a n , respectively. We know that L ′ intder must be gauge invariant by itself (∆L ′ intder = 0), since ∆ does not mix derivatives and orders of the interactions. Using theorem (3.25), we can write where X 0 is a function that depends only on w α jn andw α jn (the j-th time derivatives of w α n andw α n ), and Y is another function. Since every term L ′ intder must contain time derivatives, X 0 has the form X 0 = j>0 α n∈Û w α jn X αj n + c.c., for certain ∆ invariant functions X αj n , and their conjugates. We can write X 0 = α n∈Ûẇ α n X α n + X tder 0 + c.c., where X α n are other ∆ invariant functions, and X tder 0 are gauge invariant total derivatives. As part of the rearrangement to get to the correct final action, we drop X tder 0 . Now we consider Y . Since it must contain time derivatives, its form is where ǫ α n is the statistics of w α n . Note that X α n , Y α n andỸ a n+ are O(λ N +1 )O(1). At this point, we removeX by means of the field redefinitions, w α n →w α n − (−1) ǫ α n X α n + ∆Y α n 2τ α n iω α n ,ū a n →ū a n − σ 1Ỹ a n+ 2τ a n iω a n ,v a n →v a n − ∆Ỹ a n+ 2τ a n iω a n , (B.4) and their conjugates. We show that this operation replaces L ′ intder with higher-order derivative interactions O(λ N +1 )O(λ∂ t ), and preserves the key properties of L ′ free and L ′ int0 . When we apply the redefinition (B.4) to L ′ intder we obtain O(λ N +1 )O(λ∂ t ) at least, which go into the new L ′ intder . When we apply (B.4) to L ′ free minus the universal kinetic terms (3.31) and (3.32), we obtain: a) interaction terms with no derivatives, which go into the new L ′ int0 ; plus b) O(λ N +1 )O(λ∂ t ), which go into the new L ′ intder . The same occurs when we apply (B.4) to L ′ int0 . It remains to apply the redefinition (B.4) to (3.31) and (3.32). The second orders of the Taylor expansions give O(λ N +1 )O(λ∂ t ). So, it is sufficient to focus on the first orders of the Taylor expansions.
The first term subtracts the first one of (B.3). The rest is a gauge invariant total derivative, which we remove. From (3.32) we get the correction which cancels the rest of (B.3), plus gauge invariant total derivatives. In the end, we remain with a L ′ intder that is O(λ N +1 )O(λ∂ t ). That is to say, we have raised its λ power by one unit. Iterating in N, we can make L ′ intder disappear entirely. Summarizing, the effects of the iterated redefinitions (B.4), the rearrangements and the droppings of gauge invariant total derivatives in the interaction sector are: 1) they cancel the term L ′ intder ; 2) they do not affect the symmetry transformations (3.28); this is evident from (B.4), using (A.2); 3) they do not affect the universal kinetic terms (3.31) and (3.32); 4) they do not affect L ′ free ; 5) they do not change the structure of L ′ int0 ; 6) they leave the Lagrangian L ′ orthodoxically symmetric. At the end, we have the correct L ′ : L ′ (z,z) = L ′ free (z,z) + L ′ int0 (z,z). (B.5) Note that point 6) is tautologically true now: a gauge invariant Lagrangian of the form (B.5) is necessarily orthodoxically invariant, if the symmetry is linear, since the universal kinetic terms are invariant by themselves, and the rest does not contain time derivatives. The field redefinitions (B.4) are perturbative, so their Jacobian determinant is trivial, if we use the analytic or dimensional regularization techniques [21].
To get to the action (3.23), we integrate on time, add the usual endpoint corrections, as in (3.23) and (3.43), and we are done. | 22,205.4 | 2023-06-12T00:00:00.000 | [
"Mathematics",
"Physics"
] |
Effects of pay-for-performance based antimicrobial stewardship on antimicrobial consumption and expenditure: An interrupted time series analysis
Objectives To evaluate the impact of pay-for-performance on antimicrobial consumption and antimicrobial expenditure in a large teaching hospital in Guangzhou, China. Methods We collected data from hospital information system from January 2018 through September 2022 in the inpatient wards. Antimicrobial consumption was evaluated using antibiotic use density (AUD) and antibiotic use rate (AUR). The economic impact of intervention was assessed by antimicrobial expenditure percentage. The data was analyzed using interrupted time series (ITS) analysis. Results Following the implementation of the intervention, immediate decreases in the level of AUD were observed in Department of Hematology Unit 3 (β = −66.93 DDDs/100PD, P = 0.002), Urology (β = −32.80 DDDs/100PD, P < 0.001), Gastrointestinal Surgery Unit 3 (β = −11.44 DDDs/100PD, P = 0.03), Cardiac Surgery (β = −14.30 DDDs/100PD, P = 0.01), ICU, Unit 2 (β = −81.91 DDDs/100PD, P = 0.02) and Cardiothoracic Surgery ICU (β = −41.52 DDDs/100PD, P = 0.05). Long-term downward trends in AUD were also identified in Organ Transplant Unit (β = −1.64 DDDs/100PD, P = 0.02). However, only Urology (β = −6.56 DDDs/100PD, P = 0.02) and Gastrointestinal Surgery Unit 3 (β = −8.50 %, P = 0.01) showed an immediate decrease in AUR, and long-term downward trends in AUR were observed in Pediatric ICU (β = −1.88 %, P = 0.05) and ICU Unit 1 (β = −0.55 %, P = 0.02). Conclusion This study demonstrates that the adoption of pay-for-performance effectively reduces antibiotic consumption in specific departments of a hospital in Guangzhou in the short term. However, it is important to recognize that the long-term impact of such interventions is often limited. Additionally, it should be noted that the overall effectiveness of the intervention across the entire hospital was not significant.
Introduction
The discovery of antibiotics stands out as one of the most significant medical advancements of the 20th century [1].Regrettably, the overuse of antibiotics has hastened the rise of antimicrobial resistance (AMR).This is a severe public health issue and a worldwide menace, particularly in light of COVID-19.According to estimates, drug-resistant infections will cause approximately ten million deaths annually by 2050 [1].Numerous recent reports have highlighted a surge in multidrug-resistant organisms during the COVID-19 pandemic [2][3][4][5][6][7][8].Additionally, AMR also presents a significant issue in Guangdong, China [9].As per the Status Report on Antimicrobial Administration and Antimicrobial Resistance in China (2022), Guangdong province has a high rate of antibiotic use (AUR) among tertiary comprehensive inpatients in 2021, as well as a high density of antibiotic use (AUD) in core data hospitals in 2021.It is crucial to address this issue, particularly in light of COVID-19, by implementing antimicrobial stewardship (AMS) measures.
The Infectious Diseases Society of America (IDSA) stated in 2012 that AMS encompasses coordinated interventions intended to enhance and assess the judicious use of antimicrobial agents.This is achieved by promoting the selection of an optimal antimicrobial drug regimen, which includes appropriate dosing, duration of therapy, and administration route [10].
Similarly, the Chinese Ministry of Health launched a long-term national AMS campaign in 2011 [11].The campaign protocol mainly comprised setting targets for antimicrobial management, implementing educational program and prescription audit, establishing financial incentive, and the Chinese Ministry of Health requiring local health authorities to formulate interventions based on local conditions.Furthermore, the AMS policy of China was implemented in 2012, and the guidelines for clinical application of antimicrobials were updated in 2015 [12].Many previous studies have shown that some of the measures in this AMS policy have been associated with appropriate antibiotic use, reduced prevalence of antibiotic-resistant pathogens, and improved clinical outcomes [13][14][15][16][17].
Generally, numerous published studies have shown that implementing AMS can considerably decrease antimicrobial consumption, costs, and adverse drug events [18][19][20][21], including the implementation of financial incentives, pay-for-performance and penalties [22][23][24][25][26][27][28].However, such studies have mainly been conducted in primary health care and rarely in tertiary hospitals.As a result, we initiated an AMS with pay-for-performance at a tertiary hospital to investigate its effectiveness.
This study aimed to use the interrupted time series (ITS) analysis, the strongest and quasi-experimental approach for evaluating longitudinal effects of intervention [29], to evaluate the impact of the AMS with pay-for-performance in a large teaching hospital in Guangzhou on antimicrobial consumption providing constructive policy suggestions for future AMS in China.We hypothesized that this intervention may help to optimize the use of antibiotics.
Setting and study design
This study was carried out at an academic teaching hospital with 3956 beds located in Guangzhou, China.Antimicrobial consumption was a crucial target for the AMS, and data were extracted from the hospital information system and analyzed for this purpose.Specifically, monthly antibiotic use density (AUD) and antibiotic use rate (AUR) data were collected from each inpatient department between January 2018 and September 2022.Additionally, monthly antibiotic expenditure (AE) data for the entire hospital and total medication expenditure (ME) data for the entire hospital were collected from January 2018 to June 2022.
Antimicrobial stewardship
The pay-for-performance based AMS was officially implemented in April 2021 and distributed to all departments in the form of documents.Bonus payment delivered to departments for antimicrobial consumption not beyond a specified threshold and penalties imposed on departments for antimicrobial consumption beyond a specified threshold.And the supplementary materials (Table 1)
Table 1
The inpatient departments analyzed.provide those specified thresholds for each clinical department and outline the specific implementation details of pay-for performance.Feedback on antimicrobial consumption indicators was provided in the following two forms: (1) in the form of envelopes, this feedback was given once a month; (2) Remind departments with serious use of antibiotics to carry out rectification at the Pharmaceutical Affairs Committee (all middle-level and above cadres of the hospital participated); The Pharmaceutical Affairs Committee is held quarterly.In addition, hospitals implemented multiple soft AMS, including education and training initiatives, throughout the study period, both before and after the intervention.However, it is important to note that the implementation of these soft measures was not altered in any way by the introduction of the pay-for-performance.The objective of these initiatives is to not only guide clinicians in the rational use of antimicrobials but also to establish a basis for implementing stricter measures, such as pay-for-performance.By combining soft measures with hard measures, the goal is to foster a culture of responsible antibiotic use within the healthcare profession.These combined efforts are designed to address the issue comprehensively and promote the appropriate utilization of antibiotics.
Ethical consideration
The study did not require ethical approval because the patient's privacy was not violated in the study.And it did not include any interventions that required the use of human subjects.
Study outcomes
In this study, we selected three metrics to evaluate inpatient antimicrobial consumption: AUD, AUR, and the percentage of antibiotic cost (AE/ME).
In this study, the three metrics and calculation methods are as follows: AUD (DDDs / 100PD) = cumulative DDDs number of days of patients admitted during the same period * 100 AUR (%) = number of discharged patients used antibiotics number of discharged patients during the same period * 100 Percentage of antimicrobial expenditure (%) = antibiotic expenditure (AE) medication expenditure (ME)during the same period * 100
Analysis of department selection
Because of the large number of departments in the hospital, we selected representative departments for analysis based on data availability and the setting of penalty thresholds of AUD greater than 40 DDDs/100PD, which aligns with the global average AUD [30].This is also in line with the guidelines outlined in the "Notice on Continuing to do a good job in the management of Clinical application of antibacterial Agents" (No. 8 [2020] of the State Health Office) which states that the AUD in inpatients in general tertiary hospitals should be controlled under 40 DDDs/100PD.By doing so, we aimed to assess the impact of the intervention in departments that pull up the AUD of hospital-wide, and in which departments the results of the intervention would be more pronounced.The screening results can be found in Table 1.In addition, Departments that could not be analyzed due to missing values included: Department of Hematology, Unit 2, Gastrointestinal Surgery, Unit 1 and Emergency ICU.
Statistical analysis
We will provide a brief description of the ITS analysis.Segmented linear regression (SLR) was used to conduct the analysis, and the formula for the model is as follows: Where Y t represents the main outcome indicator, time is a continuous variable ranging from 1 to 57 in this study, intervention is a dummy variable assigned a value of 0 before the intervention and 1 after the intervention, post-time is a variable that calculates the number of months after the intervention (with a value of 0 before the intervention and ranging from 0 to 17 after the intervention), and ε is a random variable; COVID-19 is a dummy variable indicating the pre-COVID-19 period (April 2019 to December 2019, coded 0), COVID-19 period (January 2020 to March 2020, coded 1), and post-COVID-19 (coded 2) [31]; β 0 represents the baseline level of the outcome when time = 0, β 1 represents the baseline trend before intervention, β 2 represents the change in level following the intervention, β 3 represents the change in trend following the intervention, β 41 is the level change during the COVID-19 period, and β 42 is the level change post-COVID-19.The final trend of the outcome after intervention is represented by the sum of β 1 and β 3 .If necessary, we used harmonic terms specifying two sine and cosine pairs to adjust for seasonality.The Durbin Watson (DW) method was employed to detect the presence of first-order auto-correlation in the time series.In case such correlation was detected, we used the feasible generalized least square (FLGS) method to adjust for the first-order auto-correlation errors, implemented with the Cochrane-Orcutt estimation.
Statistical analysis was conducted using R 4.2.2 (Vienna, Austria), and all tests were two-sided with significance determined at a P- value of less than 0.05.
Overall changes in antimicrobial consumption in hospital-wide inpatient departments due to AMS
Table 2 shows that there was a significant reduction in average AE/ME after the implementation of the intervention, decreasing from 11.02 % to 2.42 % (P < 0.001).However, AUD remained relatively stable (from 45.89 DDDs/100PD to 44.99 DDDs/100PD) compared to the pre-intervention period (P = 0.38).Similarly, no significant increase was observed in AUR after the intervention (increasing from 38.64 % to 39.17 %, P = 0.24).
ITS analysis of total hospital
As Table 3 and Fig. 1B showed, AUR decreased at a rate of 0.04 % (P = 0.37) per month before the intervention.At the beginning of the intervention, AUR decreased to a level of 1.04 % (P = 0.20).After that, the downward trend turned into an upward trend at a rate of 0.20 % (P = 0.004) per month.But, the changes of AUD and AE/ME due to intervention were not statistically significant (Table 3).
ITS analysis of Internal Medicine system AUD
Table 4 and Fig. 2 demonstrate the results of the ITS analysis, which are as follows: (1) Department of Hematology, Unit 1 (Fig. 2A): Before implementing the intervention, AUD decreased at a rate of 1.99 DDDs/100PD (P < 0.001) per month.However, an immediate increase of 27.35 DDDs/100PD (P < 0.001) was observed at the beginning of the intervention.After the intervention, the decline trend slowed down, with a rate of 1.54 DDDs/100PD (P = 0.33) per month.(2) Department of Hematology, Unit 3 (Fig. 2B): During the preintervention period, AUD decreased at a rate of 1.18 DDDs/100PD (P = 0.25) per month.Then, an immediate and substantial decline of 66.93 DDDs/100PD (P = 0.002) was observed in the first month of the intervention.After that, the declining trend appeared to change minimally, at a rate of 0.53 DDDs/100PD (P = 0.74) per month.(3) Emergency Ward (Fig. 2C): Before the intervention, AUD increased at a rate of 0.43 DDDs/100PD (P = 0.71) per month.However, at the beginning of the intervention, AUD continued to increase, reaching a level of 57.00 DDDs/100PD (P = 0.01) immediately.After that, the increasing trend turned into a decreasing trend, declining at a rate of 3.70 DDDs/100PD (P = 0.06) per month.Additionally, the changes in other departments of the Internal Medicine system due to the intervention were not statistically significant.A detailed summary of these results can be found in the supplementary materials (Table S2 and Fig. S1).
ITS analysis of surgical system AUD
In Table 4 and Fig. 3, the outcomes are presented.(1) Urology (Fig. 3A): During the pre-intervention period, AUD increased at a rate of 0.40 DDDs/100PD (P = 0.02) per month.However, after introducing the intervention, there was an immediate and substantial decrease of 32.80 DDDs/100PD (P < 0.001).After the intervention, although not statistically significant, the increasing trend appeared to decrease slightly at a rate of 0.21 DDDs/100PD (P = 0.55) per month.(2) Gastrointestinal Surgery, Unit 3 (Fig. 3B): Before the intervention, no significant increase in AUD was observed at a rate of 0.29 DDDs/100PD (P = 0.27) per month.However, in the first month of the intervention, AUD immediately decreased by 11.44 DDDs/100PD (P = 0.03).After the intervention, the increasing trend accelerated to a rate of 1.68 DDDs/100PD (P = 0.007) per month.(3) Cardiac Surgery (Fig. 3C): AUD increased at a rate of 0.48 DDDs/ 100PD (P = 0.08) per month before the intervention.As expected, there was an immediate decrease in AUD by 14.30 DDDs/100PD (P = 0.01) in the first month of the intervention.After the intervention, the increasing trend appears to have slowed, increasing by 0.40 DDDs/100PD (P = 0.87) per month.(4) Organ Transplant Unit (Fig. 3D): Before implementing the intervention, AUD increased at a rate of 1.42 DDDs/100PD (P < 0.001) per month.At the onset of the intervention, AUD immediately declined to a level of 10.53 DDDs/ 100PD (P = 0.17).Following this, the increasing trend turned into a decreasing trend at a rate of 1.64 DDDs/100PD (P = 0.02) per month.Additionally, the supplementary materials (Table S2 and Fig. S2) contain the results of other departments within the Surgery system.
ITS analysis of ICU system AUD
As displayed in Table 4 and Fig. 4. (1) Pediatric ICU (Fig. 4A): AUD decreased at a rate of 3.41 DDDs/100PD (P < 0.001) per month before the intervention was implemented.Then an immediate rise of 14.75 DDDs/100PD (P = 0.42) was observed at the beginning month of the intervention.After the intervention, the downward trend turned into an upward trend at a rate of 0.82 DDDs/100PD (P = 0.02).(2) Neurology ICU (Fig. 4B): AUD increased at a rate of 0.13 DDDs/100PD (P = 0.91) per month before the intervention was implemented.Then an immediate rise of 45.22 DDDs/100PD (P = 0.05) was observed at the beginning month of the intervention.After the intervention, the upward trend appeared to turn into a downward trend at a rate of 2.42 DDDs/100PD (P = 0.24) per month.(3) ICU, Unit 2 (Fig. 4C): An increasing trend at a rate of 1.80 DDDs/100PD (P = 0.29) per month was observed in the pre-intervention period.Then AUD instantly decreased by 81.91 DDDs/100PD (P = 0.02) in the first month of intervention.After that, the increasing trend seemed to accelerate slightly to a rate of 3.09 DDDs/100PD (P = 0.69) per month.(4) Cardiothoracic Surgery ICU (Fig. 4D): An increasing trend at a rate of 2.51 DDDs/100PD (P = 0.02) per month was observed in the pre-intervention period.However, AUD instantly decreased by 41.52 DDDs/100PD (P = 0.05) in the first month of intervention.After that, the increasing trend seemed to accelerate slightly to a rate of 3.18 DDDs/100PD (P = 0.73) per month.Furthermore, the results of other departments of ICU system can be found in the supplementary materials (Table S2 and Fig. S3).
ITS analysis of Internal Medicine system AUR
As shown in Table 5 and Fig. 5. (1) Department of Pediatrics, Unit 2 (Fig. 5A): AUR decreased at a rate of 0.33 % (P = 0.03) per month before the intervention.Then an immediate increase of 8.44 % (P = 0.006) was observed in the first month of the intervention.After that, the decreasing trend seemed to accelerate slightly to a rate of 0.327 % (P = 0.99) per month.(2) Emergency Ward (Fig. 5B): a gradually decrease in AUR at a rate of 0.50 % (P = 0.22) per month was observed before the intervention was implemented.However, at the beginning of the intervention, AUR increased to a level of 18.42 % (P = 0.02) immediately.After that, the downward trend seemed to accelerate slightly to a rate of 0.36 % (P = 0.85) per month.ITS analysis results of other departments of Internal Medicine system can be found in the supplementary materials (Table S3 and Fig. S4).
ITS analysis of surgical system AUR
Table 5 and Fig. 6 illustrates the results.( 1) Urology (Fig. 6A): a increasing trend at a rate of 0.13 % (P = 0.33) per month of AUR was observed during the pre-intervention.Then an immediate decrease to a level of 6.56 % (P = 0.02) was observed at the beginning of the intervention.After that, the upward trend seemed to accelerate slightly to a rate of 0.17 % (P = 0.86) per month.(2) Gastrointestinal Surgery, Unit 3 (Fig. 6B): AUR gradually decreased at a rate of 0.01 % (P = 0.95) per month during the pre-intervention period.At the beginning of the intervention, AUR dropped greatly by 8.50 % immediately (P = 0.01).But after that, the decreasing trend significantly turned to upward trend at a rate of 0.83 % (P = 0.008) per month.( 3) Microsurgery (Fig. 6C): AUR gradually increased at a rate of 0.33 % (P = 0.10) per month before the intervention.Then an immediate increase to a level of 8.01 % (P = 0.05) was observed at the beginning of the intervention.After that, the upward trend seemed to accelerate slightly to a rate of 0.40 % (P = 0.85) per month.
Other surgery system departments' ITS results can be found in the supplementary materials (Table S3 and Fig. S5).
ITS analysis of ICU system AUR
As shown in Table 5 and Fig. 7. (1) Pediatric ICU: AUR seemed to gradually increase at a rate of 0.06 % (P = 0.92) per month before the intervention.Then an immediate drop of AUR to a level of 8.73 % (P = 0.32) was observed in the first month of the intervention.After that, the upward trend turned to downward trend at a rate of 1.83 % (P = 0.05) per month.Other ICU system departments' ITS results can be found in the supplementary materials (Table S3 and Fig. S6).
Discussion
In general, our findings indicate a significant short-term effect of pay-for-performance based AMS in reducing antimicrobial consumption across many clinical departments within a large teaching hospital in Guangzhou, China.However, we only observe a lasting impact of this intervention on reducing antimicrobial consumption in a few departments.It is important to highlight that our study did not identify a significant impact of the intervention on reducing the AUD and AE/ME at the hospital.And surprisingly, the intervention seemed to contribute to an upward trend in AUR throughout the hospital.
We noted a significant reduction in the percentage of hospital-wide antibiotic cost (AE/ME) in pre-post-comparisons. Interestingly, however, this significant reduction was not present in the ITS analysis results.Perhaps we can interpret this phenomenon based on the intervention itself.The intervention solely established thresholds for AUD and AUR without making any assertions regarding the cost of antimicrobials.Different from our study, Qian et al. reported that AMS which included setting a specific target for antimicrobial use was associated with the long-term trend of AE/ME and slowed the pre-intervention decline trend in AE/ME in inpatients [32].We believe that this difference may be associated with the relatively single interventions in our study.In the study conducted by Qian et al. they implemented several interventions, which encompassed not only rewards and punishments but also the establishment of an antibiotic control system.This system was then integrated into the existing hospital information system.
Although our study found that the intervention had no significant effect on the overall AUD in the hospital, and was even associated with an upward trend in the overall AUR in the hospital, the analysis of specific key departments showed that the intervention led to a decline in antimicrobial consumption in some departments.Given the overuse of antibiotics and the resistance associated with overuse in China, these results are encouraging.
For certain departments, the intervention initially resulted in a notable and immediate decrease in antimicrobial consumption.In reality, research in behavioral economics and public health has demonstrated that financial incentives and penalties are generally considered robust tools for altering provider behaviors [33].However, the long-term impact of intervention did not show significant results or the decline in antimicrobial consumption gradually slowed down during the post-intervention period.This was observed in departments such as Department of Hematology, Unit 3, Gastrointestinal Surgery, Unit 3, and so on.To our knowledge, this may be mainly attributed to reducing power at different stages.Initially, the high antimicrobial consumption at baseline provided ample room for reduction.When antimicrobial consumption reaches low levels, there is little room for further reductions.Similar findings were also observed in other AMS reports.For example, a study found no significant change in antimicrobial consumption in a Tertiary Women's and Children's Hospital following AMS because of the low base levels of antimicrobial consumption in the institution, even though the primary AMS strategy employed in the study was a prospective audit and feedback approach [34].
In contrast to the significant impact of the intervention on a number of departments in the short term, the majority of departments in the hospital were relatively unaffected (statistically insignificant) by the intervention.We believe that one possible reason is that department leaders value performance in other areas, such as surgery.And they don't care about fines for substandard antimicrobial indicators.
Overall, our findings are generally consistent with previous studies that have demonstrated this intervention's effectiveness in reducing antimicrobial consumption.Gong, S et al. have reported that a significant decline in antibiotic use and corresponding expenditure in both ambulatory and inpatient clinical settings after the inclusion of bonus to an AMS based on prior authorization alone [35].Borek et al. reported that bonus can optimize antibiotic prescribing in primary care general practices throughout England [26].Balinskaite et al. demonstrated that the introduction of bonus for local healthcare commissioners was associated with a significant reduction in both total and broad-spectrum antibiotic prescribing in primary care throughout England [27].Martens, J. D et al. also reported that behavior independent bonus can be a help in changing prescription behavior of general practitioners, and effects are small-scale and temporary [28].
The preliminary results of our study are very promising and motivate us to continue our efforts to reduce AUR and AUD.In addition, given the short-term impact of our findings on antimicrobial consumption, we recommend implementing such hard measures in departments where there is considerable room for improvement in antimicrobial use, such as those with high baseline antimicrobial use.In such cases, such hard measures are more likely to help reduce antimicrobial consumption.
Several limitations associated with this study must be acknowledged.Firstly, it is a retrospective study conducted at a single center without a control group.Therefore, our findings may not be directly generalizable to other settings.Although we applied ITS analysis to minimize internal validity threats, we cannot ensure that the pay-for-performance based AMS were solely responsible for the changes reported in our findings.Therefore, A prospective multi-center study with a control group is necessary to generalize the utility of the intervention in the future.Secondly, we only examined changes in AUD, AUR, and economic indicators.However, these indicators do not directly reflect the appropriateness of antibiotic prescribing.Finally, due to the lack of case data for individual patients, we were unable to analyze clinical outcomes after re-intervention or control for other factors that might affect changes in antibiotic use.Despite these limitations, this study can still serve as a reference for further research aimed at addressing these limitations.
Conclusion
The pay-for-performance based AMS proved effective in reducing antibiotic consumption in certain departments of a large teaching hospital in Guangzhou, particularly in the short-term period.Future studies will be necessary to verify its effectiveness, identify areas for improvement, and establish evidence on the causal mechanisms that incentivize doctors' prescribing patterns for antibiotics.Furthermore, our study serves as a useful reference for other hospitals looking to implement similar antimicrobial stewardship programs, including in which departments and the specific details of implementation.This strategy is simple, economical, and feasible, and its replication in other healthcare settings could prove beneficial in addressing the issue of antibiotic resistance.
Fig. 1 .
Fig. 1. Results of the ITS analysis of total hospital antibiotic use density (A), antibiotic use rate (B) and antibiotic expenditure/medication expenditure (C).
Fig. 2 .
Fig. 2. Results of the ITS analysis of Internal Medicine system antibiotic use density.A: Department of Hematology, Unit 1; B: Department of Hematology, Unit 3;C: Emergency Ward.
Fig. 3 .
Fig. 3. Results of the ITS analysis of Surgical system antibiotic use density.A: Urology; B: Gastrointestinal Surgery, Unit 3; C: Cardiac Surgery; D: Organ Transplant Unit.
Fig. 4 .
Fig. 4. Results of the ITS analysis of ICU System antibiotic use density.A: Pediatric ICU; B: Neurology ICU; C: ICU, Unit 2; D: Cardiothoracic Surgery ICU.
Fig. 5 .
Fig. 5. Results of the ITS analysis of Internal Medicine system antibiotic use rate.A: Department of Pediatrics, Unit 2; B: Emergency Ward.
Fig. 6 .
Fig. 6. Results of the ITS analysis of Surgical system antibiotic use rate.A: Urology; B: Gastrointestinal Surgery, Unit 3; C: Microsurgery.
Fig. 7 .
Fig. 7. Results of the ITS analysis of ICU system antibiotic use rate.
Table 2
Overall changes of total hospital.
Table 3
Results of the ITS analysis of total hospital.
Table 4
Results of the ITS analysis of AUD.
Table 5
Results of the ITS analysis of AUR. | 5,966.8 | 2024-06-01T00:00:00.000 | [
"Medicine",
"Economics"
] |
Novel applications of Convolutional Neural Networks in the age of Transformers
Convolutional Neural Networks (CNNs) have been central to the Deep Learning revolution and played a key role in initiating the new age of Artificial Intelligence. However, in recent years newer architectures such as Transformers have dominated both research and practical applications. While CNNs still play critical roles in many of the newer developments such as Generative AI, they are far from being thoroughly understood and utilised to their full potential. Here we show that CNNs can recognise patterns in images with scattered pixels and can be used to analyse complex datasets by transforming them into pseudo images with minimal processing for any high dimensional dataset, representing a more general approach to the application of CNNs to datasets such as in molecular biology, text, and speech. We introduce a pipeline called DeepMapper, which allows analysis of very high dimensional datasets without intermediate filtering and dimension reduction, thus preserving the full texture of the data, enabling detection of small variations normally deemed ‘noise’. We demonstrate that DeepMapper can identify very small perturbations in large datasets with mostly random variables, and that it is superior in speed and on par in accuracy to prior work in processing large datasets with large numbers of features.
Novel applications of Convolutional
CNNs still play critical roles in many of the newer developments such as Generative AI, they are far from being thoroughly understood and utilised to their full potential.Here we show that CNNs can recognise patterns in images with scattered pixels and can be used to analyse complex datasets by transforming them into pseudo images with minimal processing for any high dimensional dataset, representing a more general approach to the application of CNNs to datasets such as in molecular biology, text, and speech.We introduce a pipeline called DeepMapper, which allows analysis of very high dimensional datasets without intermediate filtering and dimension reduction, thus preserving the full texture of the data, enabling detection of small variations normally deemed 'noise'.We demonstrate that DeepMapper can identify very small perturbations in large datasets with mostly random variables, and that it is superior in speed and on par in accuracy to prior work in processing large datasets with large numbers of features.
There are exponential increases in data 1 especially from highly complex systems, whose non-linear interactions and relationships are not well understood, and which can display major or unexpected changes in response to small perturbations, known as the 'Butterfly effect' 2 .
In domains characterised by high-dimensional data, traditional statistical methods and Machine Learning (ML) techniques make heavy use of feature engineering that incorporates extensive filtering, selection of highly variable parameters, and dimension reduction techniques such as Principal Component Analysis (PCA) 3 .Most current tools filter out smaller changes in data, mostly considered artefacts or `noise`, which may contain information that is paramount to understanding the nature and behaviour of such highly complex systems 4 .
The emergence of Deep Learning (DL) offers a paradigm shift.DL algorithms, underpinned by adaptive learning mechanisms, can discern both linear and non-linear data intricacies, and open avenues to analyse data that is not possible or practical by conventional techniques 5 , particularly in complex domains such as image, temporal sequence analysis, molecular biology, and astronomy 6 .DL models, such as Convolutional Neural Networks (CNNs) 7 , Recurrent Neural Networks (RNNs) 8 , Generative Networks 9 and Transformers 10 , have demonstrated exceptional performance in various domains, such as image and speech recognition, natural language processing, and game playing 6 .CNNs and LSTMs were found to be great tools to predict behaviour of so called `chaotic` systems 11 .Modern DL systems often surpass human-level performance, and challenge humans even in creative endeavours.
CNNs utilise a unique architecture that comprises several layers, including convolutional layers, pooling layers, and fully connected layers, to process and transform the input data hierarchically 5 .CNNs have no knowledge of sequence, and therefore are generally not used in analysing time-series or similar data, which is traditionally attempted with Recurrent Neural Networks (RNNs) 12 and Long Short-Term Memory networks (LSTMs) 8 due to their ability to capture temporal patterns.Where CNNs have been employed for sequence or time-series analysis, 1-dimensional (1D) CNNs have been selected because of their vector based 1D input structure 13 .However, attempts to analyse such data in 1D CNNs do not always give superior results 14 .In addition, GPU (Graphical Processing Units) systems are not always optimised for processing 1D CNNs, therefore even though 1D CNNs have fewer parameters than 2-dimensional (2D) CNNs, 2D CNNs can outperform 1D CNNs 15 .
Transformers, introduced by Vaswani et al. 10 , have recently come to prominence, particularly for tasks where data are in the form of time series or sequences, in domains ranging from language modelling to stock market prediction 16 .Transformers leverage self-attention, a key component that allows a model to weigh and focus on various parts of an input sequence when producing an output, enabling the capture of long-range dependencies in data.Unlike CNNs, which use local receptive fields, self-attention weighs the significance of various parts of the input data 17 .
Following success with sequence-based tasks, Transformers are being extended to image processing.Vision-Transformers in object detection 18 , Detection Transformers 19 and lately Real-time Detection Transformers all claim superiority over CNNs 20 .However, their inference operations demand far more resources than CNNs and trail CNNs in flexibility.They also suffer similar augmentation problems as CNNs.More recently, Retentive-Networks have been offered as an alternative to Transformers 21 and may soon challenge the Transformer architecture.
CNNs can recognise dispersed patterns
Even though CNNs are widely used, there are some misconceptions, notably that CNNs are largely limited to image data, and require established spatial relationships between pixels in images, both of which are open to challenge.The latter is of particular importance when considering the potential of CNNs to analyse complex non-image datasets, whose data structures are arbitrary.
Moreover, while CNNs are universal function approximators 22 , they may not always generalise 23 , especially if they are trained on data that is insufficient to cover the solution space 24 .It is also known that they can spontaneously generalise even when supplied with a small number of samples during training after overfitting, called 'grokking' 25,26 .CNNs can generalise from scattered data if given enough samples, or if they grok, and this can be determined by observing changes to training versus testing accuracy and loss.
Non-image processing with CNNs
While CNNs have achieved remarkable success in computer vision applications, such as image classification and object detection 7,27 , they have also been employed in other domains to a lesser degree with impressive results, including: (1) natural language processing, text classification, sentiment analysis and named entity recognition, by treating text data as a one-dimensional image with characters represented as pixels 16,28 ; (2) audio processing, such as speech recognition, speaker identification and audio event detection, by applying convolutions over time frequency representations of audio signals 29 ; (3) time series analysis, such as financial market prediction, human activity recognition and medical signal analysis, using one-dimensional convolutions to capture local temporal patterns and learn features from time series data 30 ; and (4) biopolymer (e.g., DNA) sequencing, using 2D CNNs to accurately classify molecular barcodes in raw signals from Oxford Nanopore sequencers using a transformation to turn a 1D signal into 2D images-improving barcode identification recovery from 38 to over 85% 31 .Indeed, CNNs are not perfect tools for image processing as they do not develop semantic understanding of images even though they can be trained to do semantic segmentation 32 .They cannot easily recognise negative images when trained with positive images 33 .CNNs are also sensitive to the orientation and scale of objects and must rely on augmentation of image datasets, often involving hundreds of variations of the same image 34 .There are no such changes in the perspective and orientation of data converted into flat 2D images.
In the realm of complex domains that generate huge amounts of data, augmentation is usually not required for non-image datasets, as the datasets will be rich enough.Moreover, introducing arbitrary augmentation does not always improve accuracy; indeed, introducing hand-tailored augmentation may hinder analysis 35 .If augmentation is required, it can be introduced in a data-oriented form, but even when using automated augmentation such as AutoAugment 35 or FasterAutoAugment 36 , many of the augmentations (such as shearing, translation, rotation, inversion, etc.) should not be used, and the result should be tested carefully, as augmentation may introduce artefacts.
A frequent problem with handling non-image datasets with many variables is noise.Many algorithms have been developed for noise elimination, most of which are domain specific.CNNs can be trained to use the whole input space with minimal filtering and no dimension reduction, and can find useful information in what might be ascribed as 'noise' 4,37 .Indeed, a key reason to retain 'noise' is to allow discovery of small perturbations that cannot be detected by other methods 11 .
Conversion of non-image data to artificial images for CNN processing
Transforming sequence data to images without resorting to dimension reduction or filtering offers a potent toolset for discerning complex patterns in time series and sequence data, which potentiates the two major advantages of CNNs compared to RNNs, LSTMs and Transformers.First, CNNs do not depend on past data to recognise current patterns, which increases sensitivity to detect patterns that appear in the beginning of time-series or sequence data.Second, 2D CNNs are better optimised for GPUs and highly parallelizable, and are consequently faster than other current architectures, which accelerates training and inference, while reducing resource and energy consumption during in all phases including image transformation, training, and inference significantly.
Image data such as MNIST represented in a matrix can be classified by basic deep networks such as Multilevel Perceptrons (MLP) by turning their matrix representation to vectors (Fig. 1a).Using this approach analysis of images becomes increasingly complex as the image size grows, increasing the input parameters of MLP and the computational cost exponentially.On the other hand, 2D CNNs can handle the original matrix much faster than MLP with equal or better accuracy and scale to much larger images.
Just like how a simple neural network analyses a 2D image by turning it into a vector, the reciprocal is also true-data in a vector can be converted to a 2D matrix (Fig. 1b).Vectors converted to such matrices form arbitrary patterns that are incomprehensible to human eye.A similar technique for such mapping has also been proposed by Kovelarchuk et al. using another algorithm called CPC-R 38 .
Attribution
An important aspect of any analysis is to be able to identify those variables that are most important and the degree to which they contribute to a given classification.Identifying these variables is particularly challenging in CNNs due to their complex hierarchical architecture, and many non-linear transformations 39 .To address this problem many 'attribution methods' have been developed to try to quantify the contribution of each variable (e.g., pixels in images) to the final output for deep neural networks and CNNs 40 .
Saliency maps serve as an intuitive attribution and visualisation tool for CNNs, spotlighting regions in input data that significantly influence the model's predictions 27 .By offering a heatmap representation, these maps illuminate key features that the model deems crucial, thus aiding in demystifying the model's decision-making process.For instance, when analysing an image of a cat, the saliency map would emphasise the cat's distinct features over the background.While their simplicity facilitates understanding even for those less acquainted with deep learning, saliency maps do face challenges, particularly their sensitivity to noise and occasional misalignment with human intuition [41][42][43] .Nonetheless, they remain a pivotal tool in enhancing model transparency and bridging the interpretability gap between ML models and human comprehension.
Several methods have been proposed for attribution, including Guided Backpropagation 44 , Layer-wise Relevance Propagation 45 , Gradient-weighted Class Activation Mapping 46 , Integrated Gradients 47 , DeepLIFT 48 , and SHAP (SHapley Additive exPlanations) 49 .Many of these methods were developed because it is challenging to identify important input features when there are different images with the same label (e.g., 'bird' with many species) presented at different scales, colours, and perspectives.In contrast, most non-image data does not have www.nature.com/scientificreports/such variations, as each pixel corresponds to the same feature.For this reason, choosing attributions with minimal processing is sufficient to identify the salient input variables that have the maximal impact on classification.
DeepMapper
Here we introduce a new analytical pipeline, DeepMapper, which applies a non-indexed or indexed mapping to the data representing each data point with one pixel, enabling the classification or clustering of data using 2D CNNs.This simple direct mapping has been tried by others but has not been tested with datasets with sufficiently large amounts of data in various conditions.We use raw data with minimal filtering and no dimension reduction to preserve small perturbations in data that are normally removed, in order to assess their impact.
The pipeline includes conversion of data, separation to training and validation, assessment of training quality, attribution, and accumulation of results in a pipeline.The pipeline is run multiple times until a consensus is reached.The significant variables can then be identified using attribution and exported appropriately.
The DeepMapper architecture is shown in Fig. 2. The complete algorithm of DeepMapper is detailed in the "Methods" section and the Python source code is supplied at GitHub 50 .
Methods
DeepMapper is developed to implement an approach to process high-dimensional data without resorting to excessive filtering and dimension reduction techniques that eliminate smaller perturbations in data to be able to identify those differences that would otherwise be filtered out.The following algorithm is used to achieve this result: 1. Read and setup the running parameters.2. Read the data into a tabulated form in the form of observations, features, and outcome (in the form of labels, or if self-supervised, the input itself).
If the input data includes categorical features, these features should be converted to numbers and normalised before feeding to DeepMapper.The data are normalised using log normalisation, then folded to a matrix.Folding is performed either directly with the natural order of the data or by using the index that is generated or supplied during the data import.After folding, the data are kept in temporary storage and separated to 'train' and 'test' using SciPy train test split.
Training is done using either using CNNs that are supplied by the PyTorch libraries, or a custom CNN supplied (ResNet18 is used by default).Intermediary results are run through attribution algorithms supplied by the Captum 51 and saved to run history log.The run is then repeated until convergence is achieved, or until a predetermined number of iterations are performed by shuffling training testing and validation data.Results are summarised in a report with exportable tables and graphics.Attribution is applied to true positives and true negatives, and these are translated back to features to be added to reports.Further details can be directly found in the accompanying code 50 .
Vol The results of DeepMapper analysis can be used in 2 ways: 1. Supervised: DeepMapper produces a list of features that played a prominent role in the differentiation of classes.2. Self-supervised: Highlights the most important features in differentiating observations from each other in a non-linear fashion.The output can be used as an alternative feature selection tool for dimension reduction.
In both modes, any hidden layer can be examined as latent space.A special bottleneck layer can be introduced to reduce dimensions for clustering purposes.
Results
We present a simple example to demonstrate that CNNs can readily interpret data with a well dispersed pattern of pixels, using the MNIST dataset, which is widely used for hand-written image recognition and which humans as well as CNNs can easily recognise and classify based on the obvious spatial relationships between pixels (Fig. 3).This dataset is a more complicated problem than datasets such as the Gisette dataset 52 that was developed to distinguish between 4 and 9.It includes all digits and uses a full randomisation of pixels, and can be regenerated with the script supplied 50 and changing the seed will generate different patterns.
We randomly shuffled the data in Fig. 3 using the same seed 50 to obtain 60,000 training images such as those shown on the right side of each digit, and validated the results with a separate batch of 20,000 images (Fig. 3).Although the resulting images are no longer recognizable by eye, a CNN has no difficulty distinguishing and classifying each pattern with ~ 2% testing error compared to the reference data (Fig. 4).This result demonstrates that CNNs can accurately recognise global patterns in images without reliance on local relationships between neighbouring pixels.It also confirms the finding that shuffling images only marginally increases training loss 23 and extends it to testing loss (Fig. 4).
Testing DeepMapper
Finding slight changes in very few variables in otherwise seemingly random datasets with large numbers of variables is like finding a needle in a haystack.Such differences in data are almost impossible to detect using traditional analysis tools because small variations are usually filtered out before analysis.
We devised a simple test case to determine if DeepMapper can detect one or more variables with small but distinct variations in otherwise randomly generated data.We generated a dataset with 10,000 data items with 18,225 numeric variables as an example of a high-dimensional dataset using PyTorch's uniform random algorithms 53 .The algorithm sets 18,223 of these variables to random numbers in the range of 0-1, and two of the variables into two distinct groups as seen in Table 1.
We call this type of dataset 'Needle in a haystack' (NIHS) dataset, where very small amounts of data with small variance is hidden among a set of random variables that is order(s) of magnitude greater than the meaningful components.We provide a script that can generate this and similar datasets among the source supplied 50 .
DeepMapper was able to accurately classify the two datasets (Fig. 5).Furthermore, using attribution Deep-Mapper was also able to determine the two datapoints that have different variances in the two classes.Note that DeepMapper may not always find all the changes in the first attempt as neural network initialisation of weights is a stochastic process.However, DeepMapper overcomes this matter via multiple iterations to establish acceptable training and testing accuracies as described in the Methods.
Comparison of DeepMapper with DeepInsight
DeepInsight 54 is the most general approach published to date for converting non-image data into image-like structures, with the claim that these processed structures allow CNNs to capture complex patterns and features in the data.DeepInsight offers an algorithm to create images that have similar features collated into a "well organised image form", or by applying one of several dimensionality reduction algorithms (e.g., t-SNE, PCA or KPCA) 54 .However, these algorithms add computational complexity, potentially eliminate valuable information, limit the abilities of CNNs to find small perturbations, and make it more difficult to use attribution to determine Vol:.(1234567890)
To identify important input variables, DeepInsight authors later developed DeepFeature 55 using an elaborate mechanism to associate image areas identified by attribution methods to the input variables.DeepMapper uses a simpler approach as each pixel corresponds to only one variable and can use any of the attribution methods to link results to its input space.While both DeepMapper and DeepInsight follow the general idea that non-image data can be processed with 2D CNNs, DeepMapper uses a much simpler and faster algorithm, while DeepInsight chooses a sophisticated set of algorithms to convert non-image data to images, dramatically increasing computational cost.The DeepInsight conversion process is not designed to utilise GPUs so cannot be accelerated by better hardware, and the obtained images may be larger than the number of data points, also impacting performance.
One of the biggest differences between DeepFeature and DeepMapper is that DeepFeature in many cases selects multiple features during attribution because DeepInsight pixels represent multiple values, whereas each DeepMapper pixel represents one input feature, therefore it can determine differentiating features with pinpoint accuracy at a resolution of 1 pixel per feature.
The DeepInsight manuscript offers various examples of data to demonstrate its abilities.However, many of the examples use low dimensions (20-4000 features) while today's complex datasets may regularly require tens of thousands to millions of features such as in genome analysis in biology and radio-telescope analysis in astronomy.ringnorm example, is a 20 dimensional, 2 class classification with 7400 samples 54 .Another example, Madelon, introduced an artificially generated dataset 2600 samples and 500 dimensions, where only 5 principal and 20 derived variables containing information.Instead, we used a much more complicated example than Madelon, an NIHS dataset 50 that we used to test DeepMapper in the first place.We attempted to run DeepInsight with NIHS data, but we could not get it to train properly and for this reason we cannot supply a comparison.
The most complex problem published by DeepInsight was the analysis of a public RNA sequencing gene expression dataset from TCGA (https:// cance rgeno me.nih.gov/) containing 6216 samples of 60,483 genes or dimensions, of which DeepInsight used 19,319.We selected this example as the second demonstration of application of DeepMapper to high dimensional data, as well as a benchmark for comparison with DeepInsight.The charts demonstrate although the training continued for 50 epochs, about 15 epochs for shuffled images (b) would be enough, as further training starts causing overfitting.The decrease of accuracy between normal and shuffled images is about 3%, and this difference cannot be improved by using more sophisticated CNNs with more layers, meaning shuffling images cause a measurable loss of information, yet still hold patterns recognisable by CNNs.We generated the data using the R script offered by DeepInsight 54 and ran DeepMapper as well as DeepInsight using the generated dataset to compare accuracy and speed.In this test DeepMapper exhibited much improved processing speed with near identical accuracy (Table 2, Fig. 6).
Discussion
CNNs are fundamentally sophisticated pattern matchers that can establish intricate mappings between input features and output representations 6 .They excel at transforming various inputs into outputs, including identifying classes or bounding boxes, through a series of operations involving convolution, pooling, and activation functions 7,56 .
Even though CNNs are in the centre of many of today's revolutionary AI systems from self-driving cars to generative AI systems such as Dall-E-2, MidJourney and Stable Diffusion, they are still not well understood nor efficiently utilised, and their usage beyond image analysis has been limited.
While CNNs used in image analysis are constrained historically and practically to a 224 × 224 matrix or a similar fixed size input, this limitation arises for pre-trained models.When CNNs have not been pre-trained, one can select a much wider variety of sizes as input shape depending on the CNN architecture.Some CNNs are more flexible in their input size that implemented with adaptive pooling layers such as ResNet18 using adaptive pooling 57 .This provides flexibility to choose optimal sizes for the task in hand for non-image applications, as most non-image applications will not use pre-trained CNNs.Here we have demonstrated uses of CNNs that are outside the norm.There is a need for analysis of complex data with many thousands of features that are not primarily images.There is also a lack of tools that offer minimal conversion of non-image data to image-like formats that then can easily be processed with CNNs in classification and clustering tasks.As a lot of this data is coming from complex systems that have a lot of features, DeepMapper offers a way of investigating such data in ways that may not be possible with traditional approaches.
Although DeepMapper currently uses CNN as its AI component, alternative analytic strategies can easily be substituted in lieu of CNN with minimal changes, such as Vision Transformers 18 or RetNets 21 , which have great potential for this application.While Transformers and RetNets have input size limitations for inference in terms of number of tokens.Vision Transformers can handle much larger inputs by dividing images to segments that incorporate multiple pixels 18 .This type of approach would be applicable to both Transformers and RetNets, and future architectures.DeepMapping can leverage these newer architectures, and others, in the future 57 .2.
(CNNs) have been central to the Deep Learning revolution and played a key role in initiating the new age of Artificial Intelligence.However, in recent years newer architectures such as Transformers have dominated both research and practical applications.While
Figure 1 .
Figure 1.Conversion of images to vectors and vice versa.(a) Basic operation of transformation of an image to a vector, forming a sequence representation of the numeric values of pixels.(b) Transforming a vector to a matrix, forming an image by encoding numerical values as pixels.During this operation if the vector size cannot be mapped to m X n because vector size is smaller than the nearest m X n, then it is padded with zeroes to the nearest m X n.
3 .
Identify features and labels.4. Do only basic filtering that eliminates observations or features if all of them are 0 or empty.5. Normalise features.6. Transform tabulated data to 2-dimensional matrices as illustrated in Fig.1aby applying a vector to matrix transformation.
Figure 2 .
Figure 2. DeepMapper architecture.DeepMapper uses sequence or multi-variate data as input.The first stepof DeepMapper is to merge and if required index input files to prepare them into matrix format.The data are normalised using log normalisation, then folded to a matrix.Folding is performed either directly with the natural order of the data or by using the index that is generated or supplied during the data import.After folding, the data are kept in temporary storage and separated to 'train' and 'test' using SciPy train test split.Training is done using either using CNNs that are supplied by the PyTorch libraries, or a custom CNN supplied (ResNet18 is used by default).Intermediary results are run through attribution algorithms supplied by the Captum51 and saved to run history log.The run is then repeated until convergence is achieved, or until a predetermined number of iterations are performed by shuffling training testing and validation data.Results are summarised in a report with exportable tables and graphics.Attribution is applied to true positives and true negatives, and these are translated back to features to be added to reports.Further details can be directly found in the accompanying code50 .
Figure 3 .
Figure 3.A sample from MNIST dataset (left side of each image) and its shuffled counterpart (right side).
Figure 4 .
Figure 4. Results of training MNIST dataset (a) and the shuffled dataset (b) with PyTorch model ResNet18 50 .The charts demonstrate although the training continued for 50 epochs, about 15 epochs for shuffled images (b) would be enough, as further training starts causing overfitting.The decrease of accuracy between normal and shuffled images is about 3%, and this difference cannot be improved by using more sophisticated CNNs with more layers, meaning shuffling images cause a measurable loss of information, yet still hold patterns recognisable by CNNs.
Figure 5 .
Figure 5.In this demonstration of analysis of high dimensional data with very small perturbations, DeepMapper can find these small variations in a few (in this example two) variables out of very large number of random variables (here 18,225).(a) DeepMapper representations of each record.(b) The result of the test run of the classification with unseen data (3750 elements).(c) The first and second variables in the graph are measurably higher than the other variables.
Figure 6 .
Figure 6.Analysis of TCGA data by DeepInsight vs DeepMapper: The image on the top was generated by DeepInsight using its default values and a t-SNE transformer supplied by DeepInsight.The image at the bottom was generated by DeepMapper.Image conversion and training speeds and the analysis results can be found in Table2.
51If the analysis is supervised, then transform class labels to output matrices.8.Begin iteration: a. Separate the data into training and validation groups.b.Train on the dataset for required number of epochs, until reaching satisfactory testing accuracy and loss, or maximum a pre-determined number of iterations.c.If satisfactory testing results are obtained, then: i. Perform attributions by associating each result to contributing input pixels using Captum, a Python library for attributions51.ii.Accumulate attribution results by collecting the attribution results for each class.
Table 1 .
Generated variables and their random ranges. | 6,552 | 2024-05-01T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Pharmacological study of anti-allergic activity of Syzygium cumini ( L . )
Myrtaceae is a plant family widely used in folk medicine and Syzygium and Eugenia are among the most important genera. We investigated the anti-allergic properties of an aqueous leaf extract of Syzygium cumini (L.) Skeels (SC). HPLC analysis revealed that hydrolyzable tannins and flavonoids are the major components of the extract. Oral administration of SC (25-100 mg/kg) in Swiss mice (20-25 g; N = 7/group) inhibited paw edema induced by compound 48/80 (50% inhibition, 100 mg/kg; P ≤ 0.05) and, to a lesser extent, the allergic paw edema (23% inhibition, 100 mg/kg; P ≤ 0.05). SC treatment also inhibited the edema induced by histamine (58% inhibition; P ≤ 0.05) and 5-HT (52% inhibition; P ≤ 0.05) but had no effect on plateletaggregating factor-induced paw edema. SC prevented mast cell degranulation and the consequent histamine release in Wistar rat (180200 g; N = 7/group) peritoneal mast cells (50% inhibition, 1 μg/mL; P ≤ 0.05) induced by compound 48/80. Pre-treatment of BALB/c mice (18-20 g; N = 7/group) with 100 mg/kg of the extract significantly inhibited eosinophil accumulation in allergic pleurisy (from 7.662 ± 1.524 to 1.89 ± 0.336 x 106/cavity; P ≤ 0.001). This effect was related to the inhibition of IL-5 (from 70.9 ± 25.2 to 12.05 ± 7.165 pg/mL) and CCL11/eotaxin levels (from 60.4 ± 8.54 to 32.8 ± 8.4 ng/mL) in pleural lavage fluid, using ELISA. These findings demonstrate an anti-allergic effect of SC, and indicate that its anti-edematogenic effect is due to the inhibition of mast cell degranulation and of histamine and serotonin effects, whereas the inhibition of eosinophil accumulation in the allergic pleurisy model is probably due to an impairment of CCL11/eotaxin and IL-5 production. Correspondence
Introduction
Myrtaceae is a plant family widely used in folk medicine in different countries and Eugenia and Syzygium are among its most important genera.Species of this family are often used for several medicinal purposes, including the treatment of diarrhea (1) and pain.Experimental data also suggest the action of these species on inflammatory pro-cesses, respiratory diseases (2), and allergic disorders (3).The seeds of Syzygium cumini (L.) Skeels (SC; Myrtacea, syn., Eugenia jambolana Lamk) have been reported to be useful as astringents in diarrhea as well as dysentery (4).Other parts of the plant have been reported to possess anti-diabetic (5), bactericidal (6) and anti-mutagenic (7) properties.The ethanolic bark extract has been reported to have anti-inflammatory activity in carrageenan and formaldehyde paw edema (2).The same extract was also shown to inhibit histamine-, serotonin (5-HT)-and prostaglandin 2-induced paw edema (8).
The allergic process has an important inflammatory component in which mast cell activation and degranulation are the first phenomena observed.During this process, mast cells release several inflammatory mediators including histamine, 5-HT, plateletaggregating factor (PAF), leukotrienes, and a variety of cytokines which can elicit many events associated with allergic inflammation, such as edema formation and cellular infiltration (9).Eosinophil accumulation is the main feature of allergic inflammation, these being some of the most abundant leukocytes present at the site.The triggering and regulation of eosinophil accumulation in allergic inflammation depend on the release of cytokines and chemokines such as interleukin-4 (IL-4), IL-5 and CCL11/eotaxin in response to an antigen challenge (10,11).Once they reach the allergic site, eosinophils degranulate and release several mediators, including leukotrienes, major basic protein, PAF, cationic protein, and eosinophil-derived neurotoxin that contribute to extensive tissue damage (12).The modulation of eosinophil accumulation is one of the main targets for the discovery of anti-allergic compounds because of its potential tissue damaging effects.
In the present study, we investigated the ability of an aqueous leaf extract of SC to inhibit mouse allergic edema formation and eosinophil accumulation in mouse allergic pleurisy and the mechanisms involved in such phenomena
Animals
Male Swiss Webster (20-25 g) and BALB/ c mice (18-20 g) and Wistar rats (180-200 g) from our own colony (CECAL-FIOCRUZ) were housed in a room with controlled temperature and lighting, with free access to lab chow and tap water.All experiments were conducted in accordance with the ethical guidelines of the International Association for the Study of Pain (13) and the institutional guidelines for animal use (CEUA 00050/00).
Plant material and extract preparation
Entire branches of SC were collected in Rio de Janeiro in January 1999.The plant material was identified by Dr. Graziela M. Barrozo (in memoriam), Botanic Garden of Rio de Janeiro, and a voucher was deposited in the Herbarium of the Botanic Garden of Rio de Janeiro under number HB-83016.The aqueous extract of fresh leaves of SC was obtained by decoction of leaves in distilled water (100 g/L) for 3 to 5 min.The extract was filtered, lyophilized, stored at room temperature, and dissolved in distilled water immediately before use.
HPLC characterization of the extract
The separation of the components of the mixture was dependent on the pH of the mobile phase, with pH near 4 being satisfactory.Thus, 0.1% phosphoric acid was added to the mobile phase to adjust its pH to 4.1.The mobile phase eluted all the phenolic compounds within 30 min at 0.75 mL/min.Analysis was performed on a Supelcosil LC 18 (250 mm x 4.6 mm, I.D., 5 µm particle size) column.
A 40.0-mg sample of the extract was accurately weighed and transferred to a 3.00-mL microcentrifuge tube, suspended in 2.00 mL methanol and cooled in an ice bath.The suspension was sonicated (Odontrobrás, Ribeirão Preto, SP, Brazil) for 20 min and then centrifuged for 10 min at 3000 rpm, at 4ºC (Beckman Coulter, Fullerton, CA, USA).The supernatant was decanted and stored at -20ºC.Before injection into the column, the samples were filtered in an UltraFree-MC 0.22-µm filter unit (Millipore, São Paulo, SP, Brazil) with a Durapore membrane (Millipore) with 0.5-mL capacity.
HPLC data were collected and processed with a Shimadzu Class-VP 6.12 model apparatus (Kyoto, Japan) equipped with a diodearray detector and SP3 chromatographic software.The gradient of mobile phase elution was programmed for solvent A (acetonitrile: water, 5:95, v/v) and solvent B (acetonitrile: water, 90:10, v/v).The column was previously equilibrated with the mobile phase (100% solvent A) for 30 min at a flow rate of 0.75 mL/min.The mobile phase was adjusted to pH 4.0 with phosphoric acid.The flow rate was 0.75 mL/min and the injection volume 20 µL.The column temperature was maintained at 25ºC during analysis.The gradient program started with 0% of solvent B and increased linearly to 100% of solvent B in 30 min or 3.33%/min.Injections were performed in triplicate.After 44 min, the elution program was returned to the initial condition and held there for 10 min in order to recondition the column.
Treatments
Mice fasted overnight, received the antihistamine agent, the H 1 -receptor antagonist promethazine (10 mg/kg) or aqueous extract (25-100 mg/kg) orally (po) in a final volume of 200 µL, 1 h before stimulation.The control groups were similarly treated with vehicle alone.In some experiments of allergic pleurisy, dexamethasone was given intraperitoneally (2 mg/kg) 24 and 1 h before stimulation (N = 7/group).
Swiss mice sensitization and allergic paw edema were induced as described (15).Briefly, animals were sensitized with a subcutaneous injection of 100 µL of a mixture of OVA (50 µg), and aluminum hydroxide (5 mg).Fourteen days later, mice were challenged by an ipl injection of OVA (3 µg/paw) and the induction of paw edema was evaluated 30 min after stimulation (N = 7/group).
Allergic pleurisy
Active sensitization of BALB/c mice was achieved with a subcutaneous injection of Freund's complete adjuvant emulsion (100 µL) containing OVA (100 µg).Fourteen days later, mice were challenged with an intrathoracic injection of OVA (12.5 µg/cavity, N = 7/group) as described elsewhere (16,17).Briefly, an adapted needle was inserted into the right side of the thoracic cavity of OVAsensitized animals to permit the intrapleural administration of OVA diluted in sterile pyrogen-free saline (50 µL).Sensitized mice challenged with saline vehicle alone were used as negative controls.
At 24 h after the stimulus, mice were killed with excess carbon dioxide and their thoracic cavities were rinsed with 1 mL PBS containing 10 mM EDTA, pH 7.4.Total leukocyte counts were made with an automatic Coulter counter (Beckman Coulter, Fullerton, CA, USA).Differential cell counts were made using stained cytospin (Shandon, Pittsburgh, PA, USA) by the May-Gruenwald-Giemsa method under light microscopy (100X).Counts are reported as numbers of cells (x 10 6 ) per cavity.
Enzyme-linked immunosorbent assay
Levels of eotaxin and IL-5 in the cellfree pleural fluid were evaluated by sandwich enzyme-linked immunosorbent assay using matched antibody pairs from Pharmingen (San Diego, CA, USA), according to manufacturer instructions.Results are reported as picograms per cavity of two experiments in triplicate and values were obtained by comparison with a standard curve (0.015-15 ng/mL for eotaxin and IL-5).
Mast cell purification and histamine measurement
Rat peritoneal mast cells were isolated as previously described (18).Briefly, male Wistar rats (N = 7/group) were killed with excess carbon dioxide and the peritoneal cavity was rinsed with 20 mL of heparinized (10 IU/mL) calcium-and magnesium-free Hank's solution (HBSS -).The fluid was collected, centrifuged at 150 g for 10 min at 4ºC, the pelleted cells were resuspended in HSSB -containing 0.1% bovine serum albumin and submitted to a continuous isotonic Percoll gradient (72%) for mast cell isolation.Purified mast cells were resuspended in HSSB containing Ca 2+ and Mg 2+ .Purity (95%) and viability (98%) were evaluated by Toluidine blue and Trypan blue exclusion staining, respectively.The cells were added to a 24-well plate (10 5 cells/well) and preincubated for 1 h with 1 µg/mL of dried SC extract dissolved in saline or with disodium cromoglycate at 10.2 µg/mL in a 5% CO 2 atmosphere, at 37°C.After this period, cells were incubated for 30 min with 5 µg/ mL C48/80.The reaction was stopped in ice.Histamine was quantified in the supernatant by a fluorimetric assay as previously described (19).The fluorescence intensity was measured at 450 nm (excitation at 360 nm) with a Spectra Max Gemini EM spectrofluorometer (Molecular Devices, Sunnyvale, CA, USA).Percent inhibition of histamine release was calculated as follows: % inhibition = 100 -{(histamine release with SC x 100)/ histamine release without SC}.
Statistical analysis
Data are reported as means ± SEM and were analyzed statistically by one-way ANOVA, and differences between groups were assessed using the Student-Newman-Keuls post-test.A P value <0.05 was considered significant.
Chemical characterization of the extract
The yield of the crude aqueous extract was 6.5% based on fresh leaves (g/g leaves).HPLC fingerprinting of the aqueous SC leaf extract showed an elution diagram consistent with the presence of tannins and flavonoids (Figure 1A).The peaks were grouped into three regions based on the UV absorption profile, and these regions showed the typical patterns of UV absorption, supporting the presence of ellagitannin (Figure 1B), gallotannin (Figure 1C) and flavonoids (Figure 1D) in SC.The presence of flavonoids was also observed by TLC using NP/PEG (data not shown).
Effect of Syzygium cumini extract on paw edema triggered by compound 48/80 or ovalbumin
To assess the effect of SC extract (crude extract obtained from fresh leaves by decoction in water) on allergic reactions, we first used the model of anaphylaxis edema caused by the mast-cell degranulator C48/80 or by OVA in sensitized animals.The ipl administration of C48/80 into the mouse hind paw triggered a significant edema 30 min after the injection, as shown in Figure 2A.Oral pre-treatment with SC extract inhibited edema formation at doses of 25, 50, and 100 mg/kg (maximal inhibition of 50% at 25 mg/ kg) to almost the same extent as promethazine, an anti-histaminic compound (65% inhibition at the dose of 10 mg/kg; Figure 2A, inset).It is noteworthy that oral treatment with 200 or 400 mg/kg SC extract inhibited edema formation at the same intensity (64 and 58%).Figure 2B shows that OVA (50 µg) triggered paw swelling within 30 min in sensitized mice.Oral pre-treatment with SC extract (25-100 mg/kg) led to a slight (20%) inhibition of allergic paw edema with no differences observed between the doses tested, even when we used the doses of 200 or 400 mg/kg of oral SC extract (25 and 32.8% inhibition, respectively; data not shown).Conversely, pre-treatment with promethazine (10 mg/kg; po) was able to of the extract against the edema induced by each of these mediators.As observed in Figure 3A, ipl injection of histamine (100 µg/paw) into the hind paw of naive mice induced a significant paw edema 30 min after the injection which was inhibited by oral treatment with SC extract, with a maximal inhibition of 58% achieved at 50 mg/kg (P ≤ 0.05).Figure 3B shows that the ipl injection of 5-HT (100 µg/paw) into the hind paw induced a significant paw edema 30 min after the stimulus, which was inhibited by treatment with SC at the dose of 100 mg/ kg (51% inhibition; P ≤ 0.05).Conversely, the mouse paw edema induced by PAF (1 µg/paw; 30 min) was not affected by oral pre-treatment with SC extract at the doses of 25, 50, or 100 mg/kg (Figure 3C).These results suggest that the anti-edematogenic effect of oral SC extract on allergen-induced paw swelling was due to an anti-histamine and anti-serotonin effect.
Effect of Syzygium cumini extract on histamine release from rat peritoneal mast cells
Mast cell degranulation followed by the release of vasodilating mediators (mainly histamine) is the major component of allergic edema.In this set of experiments we investigated the effect of the SC extract on mast cell degranulation by means of histamine release.As shown in Table 1, stimulation with C48/80 (5 µg/mL) induced the release of 20 ng/mL histamine in rat peritoneal mast cells that was inhibited by treatment with disodium cromoglycate, a classic mast cell membrane stabilizer (31% inhibition).Pre-treatment with SC extract (1 µg/mL, 1 h before C48/80) significantly inhibited the release of histamine from mast cells (49.5%).
Effect of Syzygium cumini extract on allergic pleurisy
Twenty-four hours after the intrathoracic inhibit edema by 50% (Figure 2B, insert).
Effect of Syzygium cumini extract on paw edema triggered by different inflammatory mediators
Histamine, 5-HT and PAF are the major inflammatory mediators involved in allergic edema formation.In an attempt to understand the effect of SC, we analyzed the effect injection of OVA (12.5 µg/cavity), an intense accumulation of total leukocytes (Figure 4A), mononuclear cells (Figure 4B) and eosinophils (Figure 4D) was observed, while remaining numbers of neutrophils were present in the pleural cavity of BALB/c mice (Figure 4C).Dexamethasone pre-treatment (2 mg/kg, intraperitoneally) significantly inhibited the influx of total leukocytes (76%), mononuclear cells (62%) and eosinophils (99% inhibition).The oral administration of SC (100 mg/kg) 1 h before stimulation markedly inhibited the eosinophil accumulation (75% inhibition, P ≤ 0.001) in the pleural cavity.It is important to note that this inhibition was selective, with no effect on the numbers of total leukocytes, mononuclear cells or neutrophils.
Inhibition of CCL11/eotaxin and IL-5 production by Syzygium cumini extract
In order to understand the effects of the SC extract on eosinophil mobilization during an allergic inflammation, we investigated the effect of in vivo oral pre-treatment with SC on CCL11/eotaxin and IL-5 levels in pleural wash fluid.As observed in Figure 5, intrathoracic antigen challenge significantly increased the levels of CCL11/eotaxin and IL-5 in pleural lavage fluid of sensitized animals 24 h after the challenge (N = 7/group; P ≤ 0.05).Oral treatment with the SC extract led to a decrease in the levels of IL-5 (from 70.9 ± 25.2 to 12.05 ± 7.165 pg/mL; two experiments in triplicate; P ≤ 0.05) and CCL11/eotaxin (from 60.4 ± 8.54 to 32.8 ± 8.4 ng/mL; two experiments in triplicate; P ≤ 0.05) in pleural wash fluid.This effect was similar to the inhibition observed after dexamethasone treatment (from 70.9 ± 25.2 to 4.29 ± 2.60 pg/mL IL-5 and from 60.4 ± 8.54 to 13.40 ± 3.91 ng/mL CCL11/eotaxin).
Discussion
The results of the current study demon- strate that the leaf extract of SC displays a marked anti-allergic property.Treatment with the SC extract inhibited the paw edema induced by C48/80, a potent mast cell degranulator, to an extent comparable to the effect of promethazine, a classical anti-histaminic used to relieve symptoms of allergic reactions.Treatment with the SC extract also inhibited the paw edema triggered by antigen challenge, although this effect on allergic paw edema was not equivalent to the effect observed on C48/80-induced edema.
Moreover, these results indicate a different mechanism of inhibition of C48/80 and antigen-induced paw edema by the SC extract, suggesting an action on specific targets.Histamine, 5-HT and PAF have been extensively reported to be the major mediators involved in allergic edema formation.Supporting these observations, treatment with SC displayed an inhibitory effect on edema induced by histamine and 5-HT, but failed to inhibit PAF-induced paw edema.The participation of PAF in edema forma- tion and eosinophil mobilization in allergic inflammation has been reported (20,21) and the lack of effect of the SC extract on PAFinduced edema and the participation of the other mediators in the triggering of allergic paw edema may account for discrete effect of the SC extract on allergic paw edema.These results suggest that the SC extract can be much more effective in inhibiting reactions whose mechanism depends on the release of histamine and of 5-HT.
Kim and colleagues (3) showed that treatment with an aqueous extract of S. aromaticum (L.) Merr.et Perry (Myrtaceae) flower buds had an inhibitory effect on C48/80induced systemic anaphylaxis and IgE-mediated passive cutaneous anaphylaxis reaction.These results were due to an inhibitory action of this extract on histamine release from mast cells.In agreement with this report, we observed that the SC extract also had a direct effect on mast cell degranulation, inhibiting in vitro the histamine release induced by C48/80.This result suggests that the anti-edematogenic effect of SC on C48/ 80 or antigen-induced paw edema may be due to an action on the mast cell degranulation process.However, the effect of SC leaf extract on the edema triggered by exogenous histamine and serotonin also suggests that the SC extract has a direct effect on the inhibition of these mediators.
The presence of polyphenol, gallic acid, ellagic acid derivatives (22,23), tannins (24,25), and glycosylated flavonoids (23,26,27) has been reported in Syzygium species.We extended the previous observation that SC leaf extracts contain flavonoids (23,27).Ramirez and Roa Jr. (28) showed a correlation between the anti-inflammatory activity and the content of total phenolic compounds in the extracts of SC.Our results on the antiedematogenic effect of the SC extract also support the earlier observation of Slowing and colleagues (29) that the presence of flavonoid glycosides may be associated with the anti-inflammatory activity of a methanol extract of E. jambos leaves.The presence of phenolic compounds, especially flavonoids, in the aqueous extract of SC leaves and its anti-edematogenic activity justify the use of aqueous extracts and infusions of the plant in folk medicine.
Treatment with the SC extract inhibited eosinophil accumulation in allergic pleurisy, without a significant change in mononuclear cell or neutrophil recruitment.This treatment also inhibited the rise of IL-5 and CCL11/eotaxin levels in pleural lavage fluid in allergic pleurisy.
Eosinophils play an important role in the pathophysiology of allergic diseases (30,31).The accumulation and survival of these cells are regulated by IL-5 secreted by activated T lymphocytes, as well as by chemokines released by the epithelium.Among the C-C chemokines, CCL11/eotaxin, an eosinophilspecific chemoattractant, is one of the most important mediators of allergic inflammation, with a potent and selective effect in mobilizing eosinophils from bone marrow to the blood (11,(32)(33)(34).The effect of treatment with the SC extract on IL-5 and CCL11/ eotaxin levels may explain the specific inhibition of eosinophil accumulation induced by SC.
These activities of the SC extract may probably be due to the presence of flavonoids in the extract, since these substances isolated from Myrtaceae species, including SC, are known to exert a potent inhibitory effect on a variety of enzymes related to cell activation and to the production of inflammatory mediators (35,36).Some isolated flavonoids possess anti-inflammatory (37), antiallergic (38) and analgesic (39) activities; however, few flavonoids from SC leaves (23,27) and flowers have been isolated or identified (40).
Our findings indicate an anti-allergic activity of the SC extract observed by the inhibition of edema formation, mast cell degranulation and histamine release as well as the inhibition of eosinophil accumulation and CCL11/eotaxin and IL-5 production.Taken together, the present results suggest the potential of SC as a herbal-based therapy for the treatment of allergic diseases.
Table 1 .
Effect of Syzygium cumini extract and disodium cromoglycate on histamine release from rat peritoneal mast cells challenged in vitro with compound 48/80.
Data regarding histamine release are reported as the mean ± SD of two experiments in triplicate.Rat peritoneal mast cells were incubated with S. cumini extract (SC; 1 µg/ mL) or disodium cromoglycate (DSCG; 10.2 µg/mL) and challenged in vitro with compound 48/80 (C48/80; 5 µg/mL) 1 h later.Histamine present in the supernatant was quantified by fluorimetric assay.F.A. Brito et al. | 4,809 | 2007-01-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
FLAMINGO: calibrating large cosmological hydrodynamical simulations with machine learning
ABSTRACT To fully take advantage of the data provided by large-scale structure surveys, we need to quantify the potential impact of baryonic effects, such as feedback from active galactic nuclei (AGN) and star formation, on cosmological observables. In simulations, feedback processes originate on scales that remain unresolved. Therefore, they need to be sourced via subgrid models that contain free parameters. We use machine learning to calibrate the AGN and stellar feedback models for the FLAMINGO (Fullhydro Large-scale structure simulations with All-sky Mapping for the Interpretation of Next Generation Observations) cosmological hydrodynamical simulations. Using Gaussian process emulators trained on Latin hypercubes of 32 smaller volume simulations, we model how the galaxy stellar mass function (SMF) and cluster gas fractions change as a function of the subgrid parameters. The emulators are then fit to observational data, allowing for the inclusion of potential observational biases. We apply our method to the three different FLAMINGO resolutions, spanning a factor of 64 in particle mass, recovering the observed relations within the respective resolved mass ranges. We also use the emulators, which link changes in subgrid parameters to changes in observables, to find models that skirt or exceed the observationally allowed range for cluster gas fractions and the SMF. Our method enables us to define model variations in terms of the data that they are calibrated to rather than the values of specific subgrid parameters. This approach is useful, because subgrid parameters are typically not directly linked to particular observables, and predictions for a specific observable are influenced by multiple subgrid parameters.
INTRODUCTION
The evolution of the large-scale distribution of matter in the Universe is highly sensitive to the underlying cosmological model.Current probes have given us our concordance cosmological model ΛCDM, which consists of a spatially flat universe, where dark energy and cold dark matter dominate the current energy density (for a review see Frieman et al. 2008).
The concordance model has been independently validated by a large array of probes.These include the cosmic microwave background (CMB) (e.g.Planck Collaboration et al. 2020), galaxy clustering and gravitational lensing (e.g.Abbott et al. 2022;Heymans et al. 2021), baryon acoustic oscillations (BAO) (e.g.Alam et al. 2021), and more (for a review see Turner 2022).While all the probes ★ E-mail<EMAIL_ADDRESS>agree with the ΛCDM model, tensions remain between early universe probes, like the CMB, and late-time probes, like the distance ladder and weak lensing.For the 0 and 8 parameters, the tension is at the level of a few standard deviations (e.g.Heymans et al. 2021;Abbott et al. 2022;Riess et al. 2022).Next generation surveys like Euclid1 and LSST2 will measure the matter power spectrum to per cent level accuracy (Euclid Collaboration et al. 2020).The results from these surveys will provide us with a stringent test of the concordance model, and show us whether these tensions will force us to modify the ΛCDM model.
Most of the modelling work for large-scale structure is done with collisionless -body simulations (e.g.Heitmann et al. 2016a;DeRose et al. 2021;Euclid Collaboration et al. 2019).-body sim-ulations model the evolution of cold dark matter and can accurately predict the structure and clustering of dark matter haloes under the effect of gravity only.The dark part of the matter component is dominant in mass and hence, predictions from these simulations may provide stringent cosmological constraints.However, baryons change the distribution of dark matter through back reaction effects, but, with the exception of gravitational lensing, we are limited to observing the imprint of the distribution of dark matter on the baryonic matter.Most of the baryonic matter is found in the tenuous intergalactic medium (e.g.Nicastro et al. 2018;Macquart et al. 2020), which is very challenging to observe directly.Large-scale structure surveys use galaxies, which are located within dark matter haloes, to map the distribution of matter.
One of the main difficulties for hydrodynamical simulations is the implementation and tuning of relevant astrophysical processes that originate on unresolved scales through subgrid physics.Processes like star formation and black hole growth occur on parsec scales, and are not resolved.The resulting feedback from stars and active galactic nuclei (AGN), do influence the distribution of matter on cosmological scales (Van Daalen et al. 2011, 2020;Debackere et al. 2020;Schneider et al. 2020).Therefore, we need to create simulations that model their effect on the resolved scales.
Subgrid physics models are characterised by a set of free parameters, in the sense that there is both uncertainty in the processes we try to model and uncertainty in how the models are affected by numerical limitations.An example of the latter is the impact of numerical over-cooling on galactic wind models (see Dalla Vecchia & Schaye 2012).The numerical effects combined with the general nonlinearity of galaxy formation makes it difficult to implement subgrid physics based solely on first principles.Instead, we have to calibrate the model by comparing it to a selection of observations, a partial forfeit of their predictive power.As argued by Schaye et al. (2015), this is a necessary sacrifice.By ensuring certain relations are reproduced, the simulation retains predictive power for other relations.Calibrating subgrid physics forces us to find a balance between how many observables one tries to match and how many of the results can be deemed predictions.
In this paper we discuss the calibration strategy used for the low-, intermediate-and high-resolution simulations of the FLAMINGO project (Full-hydro Large-scale structure simulations with All-sky Mapping for the Interpretation of Next Generation Observations; Schaye et al. 2023).The intermediate-resolution FLAMINGO model has the same resolution ( gas = 1.07 × 10 9 M ⊙ ) as used for the BA-HAMAS project (McCarthy et al. 2017(McCarthy et al. , 2018)), but in a volume of (2.8 Gpc) 3 .This volume is over two orders of magnitude larger than BAHAMAS.Additionally, FLAMINGO includes a suite of feedback and cosmology variations in (1 Gpc) 3 volumes.This includes a high ( gas = 1.34×10 8 M ⊙ ) and a low ( gas = 8.56×10 9 M ⊙ ) resolution variation.Our goal is to expand the large-scale structure science of the BAHAMAS project to larger volumes, different resolutions, and more cosmology and astrophysics variations with a new code and an improved subgrid physics model.The FLAMINGO simulation outputs also include on-the-fly full sky lightcones, both as particles and as maps, for a variety of observables.Similarly to BAHAMAS, we will calibrate to the observed present-day galaxy stellar mass function (SMF) and the gas fractions in groups and clusters of galaxies ( gas ).We opt for the SMF to ensure we can reproduce galaxy clustering and lensing statistics if we use the correct cosmology.The gas fraction is used to ensure we have a realistic distribution of gas in and around clusters, which is not only important for cluster cosmology, but also for baryonic effects on the matter power spectrum (Semboloni et al. 2011;Schneider & Teyssier 2015;Debackere et al. 2020;Van Daalen et al. 2020;Aricò et al. 2021;Salcido et al. 2023).While our fiducial models are calibrated to reproduce the data, we also calibrate the subgrid physics to the gas fraction and SMF data that has been shifted relative to the observed values.These feedback variations will enable future FLAMINGO projects to test the importance of astrophysical effects constrained by the uncertainties in the data.
For BAHAMAS, and also for simulations like EAGLE and Il-lustrisTNG, calibration was done by hand by varying the subgrid parameters within some reasonable range until the simulation lined up with the calibration targets.This approach works reasonably well in the context of galaxy formation, but it introduces biases into the parameter selection.For cosmology applications we require a more systematic and controlled approach.We want to be able to sample the parameter space with a Markov Chain Monte Carlo (MCMC) method and to find the posterior probabilities of each of the subgrid parameter values.This approach also allows us to take into account potential systematic effects in the data and/or simulations.
Because N-body simulations are too computationally expensive to be used directly in MCMC-like methods, we make use of machine learning, specifically emulation using Gaussian processes.While it is too expensive to run a new simulation for each MCMC step, we can train an emulator on a carefully sampled selection of input simulations.The emulator then gives us the predicted observable as a continuous function of the input parameters, which can be fed into any likelihood calculation code.Emulator-based methods have been used in combination with semi-analytic models of galaxy formation (Bower et al. 2010;Vernon et al. 2014;Rodrigues et al. 2017;Elliott et al. 2021) and have become particularly popular for cosmology.By training emulators on dark-matter-only simulations, their full nonlinear matter power spectrum can be predicted with per cent level precision (e.g.Heitmann et al. 2009Heitmann et al. , 2016b;;Euclid Collaboration et al. 2019;Angulo et al. 2021;Moran et al. 2022).
We directly emulate our calibration targets: the SMF and the gas fractions in groups and clusters.This allows us to create a continuous simulation-based model that can be compared with observations.With the emulator we can use MCMC to directly fit the subgrid physics parameters to the observational data, while modelling statistical and systematic errors in both the simulations and the data.This procedure not only gives us a well-calibrated model, but also lets us determine the maximum variations allowed by the model.In this way our resulting simulations can provide upper and lower lim-its on the expected baryonic effects.More general machine learning techniques have been used to calibrate hydrodynamical simulations.Jo et al. (2023) calibrate to baryonic observables in the (25 Mpc)3 volumes of the CAMELS project (Villaescusa-Navarro et al. 2021) and Oh et al. (2022) apply a similar methodology to zooms of Milky Way haloes.However, these methods have not been applied to simulations of large cosmological volumes and they have not accounted for possible observational biases.
This paper is structured as follows.In Section 2 we describe the most relevant aspects of our simulation method and galaxy formation models.In Section 3 the reasoning for our calibration targets is explained, and we describe our compilation of data and how we include potential observational and simulation-originated biases in our analysis.In Section 4 we describe how we obtain the training data for the emulators.We also discuss how the emulators are trained and how we estimate the uncertainty in the predictions of the emulators.We describe our likelihoods and our fitting method in Section 5.In Section 6 we show the results of fitting the emulators at the three FLAMINGO resolutions.We also discuss how the emulators can be used to better understand subgrid physics using parameter sweeps and we use the emulator to find models that skirt or exceed the observational allowed range for the cluster gas fractions and the SMF.Finally, we summarise our method, strategy and results in Section 7. In this work, 500 is defined as the radius within which the mean internal density is 500 times the critical density.The radius 500 also defines 500 , which is the mass inside 500 .
SIMULATIONS
The simulation methods and galaxy formation model are described in detail in Schaye et al. (2023).Here we will provide a summary of the most relevant aspects.We describe in more detail the subgrid prescriptions that we calibrate in this work, namely those for stellar feedback ( §2.1), the growth of supermassive black holes ( §2.2), and AGN feedback ( §2.3), and we will motivate the choice of priors for the subgrid parameters that are varied (these are listed in Table 2).
All simulations in this work use the open-source code Swift (Schaller et al. 2023).Swift is an N-body gravity and smooth particle hydrodynamics (SPH) solver that makes use of a fine-grained tasking framework and runs across multiple compute nodes using MPI.Gravity is solved using the Fast Multiple Method (Greengard & Rokhlin 1987).We use the sphenix SPH scheme (Borrow et al. 2022b) with a Wendland (1995) 2 kernel.Massive neutrinos are implemented into Swift via the method of Elbers et al. (2021).
Initial conditions are generated using a modified version of mono-fonIC (Hahn et al. 2021) that includes massive neutrinos.We use unperturbed initial conditions for the neutrino particles.We do not include large scale neutrino perturbations in the initial conditions, as these have a negligible effect in the small box sizes used for this work.We adopt the '3x2pt + all' cosmology from Abbott et al. (2022) (Ω m = 0.306, Ω b = 0.0486, 8 = 0.807, H 0 = 68.1, = 0.967) with a minimal neutrino mass of 0.06 eV.The particle masses and gravitational softening lengths corresponding to the three different resolutions that we will consider are listed in Table 1.
For simulations with volumes as large as FLAMINGO, it is currently impossible to resolve all the processes that are important for galaxy formation.Therefore, we make use of subgrid models.FLAMINGO builds upon the models of OWLS (Schaye et al. 2010), used for Cosmo-OWLS (Le Brun et al. 2014), BAHAMAS (Mc-Carthy et al. 2017), and EAGLE (Schaye et al. 2015), ported from the code gadget (Springel 2005) to Swift.
We use the radiative cooling tables from Ploeckinger & Schaye (2020), which are based on photo-ionisation models run with cloudy (Ferland et al. 2017) that include both the metagalactic and interstellar radiation fields, and that account for self-shielding, dust, and cosmic rays.
As we are unable to resolve the multiphase interstellar medium, we follow Schaye & Dalla Vecchia (2008) and impose a temperature floor.The pressure of gas with hydrogen number densities H > 10 −4 cm −3 and an overdensity greater than 100 is limited from below to / B = 800 K ( H /10 −4 cm −3 ) 4/3 , where B is the Boltzmann constant.
During the simulation gas particles can be stochastically converted into star particles following the description of Schaye & Dalla Vecchia (2008).Particles with total hydrogen number density 3 H > 10 −1 cm −3 , an overdensity > 10 and within 0.3 dex of the temperature floor are stochastically allowed to convert into stars with a probability given by the particle's star formation rate, where g is the gas particle mass, = 5/3 is the adiabatic index, and is the gravitational constant.The star formation rate is derived such that self-gravitating discs reproduce the observed Kennicutt-Schmidt relation (Kennicutt Jr. 1998;Kennicutt Jr. et al. 2007).We assume the gas fraction, g , is unity, = 1.515 × 10 −4 M ⊙ yr −1 pc −2 , and = 1.4.
For the low-resolution simulation we were forced to relax the star formation parameters, as the default prescription was unable to form enough stars, even in large haloes and without stellar feedback.For low resolution, all particles with density H > 10 −3 cm −3 , overdensity > 10 and temperature < 10 5 K are star forming.
Each stellar particle is treated as a simple stellar population with a Chabrier (2003) initial mass function (IMF).Following Wiersma et al. (2009), we model stellar mass loss and track the abundances of the individual elements H, He, C, N, O, Ne, Mg, Si, and Fe.We also include type Ia supernova with rates taken from Schaye et al. (2015).
Stellar feedback
Although we will often refer to stellar feedback as supernova feedback, it may also represent other sources of energy released by massive stars that are unresolved by our simulations such as stellar winds, radiation pressure or cosmic rays.
Stellar feedback is implemented kinetically.The energy budget is normalised to the expected kinetic energy from core collapse supernovae, assuming that each star with a mass between 8 and 100 M ⊙ injects 10 51 erg of kinetic energy into its surrounding medium.A fraction SN of this energy is assumed to be coupled to the ISM on scales resolved by the simulation and is used to kick neighbouring gas particles with a target velocity Δ SN .We use the method of Chaikin et al. (2022a) 4 to inject the kinetic energy in a statistically isotropic manner while ensuring that both momentum and energy are conserved.Note that if the relative velocities between the star and gas particles are nonzero, energy conservation results in differences between the actual and target kick velocities.Following Dalla Vecchia & Schaye (2008) and Richings & Schaye (2016), we inject the kinetic energy probabilistically during each time step after the star particle has formed.The probability that a star particle kicks a given SPH neighbour is where Δ SN denotes the amount of energy released by the star particle of age during a time step Δ and ngb is the total gas mass in the star particle's SPH kernel.The feedback efficiency, SN , and the target kick velocity Δ SN are the two stellar feedback parameters that are varied during the calibration.The effect of stellar feedback generally scales with SN , which sets the amount of energy that is injected.Based on the calibration of BAHAMAS (McCarthy et al. 2017) and after some experimentation with runs in which we varied only one parameter, we settled on prior ranges of 0.2 − 0.9 and 0 − 0.5 for high-and intermediate-resolution, respectively.The low-resolution simulations do not require any stellar feedback at all because of the strong suppression of star formation due to the limited resolution and because galaxies in the regime where stellar feedback dominates (stellar mass * ≪ 10 11 M ⊙ ) are only sampled by ≲ 10 stellar particles.
If the kick velocity is too small, then stellar feedback ceases to be effective because of excessive radiative losses caused by the toolow post-shock temperatures (the well-known numerical over-cooling problem, see Dalla Vecchia & Schaye 2012) and/or because the velocities are small compared to the escape velocities.The lower limits for Δ SN are 80 and 200 km s −1 for the high-and intermediateresolution simulations, respectively.Our additional tests showed that for lower velocities the kicks stopped having a significant effect.
If the kick velocity is too large, then the feedback becomes poorly sampled, thus limiting its effectiveness.Our aim is to calibrate the SMF down to masses corresponding to just a few stellar particles.The expectation value for the number of kicks imparted by a single stellar particle is given by Chaikin et al. (2022a) ⟨ kicks, SN ⟩ = 1.85 where we assumed the stellar and gas particles to have the same mass.Based on the above considerations and some small test runs, we limit the maximum kick velocity to 400 and 800 km s −1 for the highand intermediate-resolution simulations, respectively.This implies ⟨ kicks, SN ⟩ ≈ 2 and ⟨ kicks, SN ⟩ ≈ 0.4 for high-and intermediateresolution respectively.There should be at least four kicks for objects with 10 stellar particles at each resolution.
Black hole growth
Following Di Matteo et al. (2008) As we do not properly resolve dynamical friction at our resolution, BHs are repositioned by hand to the minimum of the gravitational potential following the method of Bahé et al. (2022) 5 .For BH mergers we also follow the prescription by Bahé et al. (2022).
Besides merging with other BHs, BHs grow via accretion of gas, which is assumed to occur at a modified Bondi-Hoyle rate, where BH is the BH mass, s is the sound speed of the gas, is the gas density, is the speed of light and BH is the velocity of the BH with respect to its environment.The factor is a boost factor that is added because we do not resolve the Bondi radius and because we lack the resolution to model the phase structure of the ISM.We use the parametrization of Booth & Schaye (2009), where H, * = 0.1 cm −3 , which corresponds to the density threshold for star formation in the intermediate-and high-resolution simulations (we use the same value for all resolutions).The logarithmic density slope BH is a free parameter that we vary during the calibration.After some experimentation using simulations where only a single parameter is varied between runs, we settled on priors of 0 − 0.9, 0.1 − 0.9 and 0 − 3 for high , intermediate and low resolution, respectively.
The gas accretion rate is capped at the Eddington (1913) rate.Following Bahé et al. (2022), the BH is allowed to 'nibble' on neighbouring gas particles until the gas particles only have half of their original mass remaining.
AGN feedback
In all but two of the simulations AGN feedback energy is injected into the medium surrounding the BH in thermal form using the prescription from Booth & Schaye (2009).The model used in the remaining simulations is based on jet feedback and is described in §2.3.1.While accreting gas, the BH adds a fraction r f = 0.015 of the accreted rest mass energy to an internal feedback energy reservoir, where r = 0.1 is the assumed radiative efficiency and f = 0.15 is the assumed AGN feedback efficiency, i.e. the fraction of the radiated energy that is coupled to the gas surrounding the BH.Once enough energy is available to increase the temperature of heat gas particles by Δ AGN , this energy is injected into the neighbouring gas particles.The energy injected in a single event is proportional to heat Δ AGN , where Δ AGN is the increase in temperature that is applied to heat neighbours.We find that it is the product heat Δ AGN that is most important for regulating how much gas is expelled from clusters, and that Δ AGN and heat are largely degenerate.We therefore fix heat to one and use Δ AGN as a free parameter that is varied in the calibration.Following the findings by Chaikin et al. (2022b), we inject the thermal energy into the nearest neighbour of the BH, which gives results that are nearly indistinguishable from a statistically isotropic approach.
To choose the prior for Δ AGN we take a similar approach as for the stellar feedback kick velocity.However, instead of avoiding velocities that are too low to have an effect, we now have to make sure that feedback raises the temperature to a value sufficiently high to avoid catastrophic numerical over-cooling.The sampling issue is also slightly different than for stellar feedback.While stellar feedback is limited to young stars, BHs can inject energy throughout their lives and hence the time sampling of these events becomes important.If the time between AGN feedback events becomes too long, then the BHs will be unable to self-regulate.If BHs cannot regulate their growth, then this can lead to an unrealistic mass distribution of both the BHs and their host galaxies.To summarise, we have two main considerations: (i) What is the Δ AGN below which radiative losses are already severe at injection for the densities at which stars form?
(ii) What is the Δ AGN above which the time between AGN events becomes longer than the BH growth time?Dalla Vecchia & Schaye (2012) demonstrated that the density above which thermal feedback becomes ineffective can be predicted based on the ratio of the radiative cooling time, which depends on the density and temperature, and the sound crossing time across a resolution element, which depends on the numerical resolution.According to their equation 18, feedback becomes inefficient for densities exceeding H, = 0.25 cm −3 Δ AGN 10 7.5 K 3/2 Comparing this to our threshold for star formation ( H = 10 −1 cm −3 for intermediate/high resolution and 10 −3 cm −3 for low resolution), yields minimum values of log 10 Δ AGN /K = 6.9, 7.2, and 6.2 for the high, intermediate, and low resolution, respectively.However, the above equation assumes radiative losses to be dominated by Bremsstrahlung and Dalla Vecchia & Schaye (2012) showed that it underestimates the radiative losses for Δ AGN < 10 7 K.For this reason we do not consider values below 10 7 K.On the other hand, since we inject the energy at the end of the time step, the feedback can do work during a single time step even if the temperature is too low to avoid overcooling, which means that somewhat lower values than implied by the above equation (but still higher than 10 7 K) may still be of interest.
If we define Δ BH to be the gas mass that must be accreted for the BH to have sufficient energy to heat a single gas particle, then the ratio of the time between AGN feedback events and the time of BH growth is given by (Booth & Schaye 2009), where = 5/3 is the ratio of specific heats and = 0.6 is the mean particle mass in units of the proton mass H .Given that we expect to need AGN feedback to quench star formation in galaxies with stellar mass * ≳ 10 11 M ⊙ and that in this mass range BHs are observed to have masses BH ∼ 10 −3 * (Häring & Rix 2004), we need the BHs to become self-regulating when BH ≪ 10 8 M ⊙ .The condition AGN < BH then implies that for our heat = 1 we require Δ AGN ≲ 10 8.5 K for intermediate resolution, and values 8 times higher (lower) for high (low) resolution.
Based on the above considerations and some small test runs, we adopted the flat priors log 10 Δ AGN /K = 7.7 − 8.9, 7.5 − 8.5, and 7.0 − 9.5 for high, intermediate and low resolution, respectively.For both intermediate and high resolution the prior ranges are smaller than what is possible based on our considerations.From our test runs we found that these ranges bracket a sufficiently large range in the observables we are interested in and the smaller ranges lead to slightly better sampling of the parameter space around the best-fitting model.For low resolution the prior extends to (unnecessarily) high values, but we will see that the best-fitting value is actually similar to those for the other resolutions.We can afford a larger prior range for the low resolution simulations as we are only sampling two parameters.
Jet feedback
In addition to the fully thermal AGN feedback scheme described above, we also calibrate a kinetic AGN feedback variation.The model used for kinetic AGN feedback is based on the spin-driven jet feedback model described by Huško et al. (2022), implemented into swift.In this model energy is injected by kicking two particles on opposite sides of the BH, according to its angular momentum vector.The angular momentum of the BH is calculated in a subgrid model for an accretion disc that is based on general relativistic magneto-hydrodynamics simulations of single BHs in the low accretion regime (< 0.01 Eddington).For more details see Huško et al. (2022).The spin from black holes that remains after mergers is computed according to the description by Rezzolla et al. (2008).
Due to the relatively low resolutions used for FLAMINGO, we make some simplifications to the complete model.As we intend for the jet model to be maximally different from the thermal feedback mode, we do not switch from kinetic to thermal feedback at high Eddington rates, and instead use the kinetic feedback at all accretion rates.Instead of using the efficiencies based on the subgrid accretion model, we fix the jet efficiency to = 0.015.This efficiency is equal to the combined coupling and radiative efficiency, f r , for the thermal mode feedback.This implies that for each unit of mass accreted by the BH, the same amount of energy becomes available in the jet model as for the fiducial thermal model.While we do not use a spin-dependent feedback efficiency, we do still use the subgrid model to track the angular momentum vector of the BH and use it to select the direction in which gas particles are kicked.The BH accretion model is identical to that described in §2.2, and for calibration of the jet model we vary the boost factor BH .
When the BH has accreted enough mass, two neighbouring gas particles are kicked with a total kinetic energy equal to where jet is the target jet velocity (we use the term target because it is the energy that is fixed, similarly to the supernova kicks, see §2.1), which is a free parameter that we calibrate.The jet velocity plays a role similar to Δ AGN for the case of thermal feedback.As the energy is injected in kinetic form, the model is less affected by thermal losses, but picking velocities that are too low will make the gas unable to escape to large distances (see Huško et al. 2022).For very high values we again run into sampling issues.Based on these considerations and some initial tests, we use flat priors over the range of jet /(km s −1 ) = 10 2.7 − 10 3.5 , corresponding in energy to Δ AGN /K ≈ 10 7.1 − 10 8.7 .We only calibrate this model at intermediate resolution.
OBSERVATIONAL DATA AND BIASES
Before we can start to calibrate our simulations, we need to have observational data to compare with our simulations.We calibrate to the galaxy stellar mass function (SMF) and the gas fractions in groups and clusters ( gas,500c ( 500 )).
One of the goals of the FLAMINGO simulations is to predict galaxy clustering and cross correlations between galaxies and other tracers of the matter distribution.The SMF allows us to constrain the stellar content of haloes as a function of their mass.This is not only crucial for the prediction of observations using galaxies, the stellar mass also directly affects the distribution of dark matter in haloes, and the orbits of subhaloes.Although matching the SMF does not ensure that each halo contains the correct stellar mass, it suggests the relation is at least statistically plausible provided the model assumes the correct cosmology.
Besides galaxy clustering, we also wish to use FLAMINGO to investigate other cosmological observables tracing the distribution of matter, such as X-ray emission, the Sunyaev-Zeldovich (SZ) effect and lensing maps.From studies by Semboloni et al. (2013), Van Daalen et al. (2020) and Salcido et al. (2023) we know that the gas fractions in clusters have a large impact on the matter power spectra on scales relevant for e.g.cosmic shear.By calibrating to the observed gas fractions, we can also make robust predictions for the distribution of gas expelled from group/cluster cores.
We calibrate to the same observables as were used for the BA-HAMAS simulation (McCarthy et al. 2017(McCarthy et al. , 2018)).In this section we will discuss the data that we considered and the observational biases that we account for.
The galaxy stellar mass function
Constraining the SMF has been the goal of a large number of studies, many of which are based on the SDSS (Li & White 2009;D'Souza et al. 2015;Bernardi et al. 2013Bernardi et al. , 2017) ) or the more recent GAMA survey (Baldry et al. 2012;Wright et al. 2017;Driver et al. 2022).A compilation of these data sets is shown in the left panel of Fig. 1.It is clear that there are substantial systematic differences between some of the different groups that have tried to measure the SMF, particularly at the low-and high-mass ends.However, some of the most significant outliers are older results.While there are still discrepancies at the high-mass end, the results from the three most recent studies, D 'Souza et al. (2015); Bernardi et al. (2017); Driver et al. (2022), are in reasonable agreement over a large part of the mass range.Instead of trying to combine different data sets, we limit the fitted mass range to * < 10 11.5 M ⊙ and we choose to use the most recent GAMA result from Driver et al. (2022) at = 0.Not only is this the most recent study, it also provides a useful prior for possible biasing due to cosmic variance.The upper mass limit also decreases the possible bias we get due to our choice of simulation aperture (see §4.2 and Appendix A for more details).We always set a simulationresolution dependent lower mass limit on the mass range we use for fitting.The mass ranges we use can be found in Table 3.
Fitting the SMFs from simulations to observations requires special On the left we plot the SMF.On the right we plot the cluster gas fraction versus total mass, both measured at 500 .Where available we display the 1 measurement errors, which do not include intrinsic scatter.The X-ray data are binned from a compilation of available data, see §3.2.1, except the lowest mass point, which is obtained from a fit by Lovisari et al. (2015).We show the individual clusters as black dots.
Note that the X-ray data are plotted without any correction for the hydrostatic mass bias.For this work we use the Driver et al. (2022) data for the SMF, and the X-ray and Akino et al. (2022) data for the gas fractions.
Table 3. Mass ranges used for each observable when fitting the emulator to data.The values are rounded because the exact ranges vary with the values of the observational bias factors.
Observable SMF * lower limit M ⊙ ) SMF * upper limit (M ⊙ ) gas,500c 500 lower limit (M ⊙ ) gas,500c 500 upper limit (M ⊙ ) High-res [m8] 10 8.67 10 11.50 10 13.50 10 13.73 Intermediate-res [m9] 10 9.92 10 11.50 10 13.50 10 14.36 Low-res [m10] 10 11.17 10 11.50 10 13.50 10 14.53 care.There are some important differences/sources of uncertainty that need to be taken into account: (i) Observations suffer from random errors in measuring the mass.while simulations have no mass measurement errors (at least for a fixed definition of a galaxy, i.e. for a given subhalo finder).Simulations do suffer from randomness errors (see Borrow et al. 2022a), as discussed by these authors, this issue is negligible for our analysis because we consider large ensembles of galaxies..
(ii) Observations possibly suffer from systematic errors, which may originate from spectral energy distribution fitting, corrections for dust extinction, surface brightness profile fitting, and/or selection effects.
(iii) Observations may suffer from cosmic variance.
Before discussing how we take each of these effects into account, we note that the uncertainty in the stellar IMF is not directly relevant because the observational analysis and the simulations use the same IMF.The observed SMF also depends on the assumed cosmology, but this is close enough to the one used in the simulations to have a negligible effect on the comparison.
Random errors on the observed stellar mass
Symmetric observational scatter in the measured stellar mass will cause a systematic shift in the inferred SMF.Because there are more galaxies in lower mass bins, it is more likely for galaxies to scatter to a higher mass bin than to a lower mass bin.This is especially important at the high-mass end, where the SMF is steep.This effect is known as Eddington (1913) bias.We account for it by adding scatter to the simulation masses.We adopt the lognormal scatter from Behroozi et al. (2019), which has a redshift-dependent standard deviation of where we sample the lognormal distribution for each galaxy.This then adds an Eddington-like bias to the simulation results, consistent with observations.
Systematic errors in the observed stellar mass
There are systematic discrepancies between the different observations.The reason for this is mostly found in the stellar population synthesis and dust correction models used, as the observed luminosity functions agree better between different studies than the mass functions.However, at the FLAMINGO resolution, the stellar masses can be predicted much more accurately than the star formation histories, current-day star formation rates and dust extinction rates.Therefore, calibration to the SMF is preferable over a direct comparison with the luminosity function.
To account for potential systematic shifts in the observed stellar masses, we include a stellar mass bias parameter log 10 ( * ,obs ) → log 10 ( * ,obs ) + log 10 * , where the bias * is assumed to be independent of mass.Note that the sign is defined such that a positive stellar mass bias implies the observations underestimate the true stellar mass.We use a lognormal prior to constrain the bias parameter.The prior is taken from Behroozi et al. (2019) (their eq.25) and is based on the existing tensions between observed time-integrated star formation rates and observed SMFs, where N (, ) is a normal distribution with mean and standard deviation .
We adopt a mass-independent bias.While a mass-dependent bias might have improved the agreement between the data and the simulations, the mass dependence is unknown and therefore there is no obvious parametrization of the mass dependence.This implies the new free parameters would have no clear priors.Additionally, we note that our decision not to fit above a stellar mass of 10 11.5 M ⊙ has a similar effect as switching to a much higher stellar mass bias above this mass.
Cosmic variance
Driver & Robotham (2010) showed that the error on the SMF due to cosmic variance can be 5 − 10 per cent for surveys like GAMA and the SDSS, depending on the volume considered.Cosmic variance can bias the number density measurements, because the survey may consist of slightly over-or under-dense regions.For our mass range we assume that this effect is independent of mass (S.P. Driver, private communication).To account for cosmic variance, we allow the observed number densities to shift up and down slightly, Note that the sign is defined such that a positive cosmic variance bias implies the observations underestimate the number density of galaxies.We constrain this bias parameter with a Gaussian prior taken from Driver et al. (2022).They estimate the error due to cosmic variance to be about 6 per cent, so our prior is given by
The cluster gas mass fractions
Data for the cluster gas mass fractions, gas,500c , come in two varieties.They are either obtained purely from X-ray observations, or from a combination of X-ray and weak gravitational lensing observations where the latter are used to measure the total cluster mass.
For the X-ray only data, the density and temperature profiles fitted to the observations are used to measure the total mass assuming the gas is in hydrostatic equilibrium (HSE).In both cases the gas mass is obtained by integrating the density profile measured from X-ray observations out to the measured value of 500 .Table 4 summarises all the different sets of data that we use.
As was the case for the SMF, there are biases that we need to account for when we compare observations with simulations.There are four distinct issues that we take into account: (i) At the low-mass end selection effects become important, because at fixed halo mass objects with a higher gas content will tend to emit more X-ray radiation.Any X-ray selected sample may therefore have gas fractions that are biased high, particularly at low masses.
(ii) The measurement of total mass from X-ray data under the assumption of HSE is well documented to be biased low (e.g.Hoekstra et al. 2015;Eckert et al. 2016;Smith et al. 2016).
(iii) For the weak lensing data, we make use of the fits of the relation between gas fraction and mass provided by the authors.The fits are preferred to individual measurements as the fits account for the selection function of the sample.However, for our purposes the fits need to be sampled at particular masses.This needs to be done in a way that limits the covariance between the samples and that is representative of the data used (i.e.no extrapolation).
(iv) As clusters are rare objects they are usually observed over a large redshift range.Furthermore, because weak lensing is most efficient when the lens is halfway between the observer and the background galaxies, weak lensing observations tend to probe higher redshifts than X-ray data.Clusters evolve over time, so we need to make sure the simulation samples are representative for the observational samples we compare them with.
For the cluster gas fractions the largest mass we can fit for is limited by the box size of each simulation.The upper mass limit used for fitting therefore changes with resolution (as we use a different box size for each resolution).The upper limits can be found in Table 3.
X-ray data
The first set of gas fraction data we describe is the X-ray (or HSE) data.For each data set we store 500 and gas,500c , with asymmetric errors where available, and correct the data to the FLAMINGO cosmology ( 500 ∝ ℎ −1 , gas,500c ∝ ℎ −1.5 ).The combined data set has 581 objects but contains duplicates.For each object that appears more than once we calculate a new data point by taking an unweighted mean of the different measurements.The mean is taken in both 500 and gas,500c .Because the duplicates are often based on (in part) the same data, the errors will not be independent and we combine them via where is the number of times a single object appears in the set.This leaves us with 533 objects.Note that we do not use the errors for the re-binning, as we make use of bootstrap re-sampling to compute the errors.We need to consider redshift evolution.The emulators will be trained on simulation snapshots corresponding to a single redshift.Imposing a redshift cut of < 0.25 causes the median redshift of the X-ray sample to become 0.1, thus allowing us to compare with simulation snapshots at = 0.1.The redshift cut reduces the sample to 310 objects.The individual masses and gas fractions are shown as black dots in Fig. 1.
We combine the X-ray measurements by computing the median gas fraction in eight logarithmically spaced hydrostatic mass bins between 10 13.8 and 10 15.0 M ⊙ .For each bin, the error on the median is obtained by taking the difference between the median and the 16th−84th percentiles obtained from bootstrap resampling the objects.This gives us asymmetric errors around the median.As our Table 4. Overview of the cluster gas mass fraction data used for this work.The first column lists the reference from which the data were obtained, the second column lists the number of objects, where 'fit' indicates that the main result is a fitted relation between 500 and gas,500 , the third column shows how the total mass was measured (HSE: X-ray data assuming hydrostatic equilibrium; WL: weak gravitational lensing), and the final column contains comments on the selection method.
We account for hydrostatic mass bias by adding a constant bias term to the HSE masses, log 10 500 = log 10 500,HSE − log 10 ( HSE ). ( Note that values HSE < 1 imply that the hydrostatic mass estimate underestimates the true mass.We neglect the effect of hydrostatic bias on the gas fraction because it is comparatively small (McCarthy et al. 2017).This is because both the total and gas mass increase with increasing 500c .The measured gas fraction will differ only at the level of the change in cumulative gas fraction between the true and biased 500c .This is expected to cause only mild changes in the gas fraction (see e.g.fig.6 of Velliscig et al. 2014).Before calculating the median that we compare with the simulation we thus adjust all the observed HSE masses.By combining both X-ray and weak lensing observations, we can constrain the hydrostatic bias.However, we found that our compilation of data on its own is not constraining enough without the use of a prior.To define our prior, we take the values 0.72 ± 0.08 from Eckert et al. (2016) and 0.76 ± 0.06 from Hoekstra et al. (2015) and combine the two to obtain the Gaussian prior HSE = N (0.74, 0.10).( 18) Eckert et al. (2016) and Hoekstra et al. (2015) estimate the hydrostatic mass bias by directly comparing the masses they obtain from weak lensing and from X-rays.
Weak lensing data
We complement the X-ray data with the latest HSC-XXL weak gravitational lensing data from Akino et al. (2022).Higher-mass data from Mulroy et al. (2019) and Hoekstra et al. (2015) are available and plotted in Fig. 1, but the box size used for our calibration runs is too small to make use of them.To compare with the weak lensing data, we make use of the power-law fits to the relation between the gas fraction and mass given by the authors.These fits take selection effects into account.Because the power-law fits have two free parameters, sampling them at more than two masses would result in strong covariance between the sampled points.We therefore use the fit to create two data points that are spaced equally far from the pivot used by the authors.This gives us gas,500 ( 500 = 10 13.5 M ⊙ ) = 0.054 ± 0.010 and gas,500 ( 500 = 10 14.5 M ⊙ ) = 0.106 ± 0.023.Due to the limited box size, we use only the lower, 500 = 10 13.5 M ⊙ , point for fitting high-and intermediate-resolution simulations.For low resolution we are able to include the second 500 = 10 14.5 M ⊙ point.
The median redshift of the HSC-XXL sample is = 0.3.We therefore construct a separate emulator for gas,500c at = 0.3, which we use to fit the weak lensing data.The fits make use of self-similar scaling to move the different clusters to the same redshift, so we could have corrected them to the redshift = 0.1 used for the X-ray data.However, we prefer to use a redshift close to that of the actual sample, to minimize the size of the correction.Akino et al. (2022) give both the weak lensing inferred and the true 500 , as they correct for the expected bias on the weak lensing inferred 500 .We make use of their calibrated true 500 masses.
EMULATOR CONSTRUCTION
Cosmological hydrodynamical simulations are too expensive to be run for each step in an MCMC chain used to evaluate likelihoods.In order to use simulation outputs in MCMC methods, we therefore make use of emulators trained on a set of simulations.Emulators are used to interpolate results in the parameter space between training simulations.They are able to predict the output of the simulations as a continuous function of the input parameters, in a fraction of the original computation time.This method has previously been applied to the matter power spectrum (e.g.Heitmann et al. 2009Heitmann et al. , 2016b;;Euclid Collaboration et al. 2019;Angulo et al. 2021) and to baryonic observables (e.g.Oh et al. 2022;Jo et al. 2023).By using emulators, we can interpolate between the results of a set of training simulations and obtain a fully continuous prediction of how the simulation responds to changes in subgrid parameters.
Training sets
The first step in setting up the emulator is to create a training set.In our training set we want to vary those subgrid parameters that we know are important for the calibration.As discussed in Section 2, for the intermediate-and high-resolution simulations we vary the following four parameters: the stellar feedback efficiency, SN , the target kick velocity for stellar feedback, Δ SN , the power-law slope of the density dependence of the black hole accretion boost factor, BH , and the AGN heating temperature, Δ AGN ( jet , the target kick velocity for AGN feedback in the jet model).For the low-resolution simulations we do not require stellar feedback and therefore vary only the last two parameters.The ranges over which the parameters are varied are motivated in Section 2 and listed in Table 2 (Table C1 for the jet model).
To optimise the parameter space, we make use of a Latin hypercube, first proposed by McKay et al. (1979).To set up a Latin hypercube with sims nodes, we start with an ordered list of sims independent samples along every dimension of the hypercube, where the number of dimensions equals the number of subgrid parameters that are varied.These samples are then combined and shuffled to create a set of sims points that are distributed uniformly within the hypercube, where in our case = ( SN , log 10 Δ SN , BH , log 10 Δ AGN ) for intermediate and high resolution, and = ( BH , log 10 Δ AGN ) for low resolution.Our criterion for optimising the sampling is the 'maximin' approach, which maximises the minimum distance that sampled points are away from each other.An in depth explanation of how the method works is provided by Heitmann et al. (2009).We apply to each sample a random shift of at most half the average spacing between samples.We then run the sims simulations corresponding to the nodes of the Latin hypercube.
We use the public package swiftemulator 6 (Kugel & Borrow 2022), built on the package george (Ambikasaran et al. 2015) set up the Latin hypercube as well as to train and test the emulators.swiftemulator streamlines the emulation process for results obtained from Swift runs.Within swiftemulator we use the Latin hypercube generator from pyDOE (Baudin et al. 2012).
We use sims = 32.The sampling of parameter space provided by the Latin hypercube used for intermediate resolution is shown in Fig. 2. The box sizes used for the training are (100 Mpc) 3 , (200 Mpc) 3 and (400 Mpc) 3 for high, intermediate, and low resolution, respectively.The volume is a compromise between computational cost and the maximum mass for which we train the emulator.Each run cost ∼ 800, ∼ 1300 and ∼ 1600 cpu hours for low, intermediate and high resolution respectively.Using single simulations with an eight times larger volume at each resolution and with the results of Schaye et al. (2023), we have verified that these box sizes are sufficiently large for box size effects to be negligible with respect to the production runs.
Obtaining the required simulation output
From our simulation we take three snapshots at = 0, 0.1 and 0.3.For each snapshot we find haloes and subhaloes using VELOCIraptor (Elahi et al. 2019;Cañas et al. 2019).After an initial friends of friends group search it uses the full 6-D phase space information to disentangle the central and satellite subhaloes.
One of the difficulties of comparing with data, is that we have to choose how to define the edge of simulated galaxies.Observed cluster gas mass fractions are measured within 500 .For the stellar masses needed to compute the SMF, the situation is less clear.Ideally, we would create mock observations, fit them with Sérsic profiles and integrate these to obtain stellar masses, which is the procedure adopted by observational studies.This was recently done for the EAGLE simulation by De Graaff et al. (2022).However, the resolution of the FLAMINGO simulations is too limited to mimic the observational strategy.As shown by Schaye et al. (2023), FLAMINGO signif-icantly overestimates the sizes of low-intermediate mass galaxies, which means we cannot create realistic virtual galaxy observations.Based on the findings of De Graaff et al. ( 2022), we choose to calibrate the SMF using a 3D aperture with a radius of 50 kpc for the simulations.A comparison between different choices of aperture can be found in Appendix A, where we show that the aperture becomes only important above a stellar mass of ≈ 10 11 M ⊙ .
Before computing the galaxy SMF, we first add random errors to the simulation stellar masses as described in §3.1.1.The SMF is then sampled in 25 logarithmically spaced mass bins between 10 9 M ⊙ and 2 × 10 12 M ⊙ for intermediate-and low-resolution simulations, and 40 bins between 10 8 M ⊙ and 2 × 10 13 M ⊙ for high-resolution simulations.We choose to use a finer binning than is available for the observational data to allow the emulator to capture the finer features of the predicted SMF.Tests with different binning strategies show this had no effect on the results.We have enough galaxies across the fitted mass range for the Poisson errors to still be very small even with finer binning.The uncertainty we provide to the emulator is the Poisson error for each bin.
For the gas fraction we instead opt for an adaptive binning strategy.While the simulation volumes used for the calibration are large enough to constrain the SMF over the adopted mass range, at the high cluster mass end, we always run out of clusters before we run out of data to compare with.For all resolutions we use 20 bins between 500 of 10 13 and 10 15 M ⊙ although we never manage to make use of this entire range.As the higher mass bins start to run out of objects, we allow the highest mass bin to stretch to include a sufficient number of objects.We require each bin to contain at least ten objects.We also limit the stretching of the bin to half the original bin width.The uncertainties we provide to the emulator are based on the 16th−84th percentiles.As the emulator only takes symmetrical errors, we take mean of the absolute difference between the median and 16th percentile and the difference between the median and 84th percentile.For both the SMF and the cluster gas fraction we discard any empty bins.
Training using Gaussian processes
After measuring the SMF and cluster gas fraction for each node of the hypercube, we can train an emulator for each observable.Because each individual node of the Latin hypercube requires a cosmological hydro simulation, we are operating in a regime where we have a limited number of samples.We also know a priori that the observables we want to emulate (i.e., the galaxy number density and group and cluster gas fractions) vary smoothly with mass and with the values of the subgrid parameters.Both these properties are in the regime in which Gaussian processes give excellent predictive power with respect to the input data (see e.g.Rasmussen et al. 2004;Rasmussen & Williams 2006).
We set up a different Gaussian process for each relation we emulate.We combine the mass (either stellar or 500 ) and subgrid parameters into a single input data vector x = (log 10 , ), from which the emulator then predicts the dependent quantity, which is either the number density of galaxies, ( * ), or the gas fraction, gas,500c .Each emulator thus has + 1 parameters, where is the number of subgrid parameters that are varied.In order to limit the dynamic range, we transformed many of the inputs to log-space.This includes the masses (aperture stellar mass or 500 ), the values of the SMF and the two subgrid parameters that are sampled in log-space (Δ SN and Δ AGN ).This is an important step as it greatly increases the smoothness of the emulated relations, making it much easier for the emulator to give accurate predictions.As the input relations are smooth over the range we are interested in, we do not require any other transformations of the input.We feed the data directly into the Gaussian process.We use a squared exponential kernel where represents a diagonal matrix containing the hyperparameters that set the scale for each input parameter, and x and x' are two positions in parameter space.The hyperparameters are optimised based on maximising the marginal likelihood (see Rasmussen & Williams 2006).As we train a separate Gaussian process for each relation, we also have a separate set of hyperparameters for each relation.We have verified the posteriors of the hyperparameters to ensure that the values we use are well converged.
Error estimation
It is important to verify that the emulator is able to give accurate results before we use it to find best-fitting subgrid and bias parameters.Moreover, we need to quantify the accuracy of the emulator because we will account for emulation errors when fitting to data.The best way to measure the uncertainty in the emulator predictions is to perform test simulations that span the emulated parameter space.However, this implies that we would need to run many additional simulations.To save time, we choose instead to measure the uncertainty by making use of k-fold cross-validation, which we will refer to as cross-checks.
We create sims new data sets, where sims is the number of nodes in our Latin hypercube (32 in our case).For each of these data sets we take out one simulation and retrain the emulator on the reduced set of sims − 1 samples.We then test how accurately the emulator is able to predict the simulation that was left out.We do this by taking the ratio between the result from the run that was left out, and the prediction of the emulator for the parameter values of the left-out run.This gives us a value for each mass bin in the training data.We combine the ratios for all mass bins and sims emulators into a single list and compute the standard deviation, crosscheck .The error on the emulator prediction, emu , is then given by where (, ) is the value predicted by the emulator for mass and at parameter values .The result of the cross checks for the Latin hypercube of intermediate-resolution simulations can be seen in Fig. 3.It is important to note that cross checks are a conservative method to estimate the uncertainty.The input for cross-checks is uniformly sampled, implying that a significant fraction of the test points is located near the boundaries of the parameter space, where a Gaussian process is naturally less accurate.
From Fig. 3 it is clear that our emulators do not suffer from significant systematic errors for our three calibration targets, the = 0 SMF, = 0.1 X-ray cluster gas fractions, and = 0.3 weak lensing cluster gas fractions.There are no significant trends with mass, and the medians ratio is centered close to one, which corresponds to an error of zero.
It is clear that the emulator for the SMF is more accurate than the emulators for the gas fractions.This is a reflection of the way we constrain the input simulations.In the case of the SMF, the errors on the input are Poisson errors, which are quite small for our simulation volumes in the mass range we are interested in.The gas errors are based on the 16th−84th percentiles of the simulated gas fractions in each mass bin, which can be larger than the 5 per cent accuracy that the emulator attains.2022), for the = 0.1 gas fractions we combine the error from the X-ray data with the error due to hydrostatic bias and for the z=0.3 gas fraction we show the error on the weak lensing data by Akino et al. (2022).The emulator predictions are accurate enough to predict to simulation output within the observed constraints Table 6.Accuracy of the emulators, crosscheck , for the three different simulation resolutions and the jet model AGN variation, in percentages.The values are obtained by taking the standard deviation of the ratio between the result from the simulation omitted from the Latin hypercube and the prediction from the emulator trained on all but that simulation.
Calibration target High Intermediate Low Jet log 10 SMF 2.7 2.2 1.5 1.9 gas,=0.1 8.9 7.5 4.8 7.1 gas,=0.37.9 6.7 4.2 6.1 The emulator accuracy for all resolutions can be found in Table 6.The emulators become more accurate going to lower resolution.There are several possible reasons for this trend.First, we used larger box sizes for the lower-resolution simulations, so the uncertainty intrinsic to the simulation is smaller at fixed mass.Second, we used a slightly larger parameter range for high resolution than for intermediate resolution, while for low resolution we only used two parameters, greatly reducing the sampled space.
The obtained accuracy is sufficient, as it is higher than the observational scatter/uncertainty.Any deviations between the model and the data at the level of the emulator error would still be consistent with the observational constraints, especially as we allow for observational biases in our analysis.
USING THE EMULATOR FOR PARAMETER ESTIMATION
To use the emulator as the model that we compare with observational data, we need a way to optimise the subgrid parameters (see Section 2) and, optionally, the observational bias factors log 10 * , CV , and HSE (see Section 3).
For parameter optimisation we use the Markov chain Monte Carlo (MCMC) package emcee (Foreman-Mackey et al. 2013).We use the ensemble sampler, which we give our posterior likelihood.For every fit we have done using MCMC, we have varied the number of walkers and steps to ensure the resulting values are converged.We discard the first 500 steps of each chain to avoid systematic errors due to the burn-in phase.
To evaluate the goodness of fit of an emulator prediction to the observations, we first define the log likelihood for a single observed mass bin.For the SMF this is given by ln P SMF ( where HSE is an observational bias factor due to the assumption of hydrostatic equilibrium that was discussed in §3.2.For gas fractions measured from weak lensing plus X-ray observations the log likelihood definition is identical except that we assume the masses are unbiased, implying HSE = 1 (see e.g.Becker & Kravtsov 2011;Bahé et al. 2012).Note that for the likelihood of both the SMF and the cluster gas fraction we include a variance term to account for the error on the emulator prediction.This is added to avoid situations where we over-fit with respect to the uncertainty from the emulator alone.
The likelihood for the observational data is a combination of the likelihoods of the individual mass bins of the three data sets where SMF , HSE and WL are the number of (re-binned) observational data points (i.e.mass bins) for the SMF, the X-ray cluster gas fraction and the weak lensing cluster gas fraction, respectively.The values of depend on the fitted mass ranges (Table 3) and vary with resolution.We normalise each likelihood by the number of data points to ensure each separate likelihood is not directly dependent on the number of bins used.Furthermore, we average the likelihoods from the two types of cluster gas fraction data to ensure that the cluster gas fraction and SMF data carry equal weight.In an unweighted fit, the SMF would drive the results, because it is much better constrained.As the baryon fractions are the main driver of the baryonic suppression of the matter power spectrum (see e.g.Van Daalen et al. 2011, 2020;Debackere et al. 2020;Schneider et al. 2020;Salcido et al. 2023), we choose to give the gas fractions equal weight in our analysis.
We then combine the different likelihoods into a single posterior, log P posterior = log P likelihood + log P prior , where the total prior is log P prior = log P bias ( * ) + log P bias ( cv ) + log P bias ( HSE ) + log P subgrid (), P bias are our priors for the observational bias factors, and P subgrid is our combined prior for the subgrid parameters in that we wish to calibrate.For the subgrid parameters, we use flat priors that do not extend beyond the ranges used for the Latin hypercube (see Table 2) in order to avoid extrapolations.The priors on the bias factors were discussed in Section 3. We also calculate the reduced 2 for some of our models.We define the reduced where is the number of sub-grid and bias parameters used for the fit.
RESULTS
In this section we will describe the main results from our calibration approach.We use the emulators to perform parameter sweeps in §6.1, then we discuss the fitting results, first at intermediate resolution in §6.2 and then at the other resolutions in §6.3, and finally we discuss how we use the emulator to set up two AGN feedback variations in §6.4.
Parameter sweeps
Emulators can be used to investigate the effect of individual parameters via parameter sweeps, where the emulator predicts the effect of varying a single parameter over the range used for the Latin hypercube, while keeping all other parameters fixed to their best-fitting values.Parameter sweeps can give valuable insight into the importance of particular physical processes and prevent calibration through emulation from becoming a black box.The result of the subgrid parameter sweeps for our intermediate resolution runs are shown in Fig. 4. Looking at the response of the calibration targets, it is clear that the different parameters have distinct effects, indicating that the fits will not have any strong degeneracies between the varied subgrid parameters.
Increasing the slope of the black hole accretion rate boost factor suppresses the high-mass end of the SMF, but has almost no effect on the low-mass end and the cluster gas fractions.Increasing the AGN temperature jump leads to a mild reduction of the high-mass SMF, but a strong decrease of the cluster gas fractions.The effects of increasing the stellar feedback energy and kick velocity are more similar.In both cases the stellar masses are decreased, leading to a mass-dependent stretching of the SMF towards lower masses.Depending on the galaxy mass, the SMF can either increase or decrease, though the effect is small for the high-mass end.Cluster gas fractions decrease when either of the stellar feedback parameters increases, presumably because the stronger stellar feedback suppresses black hole growth and hence AGN feedback (Bower et al. 2017).
The best-fitting intermediate-resolution model
The best-fitting (i.e.maximum likelihood) values of the subgrid and observational bias parameters can be found in Tables 2 and 7, respectively.These tables also list the medians and 16 − 84 per cent confidence levels of the posterior distributions.2).The left and right columns show the galaxy stellar mass function and cluster gas fractions, respectively.In each row a single subgrid parameter is varied across the allowed range.From top to bottom we vary the slope of the black hole accretion rate boost factor slope, the AGN heating temperature, the stellar feedback energy, and the stellar feedback kick velocity.The grey regions indicate the mass ranges that are excluded for fitting (see also Table 3).Parameter sweeps help gain insight into how changes in subgrid model parameters map onto observables.
The posteriors for the subgrid and bias parameters resulting from fitting the emulator predictions for intermediate-resolution simulations to the data are shown in Fig. 5.The first thing to note is that the maximum likelihood model (solid, red circle) lies comfortably within the 68 per cent confidence intervals (inner contour) for each parameter and that it does not lie close to an edge of the parameter space.The chosen parameter ranges, i.e. the imposed priors, are thus sufficiently large for the models to bracket the target data and they do not drive the results.
It is also clear that there are no strong degeneracies between any of the subgrid parameters or between any of the bias parameters.The absence of strongly degenerate subgrid parameters is partially by construction, because we chose to fix some of the parameters that would otherwise have caused the results to become degenerate (e.g.
Table 7. Results from the fitting for the observational bias factors.The second column shows the median and 16th and 84th percentiles, the third column lists the maximum likelihood value which we denote as the best-fitting.
Bias
Median+CL heat and Δ AGN , see §2.3).There is, however, significant degeneracy between the slope of the density dependence of the black hole accretion boost factor ( BH ) and the stellar mass bias ( * ).These two parameters are anti-correlated.Increasing the bias shifts the observed SMF towards higher masses, which means the black hole boost factor needs to decrease to allow more stars to form in high-mass galaxies, whose growth is controlled by AGN feedback.The best-fitting values for the galaxy mass and cosmic variance biases are log 10 * = 0.026 and CV = 0.995, respectively.The fitted hydrostatic bias, HSE = 0.743, enables the model cluster gas fractions to agree simultaneously with the Akino et al. (2022) weak lensing data and the compilation of X-ray data.For all the bias values we find posteriors that are in agreement with the priors, so we conclude that our fitting does not put any significant additional constraints on the bias parameters.
The best-fitting emulator predictions for intermediate resolution are compared with the data in the middle row of Fig. 6, which also shows the result of a (200 Mpc) 3 simulation run with the best-fitting subgrid parameter values (i.e.our fiducial model).The left and right panels show the SMF and cluster gas fractions, respectively.The gas fractions are shown for both the redshift of the X-ray data, = 0.1 (light blue line and dark blue data points), and the redshift of the weak lensing data, = 0.3 (purple line and dark purple data points).Grey regions and dotted line styles indicate mass ranges that were excluded from the fit.The ranges can be found in Table 3.Note that the fitted bias factors have been used to shift the data.We obtain good agreement with the fitted observations with a reduced 2 = 1.23 for the combined fit to the SMF and the cluster gas fractions.The good agreement between the blue and the red lines demonstrates that the emulator was able to predict accurately what the fiducial simulation would look like in the fitted mass range.
Remarkably, the simulations fit the SMF down to galaxy masses corresponding to slightly fewer than ten stellar particles.Comparing the predicted gas fractions at = 0.1 and 0.3, we see there is very little evolution.The model overshoots the gas fractions for cluster masses between 500 ≈ 10 13.8 M ⊙ and ≈ 10 14.5 M ⊙ , by about 1.We emphasize, however, that our observational error bars are about a factor of five smaller than the observed object-to-object scatter.Unfortunately, a box size of (200 Mpc) 3 (or even (400 Mpc) 3 ) is not large enough to constrain the gas fractions in haloes with 500 ≥ 10 15 M ⊙ .Performing the same analysis in a larger volume would potentially allow the emulator to train up to the range where the 500 - gas relation starts to flatten.
The best-fitting subgrid high-and low-resolution models
Although we use the simulation-based emulator to fit for the observational biases, the biases refer to observational effects and should thus be the same for all models.We therefore do not vary them between the different simulation resolutions.We use the intermediate-resolution simulations to fit the biases, because their resolution and box size enable us to fit a substantial mass range for both the SMF and the cluster gas fractions (see Fig. 6).For the other resolutions we keep the observational biases fixed to the values listed in Table 7.In this way we ensure that a direct comparison can be made between the three different resolutions 7 .
Fixing the observational biases to the values found for intermediate resolution leaves only four parameters to fit for high resolution.For low resolution we only have two parameters to vary because we turn off stellar feedback as these simulations do not resolve the masses below which stellar feedback dominates (see §2.1).The bestfitting parameter values for each resolution can be found in Table 2. Corner plots of the posterior distributions for the subgrid parameters are shown in Appendix B. A comparison of the best-fitting emulator prediction, the data and runs using the predicted best-fitting subgrid parameter values is shown in the top and bottom rows of Fig. 6 for (100 Mpc) 3 high-and (400 Mpc) 3 low-resolution volumes respectively.
At high resolution there is again excellent agreement between the emulator prediction and the observed data, with reduced 2 = 1.15.The high-resolution simulation resolves the largest range of stellar mass in the SMF, from ≈ 10 8.6 M ⊙ to ≈ 10 11.5 M ⊙ .There is a dip around a mass of 10 10.2 M ⊙ and a slight bump around the knee of the mass function.but the maximum deviation from the data is less than 5 per cent.It seems that the emulator was unable to predict the dip, and the best-fitting simulation falls outside of the predicted errors.Comparing the predicted errors between the different resolutions, it is clear that the high-resolution simulation has the largest predicted error.This is due to it using the smallest box size.This causes the emulator prediction to be too "smooth" when compared with simulation results.The deviation at the dip is less than the 1 uncertainty due to cosmic variance.The small box size (100 Mpc) 3 used for calibration at high resolution, limits the mass range that can be used to fit the gas fractions to halo masses lower than 6 × 10 13 M ⊙ .This leaves only two data points to compare to.The agreement in the fitted range is however very good.
Comparing the best-fitting subgrid parameter values for the highresolution model to those for intermediate resolution (Table 2), we see that the stellar feedback requires about twice as much energy and about half as high a kick velocity.This reflects the need for stronger stellar feedback when higher gas densities are resolved and the fact that feedback can be efficient down to smaller wind velocities in the lower-mass haloes that remained unresolved at intermediate resolution.While the AGN heating temperatures are very similar, the high-resolution simulations require a much smaller slope of the black hole accretion rate boost factor, BH = 0.038 (where zero corresponds to no boost) versus BH = 0.514 at intermediate resolution.
Since the high-resolution simulation can resolve higher gas densities, 7 The Driver et al. (2022) data points at *,obs ≤ 10 10 M ⊙ were updated after we had already finished the (2.8 Gpc) 3 intermediate-resolution FLAMINGO simulation.To be able to use the updated data for the calibration of the highresolution simulations, which resolve the SMF down to masses for which the data were updated, we re-fit the observational biases at intermediate resolution while keeping the subgrid parameters constant.The stellar mass bias changed from log 10 * = 0.031 to 0.026, the cosmic variance bias changed from CV = 1.014 to 0.995 and the HSE bias from HSE = 0.745 to 0.743.The bias values changed by a negligible amount with respect to the 16th−84th percentile confidence levels, for both * and HSE the change is less than 3 per cent of the 16th−84th percentile range.For CV the change is ∼ 15 per cent of the 16th−84th percentile range.The values we report in Table 7 2. The posteriors show that we can find a single solution that fits the simulations to the observational data.
and hence higher black hole accretion rates, we do not need to boost the accretion rate as much.
At low resolution the agreement with the data is also very good, with reduced 2 = 0.95.Now it is the stellar mass range that is very limited, * ≈ 10 11.17 M ⊙ to * ≈ 10 11.5 M ⊙ , which includes only two data points.The larger box size of (400 Mpc) 3 allows for the use of the two Akino et al. (2022) weak lensing data points as well as five X-ray data points for fitting the cluster gas fractions.However, the high-mass plateau of the gas fractions remains out of reach for this box size.The comparison of the best-fitting subgrid 2022) SMF at = 0, dark blue: compilation of X-ray data at = 0.1, dark magenta: Akino et al. (2022) weak lensing data at = 0.3).Each panel shows the best-fitting emulator prediction as a blue curve, the emulator uncertainty as a blue shaded region, and the result from a simulation using the best-fitting subgrid parameter values in a (100 Mpc) 3 , (200 Mpc) 3 , and (400 Mpc) 3 volume for high, intermediate, and low resolution, respectively, as a red curve.For gas,500 we only plot the best-fitting simulation result at = 0.1 in red, and leave out the result at = 0.3 to avoid clutter.For the cluster gas fractions, besides showing in blue the = 0.1 emulator that should be compared with the dark blue X-ray data, we also show the = 0.3 emulator, in magenta, that is used to fit the dark magenta Akino et al. (2022) weak lensing data.The grey regions indicate the mass ranges that are excluded from the fitting, see also Table 3.The model predictions are shown using dotted lines in these excluded ranges.The vertical dotted line in the left panels indicates a mass corresponding to ten stellar particles.The SMF and X-ray gas fraction data have been shifted by the best-fitting observational bias factors (see Table 7), which are however negligible for the SMF.The SMF from the best-fitting simulation includes Eddington bias (see §3.1.1)in line with how the emulator is trained.The systematic errors given by the priors on the bias parameters are shown as points with error bars in the top panels.At each resolution we obtain excellent agreement between the emulator, a simulation with the best-fitting parameters, and the observational data.
parameter values of the low-resolution model to those of the higherresolution simulations (Table 2) is difficult to interpret because the low-resolution model requires a much lower threshold density for star formation, a much higher black hole seed mass, and does not include any stellar feedback.
As we obtain a good fit to the same data for each of the three resolutions, we conclude that we have good 'weak convergence' between the three resolutions, using the terminology of Schaye et al. (2015).The FLAMINGO suite includes high-, intermediate-, and low-resolution simulations that were run with our fiducial subgrid parameter values in volumes with side lengths of 1, 2.8, and 1 Gpc, respectively.For a comparison of these models with other data, we refer to Schaye et al. (2023).
Feedback variations
One of the goals of FLAMINGO is to investigate the impact of feedback on cosmological observables.In this section we show how we use emulators to calibrate simulations to produce gas fractions or SMFs that have been shifted away from their fiducial, observed values.We focus mostly on changes to the gas fractions, as previous work has shown that baryon fractions in groups and clusters anticorrelate with the baryonic suppression of the matter power spectrum on the scales relevant for current and next generation surveys (e.g.Semboloni et al. 2013;Van Daalen et al. 2020;Debackere et al. 2020;Salcido et al. 2023).For clusters, the gas fractions dominate over the stellar fraction when computing the baryon fractions (the stellar mass content of haloes becomes important at smaller scales).While most of our variations use our fiducial thermal AGN feedback model, we will also calibrate a model that uses kinetic, jet-like AGN feedback.
To quantify the effect of reasonable changes in the astrophysics, we include a set of feedback variations in the simulation suite.These simulations should at least bracket the uncertainty in the cluster gas fraction data, while fitting the SMF data.Previous works created variations of subgrid physics based directly on the values of certain subgrid parameters.For example, the BAHAMAS project (McCarthy et al. 2018) varied the AGN heating temperature by ±0.2 dex, which resulted in very small changes to the SMF and cluster gas fractions that roughly bracketed the observational uncertainty.To arrive at the values of the subgrid parameters for our runs, we make use of the emulators and we will allow all fitted subgrid parameters to vary.Our variations are based on systematically shifting of the data, based on their uncertainties, making the variations less reliant on the subgrid model used.We also include models with gas fractions that are probably ruled out observationally, because we anticipate these will be useful to gain insight into the effect of baryonic feedback on other cosmological observables.
The variations are run at intermediate resolution.We use the fiducial values of the observational bias factors listed in Table 7.For the gas fraction variations, the SMF data are kept the same except for one variation, where we systematically reduce all observed stellar masses.The gas data are shifted up by 2 and down by 2, 4 and 8 for the fgas+2, −2, −4 and −8 models respectively, where is the error obtained from bootstrapping for the X-ray data, or the error on the fit for the weak lensing data from Akino et al. (2022), as discussed in §3.2.We systematically shift all the data by under the assumption that the errors in the gas fraction are mostly systematic and correlated.We shift in steps of 2 and 4 instead of a smaller shift (for example 1) as the cluster-to-cluster scatter is much larger than the errors we found from bootstrapping (see §3.2.1).We also create a models that vary the SMF.As the baryonic suppression is sensitive to the total baryon fraction (see e.g.Salcido et al. 2023), we include these variations to investigate the effect of changes in the baryon fraction at a constant gas fraction, and to see the effect of changing the stellar fractions.For these variations, we systematically shift the SMF data to lower masses according to the 1 given by the stellar mass bias (0.14 dex; §3.1.2).For the M*−1 model we use the fiducial gas fractions and for the fgas−4 + M* − 1 model we simultaneously shift the X-ray and weak lensing gas fractions down by 4.
The best-fitting subgrid parameter values for the feedback variations can be found in Table 8.The changes in the subgrid parameters with respect to the fiducial model are small.As expected, the AGN subgrid parameters bracket the fiducial values, with the fgas−2 model having a slightly higher AGN feedback temperature.As could already be seen in Fig. 4, the gas fraction is very sensitive to Δ AGN , which varies by only 0.37 dex between the fgas+2 and −2 models, in good agreement with BAHAMAS.The fgas−4 and −8 models follow this trend.Changes in the gas fractions are driven mainly by changes in Δ AGN .Going from the fgas−4 to the M*−1 + fgas−4 model, the biggest change is seen in SNII and BH , as expected from Fig. 4. The increase in the BH accretion boost factor is required to compensate for the removal of gas by the increased supernova energy.
The feedback models are compared with the fiducial model and the calibration data in Fig. 7.In the top two panels we show the emulator predictions for the SMF and the gas fractions for each of the variations.Within the fitted mass ranges there is excellent agreement for the SMF between all the different cluster gas fraction variations.There is good agreement between gas for the gas − 4 and the SMF−1 + gas − 4 variations.In the bottom panels we compare the emulator predictions to the results of (200 Mpc) 3 simulations run with the best-fitting parameters.For the SMF, we see that the emulator predictions are accurate at around the per cent level, with only the jet model fgas−4 deviating by ≈ 5 per cent.For gas , all predictions are accurate to ≈ 10 per cent, and most predictions are accurate to within ≈ 5 per cent.The accuracy is slightly better than the expected emulator accuracy from cross-checks (see Table 6).We conclude that by allowing for small adjustments to four subgrid parameters, we are able to vary specific observables while keeping others constant.
In addition to the parameter variations, we also calibrate a different implementation of AGN feedback.As described in §2.3.1 this model uses kinetic bipolar kicks instead of thermal injections to distribute AGN feedback energy around accreting BHs8 .As the subgrid model differs fundamentally from the fiducial model, we run a new Latin hypercube with 32 intermediate-resolution simulations in (200 Mpc) 3 volumes.The subgrid parameter ranges for this hypercube can be found in Table C1.To construct the emulator, we again follow the prescription of Section 4 and we again verify its accuracy using cross-checks (see Table 6).The goal is to have a simulation with a different implementation of AGN feedback calibrated to the same observables as the fiducial implementation.We therefore use the same fitting limits, methods and likelihoods as for the fiducial intermediate-resolution model.For the jet model we fit to both the The emulator predictions for the SMF and gas fractions, respectively, for the feedback variations and the fiducial model (different colors, as indicated in the legend).The observations are shown as black points with error bars.In the top corners of the panels we indicate the assumed systematic errors in the data from the priors on the fitted biases.The bottom panels show the ratio of the emulator prediction and a (200 Mpc) 3 simulation run with the same parameters.In both panels the black dotted line indicates a ratio of one.For the SMF ( gas,500 ), the black dot-dashed lines indicate deviations of 1 per cent (5 per cent).We only show the cluster gas fraction emulator prediction at = 0.1 and leave out the = 0.3 gas fraction results to avoid clutter.The excluded mass range for fitting is indicated by the grey regions (see also Table 3.) We use the emulators to make a direct mapping between our subgrid physics models and systematic shifts in the observations, based on the observational errors.fiducial data and to the perturbed data used to calibrate the gas − 4 model.The resulting medians and best-fitting values can be found in Table 8.
The jet models are shown as the green lines in Fig. 7.They show some differences from the fiducial thermal AGN feedback models.The jet models fit the knee of the SMF slightly better by having slightly more galaxies with * ≈ 10 10.7 M ⊙ .The difference at the very low-mass end of the SMF, below the fitted range, is due to the fact that the bug in the threshold of star formation for zero metallicity gas (see footnote 3) was fixed for the jet models.The gas − 4 jet model also has a significant reduction in the number of galaxies with masses above our fitting limit, thus yielding a SMF with a steeper high-mass cut off.However, the bottom panel suggests that this is at least partially explained by the fact that the emulator under-predicts the number density by a few per cent.Compared with the thermal AGN models fit to the same data, the jet models predict higher gas fractions in groups (M 500 ∼ 10 13 M ⊙ ), where there is, however, no observational data.From the bottom panels we can see that for gas the accuracy of the jet emulator does not differ significantly from the emulator for the thermal AGN feedback models.
CONCLUSIONS
In order to fully exploit the large-scale structure data that will become available with surveys like Euclid and LSST, we need to acquire a deeper understanding of how baryonic effects, like AGN and stellar feedback, impact the matter distribution.The most self-consistent way of experimenting with these effects is through the use of cosmological hydrodynamical simulations.The FLAMINGO project provides such simulations in volumes sufficiently large to study the evolution of large-scale structure and massive galaxy clusters for different numerical resolutions, cosmologies and astrophysical models.
As feedback processes originate on unresolved scales, we have to add them via subgrid prescriptions.However, because these subgrid models are theoretically not well constrained, they need to be calibrated to reproduce a relevant set of observables.Previous simulation projects like EAGLE (Schaye et al. 2015;Crain et al. 2015), Illus-trisTNG (Pillepich et al. 2018), BAHAMAS McCarthy et al. (2017, 2018) and SIMBA Davé et al. (2019) achieved good agreement with data by varying subgrid parameters by hand until the simulation lined up with the target observations.However, for cosmology a more robust and objective calibration method is desirable, particularly if it can also be used to predict the effect of subgrid variations that have not been simulated directly.
To create a robust method of calibration, we make use of machine learning, specifically Gaussian process emulators.Instead of emulating the effects of changes in the cosmological parameters, which is becoming a common application of machine learning in cosmology, we emulate the observables that we want to match to observations as a function of a set of subgrid parameters.For three different numerical resolutions, which span a factor of 64 in particle mass, we train an emulator on 32 input simulations where we vary the four most impactful subgrid parameters, two of which relate to stellar feedback and two of which relate to AGN feedback (Section 2).In addition, we train an emulator for another intermediate-resolution implementation of AGN feedback, which uses jets (i.e., directed kinetic feedback) instead of injecting the feedback energy thermally.At each resolution we run simulations with 360 3 gas particles, implying a (100 Mpc) 3 , (200 Mpc) 3 and (400 Mpc) 3 volume for FLAMINGO high [m8], intermediate [m9] and low [m10] resolution, respectively.We then use MCMC to fit the emulator to carefully selected observational data.We repeat the same procedure for each resolution, and only change the fitted mass ranges to account for resolution and box size limitations.Additionally, we have created a set of subgrid physics implementations based on fitting the emulators to the data after systematically shifting it by .
We calibrate to the observed low-redshift galaxy stellar mass function (SMF) from the GAMA survey and a compilation of group and cluster gas fraction measurements based on X-ray and weak lensing data.A novel aspect of our approach is that we also fit for possible observational biases (i.e., systematic errors).We account for biases in the stellar mass and the cluster mass inferred from X-ray data under the assumption of hydrostatic equilibrium, as well as for the effect of cosmic variance on the SMF.In addition, we account for the effect of random errors in the observed stellar mass on the SMF (i.e., Eddington bias) by randomly perturbing the simulated stellar masses(Section 3).The observational biases are only fit during the calibration of the intermediate-resolution simulations and the bestfitting values are then also applied to the other resolutions.
Our main conclusions are: (i) By carefully setting up the subgrid parameter space, we were able to train emulators that are more accurate than the target observational constraints (Fig. 3).
(ii) The emulator framework enables simultaneously fitting for subgrid parameters and observational biases.For FLAMINGO, the posteriors found for the biases are driven by and in agreement with the priors.We find a negligible value for the stellar mass and cosmic variance error, and a hydrostatic bias of HSE = 0.743.
(iii) Emulators can be used to make parameter sweeps, i.e. plots showing how the trained relation depends on the value of a single subgrid parameter (Fig. 4).As the emulators give the continuous response of the trained relation to changes in subgrid parameters, emulators can be used to gain a deeper understanding of how the observable relations are affected by the subgrid models.
(iv) The parameter space that we explore is devoid of major degeneracies between the subgrid parameters.The emulator+MCMC framework finds a single best-fitting solution (Fig. 5).We note that this is partially by construction, as parameters that had major degeneracies were omitted from the parameter space (see Section 2).For future work it might be interesting to see if these degeneracies can be solved by fitting the model to additional observational data.
(v) At each resolution we find excellent agreement between the best-fitting model and the calibration data (Fig. 6).
(vi) The emulator framework can be used to map observational uncertainties onto changes in subgrid parameters.By fitting the emulator to variations in gas fractions and the SMF, we produce a set of simulations for which specific observables are varied while keeping others constant (Fig. 7).As the model variations are directly tied to observations, the resulting simulations can be used to quantify the effect of uncertainties in the calibration data on the predictions for other observables.
(vii) We used the emulator framework to calibrate a different implementation of the model, which we did for kinetic AGN feedback (in contrast with the thermal AGN feedback used our fiducial model; Fig. 7).By making different models match the same calibration observations, the simulations can be used to quantify the uncertainty in predictions for other observables due to uncertainties in the underlying physics.
We have used Gaussian process emulators to create a close link between subgrid models and observations.By creating a robust statistical framework for calibration, future hydrodynamical simulations will be able to use available and upcoming data to constrain the subgrid physics and to quantify the uncertainty in the predictions of simulations that remains after the models have been constrained to fit particular sets of data.In this work we have focused on calibrating simulations using different resolutions, and a single variation of the implementation of AGN feedback.For future work the same framework could be used to get agreement between different simulation codes and subgrid models for specific observables.In this way we could improve our understanding of the degeneracies between different methods and the uncertainties in their predictions.
In the companion paper Schaye et al. (2023) we present the largevolume FLAMINGO simulations that use the calibrated parameter values that we obtained here.More information on and visualisations of the FLAMINGO simulations can be found on the website. 9APPENDIX A: DIFFERENT APERTURES Fig. A1 compares the SMF results for different choices of 3D apertures with radii of 30, 50 (our fiducial aperture) and 100 kpc.For each non-fiducial aperture we retrain the emulator on the SMFs obtained with the different aperture.The new emulator, based on a different aperture, is then evaluated at the fiducial subgrid parameter values.We do not refit the SMF for each aperture, because we wish to quantify the effect of the aperture size on the SMF predicted by a given simulation.The choice of aperture only has an impact at the largest stellar masses (see also Schaye et al. 2015).For our analysis this implies that the main effect of an increase in aperture would be a slight increase of the slope of the density dependence of the AGN accretion rate boost factor.However, for the fitted mass range this effect is relatively small.The effect of using a mass measurement method more similar to that used by observers may be larger (e.g.De Graaff et al. 2022), but such a comparison is not feasible at the resolution of our simulations.
APPENDIX B: POSTERIORS FOR HIGH-AND LOW-RESOLUTION
The posteriors for low resolution are shown in Fig. B1.There is a degeneracy between the two parameters.Both parameters are sampled well within our chosen ranges.Even though the range for the heating temperature is much wider than for the other resolutions, we find that the best-fitting value is in the range where AGN feedback is well sampled, and does not suffer from catastrophic numerical overcooling (see §2.3).
The posteriors for the high-resolution simulation are shown in Fig. B2.Similar to the intermediate-resolution posteriors we find a best-fitting model within the chosen parameter ranges.The bestfitting value for BH is quite close to the edge, partly due to a degeneracy between BH and Δ SN .The high-resolution posteriors are more degenerate than for intermediate-resolution.This is likely due to the fact that we fit a much broader range of the SMF, making it more important to get the balance between stellar and AGN feedback right.The posteriors show that there are some significant degeneracies in how this problem can be solved.Note that for both and high and low resolution we have fixed the biases to the values for for intermediate resolution, see §6.2.
APPENDIX C: PARAMETER RANGES FOR THE AGN JET MODEL
The subgrid parameter ranges for the Latin hypercube that was used to train the emulators for the AGN jet model can be found in Table C1.
This paper has been typeset from a T E X/L A T E X file prepared by the author.
Figure 1 .
Figure1.Compilation of observational data used for calibration.On the left we plot the SMF.On the right we plot the cluster gas fraction versus total mass, both measured at 500 .Where available we display the 1 measurement errors, which do not include intrinsic scatter.The X-ray data are binned from a compilation of available data, see §3.2.1, except the lowest mass point, which is obtained from a fit byLovisari et al. (2015).We show the individual clusters as black dots.Note that the X-ray data are plotted without any correction for the hydrostatic mass bias.For this work we use theDriver et al. (2022) data for the SMF, and the X-ray andAkino et al. (2022) data for the gas fractions.
Figure 3 .
Figure3.Performance of the emulator on cross checks (see §4.4) for the redshift = 0 SMF (left panel), the = 0.1 X-ray cluster gas fractions (middle panel), and the = 0.3 weak lensing cluster gas fractions (right panel) at intermediate [m9] resolution.Each of the 32 red lines corresponds to the case where a single simulation from the 32-node Latin Hypercube has been omitted from the training set.The curves show the ratio of the emulator prediction for the parameter values of the omitted simulation to the actual simulation values.The solid black line shows the median as a function of mass.The horizontal dash-dotted and dashed lines indicate, respectively, the 1 and 2 mean errors on the emulator.The horizontal dotted lines indicate the one-to-one lines, i.e. zero errors.The grey bands indicate the regions that are not used for fitting in Section 5.In each panel we also indicate the observational errors.For the SMF we show the error due to cosmic variance and the errors on the data byDriver et al. (2022), for the = 0.1 gas fractions we combine the error from the X-ray data with the error due to hydrostatic bias and for the z=0.3 gas fraction we show the error on the weak lensing data byAkino et al. (2022).The emulator predictions are accurate enough to predict to simulation output within the observed constraints
Figure 4 .
Figure 4. Subgrid parameter sweeps using the emulator trained on our 32-node Latin hypercube of (200 Mpc) 3 intermediate-resolution simulations.The parameter sweeps are centred on the best-fitting parameters (see §6.2).The left and right columns show the galaxy stellar mass function and cluster gas fractions, respectively.In each row a single subgrid parameter is varied across the allowed range.From top to bottom we vary the slope of the black hole accretion rate boost factor slope, the AGN heating temperature, the stellar feedback energy, and the stellar feedback kick velocity.The grey regions indicate the mass ranges that are excluded for fitting (see also Table3).Parameter sweeps help gain insight into how changes in subgrid model parameters map onto observables.
Figure 5 .
Figure5.The posterior distributions of the model parameters resulting from fitting the emulator to the observed SMF and cluster gas fractions for intermediateresolution simulations.The parameters shown are the stellar feedback energy, SN , the stellar feedback kick velocity, Δ SN , the AGN feedback temperature jump, Δ AGN , the logarithmic slope of the density dependence of the black hole accretion rate boost factor, BH , the stellar mass bias, M * , the hydrostatic mass bias, HSE , and the cosmic variance bias, CV .The four subgrid parameters are described in Section 2 and the three observational bias factors are discussed in Section 3. The black contours show the 68 and 95 per cent confidence levels.The panels along the diagonal show the one dimensional probability density for each parameter.In these plots the three vertical lines indicate the 16th, 50th and 84th percentiles.The solid, red circles indicate the maximum likelihood values, which were used for the fiducial model.Each panel is centered on the centers of the priors given in Table2.The posteriors show that we can find a single solution that fits the simulations to the observational data.
Figure 6 .
Figure 6.Comparison of the best-fitting models to the observed galaxy stellar mass function (SMF; left column) at = 0 and observed cluster gas fractions (right column).The top, middle and bottom rows show results for high-, intermediate-and low-resolution simulations, respectively.The observations are plotted as points with error bars (black: Driver et al. (2022) SMF at = 0, dark blue: compilation of X-ray data at = 0.1, dark magenta:Akino et al. (2022) weak lensing data at = 0.3).Each panel shows the best-fitting emulator prediction as a blue curve, the emulator uncertainty as a blue shaded region, and the result from a simulation using the best-fitting subgrid parameter values in a (100 Mpc) 3 , (200 Mpc) 3 , and (400 Mpc) 3 volume for high, intermediate, and low resolution, respectively, as a red curve.For gas,500 we only plot the best-fitting simulation result at = 0.1 in red, and leave out the result at = 0.3 to avoid clutter.For the cluster gas fractions, besides showing in blue the = 0.1 emulator that should be compared with the dark blue X-ray data, we also show the = 0.3 emulator, in magenta, that is used to fit the dark magentaAkino et al. (2022) weak lensing data.The grey regions indicate the mass ranges that are excluded from the fitting, see also Table3.The model predictions are shown using dotted lines in these excluded ranges.The vertical dotted line in the left panels indicates a mass corresponding to ten stellar particles.The SMF and X-ray gas fraction data have been shifted by the best-fitting observational bias factors (see Table7), which are however negligible for the SMF.The SMF from the best-fitting simulation includes Eddington bias (see §3.1.1)in line with how the emulator is trained.The systematic errors given by the priors on the bias parameters are shown as points with error bars in the top panels.At each resolution we obtain excellent agreement between the emulator, a simulation with the best-fitting parameters, and the observational data.
Figure 7 .
Figure7.Top left and right panels: The emulator predictions for the SMF and gas fractions, respectively, for the feedback variations and the fiducial model (different colors, as indicated in the legend).The observations are shown as black points with error bars.In the top corners of the panels we indicate the assumed systematic errors in the data from the priors on the fitted biases.The bottom panels show the ratio of the emulator prediction and a (200 Mpc) 3 simulation run with the same parameters.In both panels the black dotted line indicates a ratio of one.For the SMF ( gas,500 ), the black dot-dashed lines indicate deviations of 1 per cent (5 per cent).We only show the cluster gas fraction emulator prediction at = 0.1 and leave out the = 0.3 gas fraction results to avoid clutter.The excluded mass range for fitting is indicated by the grey regions (see also Table3.)We use the emulators to make a direct mapping between our subgrid physics models and systematic shifts in the observations, based on the observational errors.
Figure A1 .
Figure A1.The effect on the SMF of choosing a different aperture when measuring stellar masses in the simulation.For each line we set up a new emulator based on the simulation results for the corresponding aperture.Each emulator is then used to predict the behaviour at the best-fitting parameter values for the fiducial 50 kpc aperture.Differences between the apertures start to occur above a stellar mass of 10 11 M ⊙ .
Figure B1 .Figure B2 .
Figure B1.The posterior distributions of the model parameters resulting from fitting the emulator for low-resolution simulations to the observed SMF and cluster gas fractions.The parameters shown are the AGN feedback temperature jump Δ AGN and the logarithmic slope of the density dependence of the black hole accretion rate boost factor, BH .The two subgrid parameters are described in Section 2. The black contours show the 68 and 95 per cent confidence levels.The panels along the diagonal show the one dimensional probability density for each parameter.In these plots the three vertical lines indicate the 16th, 50th and 84th percentiles.The solid, red circle indicate the maximum likelihood values, which were used for the fiducial model.There is some degeneracy, but there is a clear single best-fitting solution.
Table 1 .
Numerical characteristics of the final Latin hypercubes of simulations.The columns list: the resolution qualifier, comoving box size, number of particles (there are initially equal numbers of dark matter and baryonic particles), initial baryonic particle mass, dark matter particle mass, comoving gravitational softening length, maximum physical gravitational softening length.
Table 2 .
Priors and best-fitting values for the subgrid parameters for each of the three simulation resolutions.Low-resolution simulations do not include stellar feedback.The rows titled 'Median+CL' give the median and the 16th and 84th percentile confidence level (CL) obtained from the posterior of the fits.The rows titled 'best-fitting' list the maximum likelihood value from the fitting, which is our fiducial value.The last row 'Log' indicates whether the parameter is sampled logarithmically.The best-fitting values for the jet model are listed in Table8and the priors for the jet model are listed in TableC1.
Table 5 .
Compilation of cluster X-ray gas fraction data used for calibration.These values are for the DESYR3 cosmology (ℎ = 0.681, Ω m = 0.298).The values are obtained by taking the median of the X-ray data described in Table4in eight logarithmically spaced bins between 10 13.8 and 10 15.0 M ⊙ .The errors are the absolute difference between the 16th or 84th percentile and the median (whichever is largest), obtained by bootstrap resampling the median. | 24,116.4 | 2023-06-08T00:00:00.000 | [
"Computer Science",
"Physics"
] |
Multiobjective Project Scheduling with Multiple Compression Capabilities of the Multistate Activities with Project Reliability Approach using the Metaheuristic Algorithm
We examine the time-cost-reliability-project balance of the project by considering the limitations of renewable and nonrenewable resources and performing multistate group activities in the same conditions and with the possibility of multiple compression activities. This study aimed to select the best mode for conducting actions in each subset and find the best execution method by determining the number of compression time units of actions to maximize project reliability and minimize project completion risk, time, and cost. Drilling projects will also be implemented for evaluation in the Azar oil field. The exact Epsilon Limitation method was used to solve this problem in low dimensions. The combined genetic metaheuristic algorithm and particle swarming were used to solve high dimensions problems. The low-dimensional Epsilon solution method could clearly balance the objective functions. Also, the Taguchi experimental design method was used to adjust the parameters of the problem algorithm. The correct value of the algorithm parameters is also specified. Based on the results, the hybrid metaheuristic algorithm has less solution time than the exact method. In high dimensions, as it was observed, GAMS software could not solve the problem. Still, the hybrid metaheuristic algorithm solved the problem well in high dimensions.
Introduction
is is a temporary effort aimed at producing a unique product. For each project, a temporary organization is formed of individuals, and resources (budget, human resources, machinery, etc.) are provided to produce a unique product with the requirements set in the framework. Project frameworks or limitations s are usually divided into four items: scope, time, cost, and quality, but there are other things to consider. Together, these four elements form the classic triangle of the project [1]. In today's world, most large organizations are carrying out several projects simultaneously to produce a new product or service following the goals and policies of the organization. Sometimes, the pressure to complete on time by the project budget and maintain competitive advantage has led organizations to develop and implement project management processes [2]. us, organizations are shifting from traditional pragmatic structures to project-oriented ones, which must complete projects with clear but immediate goals according to the agreed time in a competitive environment with limited resources in mind. Increasing the project completion time compared to the planned time is one of the significant problems of project implementation. It increases its cost and economic inefficiency and causes delays in completing work and poor project managers in front of customers. Improper and irrational project scheduling will cause many problems such as errors in cost estimation and project budgeting, making mistakes in decision-making and prioritizing projects, incurring late costs and depriving the employer of trust in contracts, lack of attention to resource feasibility, failure to control the project, and inconsistencies in project progress reports. To prevent possible problems of projects, before starting the project, we should study the feasibility study according to project management knowledge and consider the limitations. e project management knowledge set will help to know how to properly implement each of the project management actions, what steps to take at what times, and with whose participation.
Problem Statement.
e critical path method for solving project planning problems was introduced in the late 1950s. In the calculations of this method, it is assumed that all activities can be performed at their expected and usual time.
e project may need to be completed even earlier than planned in some cases. e time of several activities should be reduced to achieve the time of completion sooner. is reduction in time is accompanied by an increase in labor resources and expenditure, called performing impact or compression of activity time.
At the same time, carrying out activities in less time increases the quality of the activities and the project's risk and reliability. It is challenging for managers to make a complete and accurate decision regarding these benefits and penalties. erefore, pay attention to the problems of timecost balance. e general state of time-cost balance problems is divided into two categories, continuous and discrete, based on the form of the direct cost function of the activity. Under continuous functions, the shape of the function may be linear, convex, or concave. In discrete time-cost balance problems, also known as multimode problems, resources are available in discrete units. e goal is to allocate the best execution mode and resources needed for project activities, at least promptly. In 1995 and 1998, a particular state of the multimodal problem was discussed. At that time, the activity was considered a function of resource needs (State choice), and the amount of time accelerated the reduction of time through increased direct costs. Indeed, after having selected one execution mode out of several modes and assigned it to an activity, it is possible to reduce the time of this activity by incurring more costs. is means that the time and cost of the activity depend not only on the choice of mode but also on the choice of time during a mode. ese issues are referred to as multistate compression issues [3,4].
In this study, as opposed to previous studies, in which only one of the normal or intensive conditions might be selected, a set of correct time units will be used to determine the amount of activity compression. e numbers in this set start from zero for the usual execution method and end with adding one unit of compression periods to a specific value (maximum amount of compression) for the compact execution method. In fact, this will determine the number of units that accelerate the activity's execution time, despite selecting the normal or intensive execution method in the mode assigned to each activity. All activity modes' assignments are done independently in the multimodal problems discussed so far. is means assigning a mode to action in a project that includes a set of activities does not necessarily force any other activity to be performed in a particular mode. In practice, however, there may be situations where certain activities belong and must be completed in the same way.
is problem is known as problems with the same state. erefore, in this study, to bring the proposed model closer to real-world projects, by dividing project activities into separate subdivisions and defining multiple execution modes for each subset of activities, those activities in the same subset are performed. Also, unlike previous research, here, considering the ability to compress multiple modes, none of the activities in the group will be required to be performed regularly or intensively and will only have the same execution mode [5][6][7].
Generally, projects are trained to address a set of needs, and the goal of project managers is to guide and control the project in its main direction to achieve the predetermined goals. Many goal functions are often considered in projects for various reasons. One of the most important reasons for this is the presence of different stakeholder groups in the project. Among the most important goals that have been considered in this field in different ways, we can mention the optimization of project completion time, project cost, quality, net present value, safety, flexibility in scheduling, etc. In these issues, different limitations are considered, such as resource limitations (renewable and nonrenewable) and prerequisite and postrequisite relationships. However, less attention has been paid to objectives such as reliability and risk in project management issues among the mentioned objective functions. To increase the reliability of the project, we try to select the modes of execution of the activities so that the activity can be completed on time and completely. For example, it is assumed that different levels of technology, such as hand tools, semiautomatic and all-automatic, are available to perform an activity.
On the other hand, the reliability of using each level of technology is different. Factors such as fatigue and error of human resources, lack of timely human resources in the workplace, improper transportation, incorrect loading and unloading, device failure, and improper maintenance and repair can reduce reliability. erefore, the type of execution mode selected can affect the time cost and the reliability of the activity. On the other hand, the risk of carrying out project activities can affect one or more of the project objectives. e risk may have one or more causes, such as the need for a license and limited resources or other aspects of the project environment, such as poor project management practices or reliance on outside specialists that are not controllable. erefore, the type of execution mode selected can affect the activity's time, cost, reliability, and risk. Because of the above, in this study, we try for the first time to simultaneously optimize the goals of time, cost, reliability, and risk of project implementation, given their undeniable importance in today's projects. In addition, the issue of multiple compression modes and problems of the same state is rarely considered due to the complexity of the problem. erefore, in this study, for the first time, we concern the issue of time balance-cost-reliability-project risk by considering the limitations of renewable and nonrenewable resources and performing multimode group activities in the same conditions and with the possibility of multiple compression activities. e purpose is to select the best mode for executing the activities in each subset and find the best execution method by determining the number of compression time units of actions to maximize project reliability and minimize risk, time, and cost of project completion. Also, in this research, drilling projects will be implemented for evaluation in the Azar oil field. Since the scheduling problem is an NP-HARD problem, a multiobjective genetic algorithm will solve it.
Literature Review
Koo et al. [8] introduced an integrated multiobjective optimization model to provide a set of optimal solutions based on Pareto front concepts in six stages. ese six steps are as follows: (1) problem statement, (2) definition of optimization goals, (3) data structure creation, (4) standardization of optimization goals, (5) definition of the fit function, and (6) introduction of genetic algorithm. A case study on the issue of the time-cost balance of construction and instrument was analyzed to evaluate the reliability of the proposed model. e results of this research can be used in the following cases: (1) determining the optimization of objectives, including initial investment cost, operation, and maintenance costs and CO 2 emission costs, (2) considering the objectives as real meanings, (3) evaluation fitting, and (4) functions to be extended to other areas such as indoor air quality, materials, and energy use.
Safari et al. [9] have proposed a triobjective mathematical model for the Transportation-Location-Routing problem. e model considers a three-echelon supply chain and aims to minimize total costs, maximize the minimum reliability of the traveled routes, and establish a well-balanced set of routes. In order to solve the proposed model, four metaheuristic algorithms, including Multiobjective Grey Wolf Optimizer (MOGWO), Multiobjective Water Cycle Algorithm (MOWCA), Multiobjective Particle Swarm Optimization (MOPSO), and Nondominated Sorting Genetic Algorithm-II (NSGA-II) are developed. e performance of the algorithms is evaluated by solving various test problems in small, medium, and large scale. Four performance measures, including Diversity, Hypervolume, Number of Nondominated Solutions, and CPU-Time, are considered to evaluate the effectiveness of the algorithms. In the end, the superior algorithm is determined by Technique for Order of Preference by Similarity to Ideal Solution method.
Mohammadipour and Sadjadi [10] evaluated the costquality-risk balance in a time-limited issue. In this research, an integer mixed linear multiobjective programming model is proposed to minimize "total project total costs," "total project risk increase," and "total project quality reduction" due to time limitations. In other words, the proposed study balances the three objectives mentioned to shorten the project's total duration. Finally, the computational results show the efficiency of the proposed model.
In their paper, He et al. [11] provided a multiobjective gray linear model to find the project's critical project using risk time, cost, and quality parameters. is model considers the amount of risk and quality of each activity along with two time and cost factors and determines the weight for each criterion. A coefficient is obtained for each activity consisting of four factors: time, cost, risk, and quality. Based on this factor, a critical path is selected. To complete the project on time according to the specified budget, the most focus should be on this path and not necessarily the shortest project completion time.
Paryzad and Pour [12] have balanced the project's risk, quality, time, and cost with the help of the dolphin technique. is paper presents the dolphin group hunting algorithm as a new evolutionary optimization method. e proposed algorithm is inspired by intelligent and group hunting of dolphins. In this method, a dolphin that finds its prey using its voice reflection is selected as its leader and then informs the other dolphins of the prey's position. By circling the prey and adjusting their position relative to the leader, the dolphins bring the prey closer to the water's surface in the form of a cone and hunt it. e experimental results show that the proposed algorithm achieves the optimal answer faster than other algorithms.
Zheng [13] has compared time-cost-environment balance with combining genetic algorithms. His goal is to minimize the total project time to minimize delay costs and minimize environmental impacts. Since the exact solution methods for this problem were not very efficient, for this purpose, he used the method of combined genetic metaheuristic algorithm, which in the above dimensions has been well converged to the optimal solution. Zhang and Xing [14] have examined the issue of project scheduling with a balance of profit and time. eir research considered several discrete times with 4 payment methods where the final profit depended on their completion time because as the project time decreases, the cost will decrease. us, the profit will increase, and since the issue was an NP-hard issue, He used the neighborhood search method to solve the problem. Rahimi et al. [15] investigated the problem of scheduling oil and gas projects with a cost-time-quality balance approach to minimize the total project time and maximize quality. e issue was examined in conditions of uncertainty. ey investigated the problem with a multiobjective solution and Pareto front generation approach and used fuzzy theory to deal with the uncertainty of the parameters. Askarifard et al. [16] examined the balance between project time and cost, the Bayesian approach to updating project time, and cost estimation, taking into account cost, time, and resource limitations, under conditions of uncertainty. ey also used a metaheuristic solution to the problem, thus saving considerable time relative to the exact solution.
Model Assumptions
e assumptions considered in the mathematical model of the problem are as follows: (i) Customer determines the initial scope of the project (ii) Client reviews the initial schedule at the tender time and accordingly requests the contractor to reduce the project completion time (iii) ere are resource limitations
Model Symbols.
e symbols considered in the mathematical model of the problem are as follows: i: prerequisite activity symbol j: activity symbol k: symbol of successful activity k r: symbol used for the activity affected by the risk t: symbol used for the number of time units of delayed activities l: symbol of the last activity n: number of projects
Model Parameters.
e parameters considered in the mathematical model of the problem are as follows: PR jn : revenue from the implementation of activity j in project n C jn : cost of increasing or decreasing the delay of a unit of time in activity j in project n d jn : duration of activity j in project n TR max jn : maximum time unit allowed for activity j in project n f 0n : project completion time specified by the client in the project n fs min ijn : minimum time between the end of i and the beginning of activity j in project n ss min ijn : minimum time between the start of activity i and the start of activity j in project n ff min ijn : minimum time between the end of activity i and the end of activity j in project n sf min ijn : minimum time between the start of activity i and the end of activity j in project n P jtn : probability of project risk by reducing t time in activity j in project n l rjtn : magnitude of the impact of risk on activity j is affected by r with t units of delay in project n.
Decision Variables.
e decision variables of the mathematical model of the research are as follows: y jtn : if activity j is delayed by project t in project n, it takes a value of one, otherwise zero ES jn : the earliest time to start activity j in project n LF jn : the latest end time of activity j in project n
Mathematical Model of the Problem.
e mathematical model proposed in this research is as follows: LF kn − LF jn ≥ FF min jkn , ∀j, k, n, Lf jn , ES jn ≥ 0, ∀j, n, As can be seen, there are four objectives of the study. e first objective function (1) minimizes delays in implementing each project activity. is function is obtained by multiplying the variables zero and one based on whether or not there is a delay in the execution of activity j during the delay time t. e model's second function (2) also minimizes the safety cost incurred in obtaining delays in each project's implementation activities. e third objective function (3) minimizes the cost of activities in each project. e objective function (4) also maximizes the quality of activities in each project.
Limitation (5) specifies the time frame set by the client for each project. In fact, this limitation ensures that the entire project is completed within the maximum time set by the customer for each project. Limitation (6) ensures that if a delay occurs in an activity, it will occur only within one of the timeframes specified for each project. Limitations (7) to (15) calculate the project's critical time and allocate the appropriate start and end time for each activity for each project. Limitations (8) and (9) indicate FS relationships, limits (10) and (11) indicate SS relationships, limits (12) and (13) indicate FF relationships, and limitations (14) and (15) indicate SF relationships between activities. Limitations (16) and (17) also determine the range of decision variables.
Case Study of the Research
Azar oil anticline is located in Anaran exploration block in Ilam province (southwestern Iran) and near the Iran-Iraq border. Azar structure along the northwest-southeast has a length of about 5.36 km on the horizon of Ilam, located approximately 13.5 km in Iran.
(1) Drilling of 10 wells (7 productions, 1 repair, 1 evaluation, 1 descriptive) (2) Install the first stage separator (3) Construction of 129 km of the 16-inch oil pipeline from the separator of the first phase of Azar field to Dehloran exploitation unit (4) Duration of this stage: 3 years (5) Start production with a flow of 30,000 barrels per day after 36 months
Central Operation Facility Project (for Model Validation).
is project includes all engineering services, procurement, and purchase of goods, construction, and installation required for the daily production of 65,000 barrels of crude oil and the extraction of 78 million cubic feet of gas per day. Production fluid is collected from 17 production wells in Manifold to collect common fluid to enter two separate production trains. After the fluid passes through the precooling unit, it passes through two three-stage separators to completely separate the gas and water that accompany it. e fluid then enters the desalination unit and the sulfur separator to remove H 2 S. e stabilized oil is then cooled in an air conditioner and transferred to an On-Spec storage tank via a shared header. ree booster pumps and three main pumps transfer stored crude oil to Cheshmeh Khosh production center located in Dehloran city through a dedicated oil transfer pipeline. e produced gas is first dried in two dewatering routes. It enters the two gas pressure boosting routes through a common header, each containing three gas pressure boosting compressors, until after the necessary boosting through the gas transmission pipeline for delivery.
Mathematical Problems in Engineering
Finally, it is to be sent to the NGL-3100 facility, Dehloran production center. e Central Operation Facility (CPF) will naturally generate and transport 65,000 barrels of crude oil per day. However, this complex is designed for a daily production capacity of 51,500 barrels of oil and 78 million cubic feet of gas per day, which can process 23,000 barrels of wastewater for reinjection into the tank. e EPC contractor of this project is Jahanpars Engineering and Construction Company. e description of their activities and relationships is as described in Table 1.
According to the activities and their prerequisite relationships in Table 2, the critical path of CPM can be shown in Figure 1.
Also, other problem parameters are randomly presented. After solving the problem, the results are presented in the following tables by helping the GAMS software and Cplex 22.1.1 solver in a personal system with 3.2 GHz processing power and 4 Gb of random memory. It is noteworthy that the software execution time equals 85 seconds, and the computational gap equals zero. e Parthian front of the problem is represented by the values in Table 3.
According to the information in Table 3, it can be seen that none of the points overcomes the point or other points. erefore, the Pareto front is properly formed. In Figure 2, the resulting Pareto front graphic can be seen. Figure 2 shows that safety has increased with increasing project delays. is is logical because the longer the delay in a project, the lower the project risk, and as a result, the project safety will increase.
In Figure 3, the enclosed page shows the Pareto front members. Of course, not all members can be identified; because of the continuity of the values obtained in each function, all points between the two points of the front can be considered a member of the Pareto front. Also, from this figure, the shapes depicted on each target function page can be well understood in two dimensions. e critical point in analyzing and using the answers obtained from solving multiobjective problems from the Pareto front format is selecting one of the front points as the final answer for implementation in the system under study.
Sensitivity Analysis.
To validate and analyze the proposed model, sensitivity to some parameters is measured. In fact, it is examined how the objective functions change by changing the desired parameters and to what extent our expectations from the model correspond to the results.
is research investigates the sensitivity analysis of the mentioned dimensions to the risk, delay, and time of activities.
Analysis of the Sensitivity of the Problem to the Delay Parameter.
To see the effect of delay on other objectives, we need to modify it to see its effect on other functions. It is necessary to examine the changes in the value of the cost parameter on all three objective functions, which are specified in Table 4. As can be seen from the table, with increasing safety delay, the cost increases and the cost and quality increase, but safety and quality take precedence over cost. For the delay rate, we will examine the values of 5, 10, 15, and 20 days. As shown in Table 5, the project's safety increases with increasing latency. erefore, the amount of profit has decreased (cost increases), which indicates that the model's logic works properly and the optimal level is the least risk and the least delay to achieve maximum profit. we are looking for what effect this transfer will have on the risk and profit of the project if the customer requests delivery of the work from the contractor earlier than the due date. To investigate this question in the project, we assume that the customer wants us to deliver the project in 36 working days instead of 38 working days, so we must examine what effect this change will have, resulting in Table 6.
Sensitivity Analysis of the
As can be seen in Table 6, reducing the total project time from 38 working days to 36 working days in 5 different repetitions increased cost and reduced safety, which is logical. It depends on the client which one to choose between cost and safety and early project delivery, but as is clear in the mathematical research model, most projects always have delays. It seems logical that the customer should request a reduction of the delay instead of requesting a reduction of the project on time, which was clearly shown in the results that the risk decreases, and the project's profit increases.
Computational Results of the Metaheuristic Algorithm.
To solve the algorithm, we need the optimal parameters of the algorithm, which will be performed according to the design of the Taguchi experiment, which is specified below these experiments.
Design of Experiments.
One of the experimental design objectives is to observe and identify output changes through conscious changes in process entry variables.
ere are several ways to design an experiment.
One of the first methods proposed in this field was the factorial method, which obtained the number of experiments by N � Lm. e main drawback of this method was that if there are many variables, the number of experiments will be vast, and this issue is not cost-effective in terms of time and cost. So, they thought of finding ways to reduce the number of experiments. One of these modifications was the Taguchi method, which we want to explain.
is method, which is a strategy to improve the quality of the process and achieve an enhanced product using the test design method, was first introduced by a Japanese engineer named Jenichi Taguchi in 1986. is method is derived from the design of a fraction of factors. e design is organized based on the minimum resources, time, and the number of possible experiments. e reasons for the efficiency of this method for the use of researchers and engineers can be named as follows: (i) Minimum tests (ii) Ability to check the effectiveness of the parameters (iii) Ability to analyze the signal to noise (iv) Determine the optimal levels of the selected levels e Taguchi method has made it possible to provide this vital information with a much smaller number of experiments and experiences. Taguchi developed a family of fractional factor schemes used in various applications.
(1) Standard method of analysis of variance (ANOVA) (2) Use a signal-to-noise ratio (S/N) e value of S/N expresses the degree of scatter around a specific value or, in other words, how our answers have changed between several experiments.
How do we know which value is better? ere are 3 relationships, each used to obtain this value. In the Taguchi method, a loss function calculates the changes between the results and the desired value, and this function has different modes depending on the problem conditions.
(1) A smaller value is best: SB � 1/n (y i ) 2 (2) Larger amount is best: LB � 1/n (1/y i ) 2 (3) Nominal size is the best: NB � 1/n (y i − y 0 ) 2 In these formulas, n number of iterations and y outputs have been measured. It should be noted that, in this research, we have used the signal-to-noise method.
is research seems necessary to optimally adjust the following parameters for the designed algorithm: number of replicates, initial population size, p(c) intersection, and p(m) mutation. For this purpose, first, the algorithm is implemented for a problem with suitable dimensions under 5 scenarios according to Table 7, and the mentioned more important factors are analyzed. e results are then used to fine-tune the parameters. In this method, the value of the resulting objective function must be normalized, which in this study To use the Taguchi method, it is first calculated according to the number of factors, the number of levels (scenarios), and the number of times the algorithm is executed. Figure 4 shows the appropriate number of times the algorithm is executed, calculated by Minitab16 software.
According to the software output, since the test is performed for 4 factors in 5 different scenarios, so the number of runs must be equal to 25, which is according to Table 8.
For example, row 5 refers to the fifth run of the test. e number of generations is selected from Scenario 1 and the remaining parameters from Scenario 5, or as another example in row 20, representing the twentieth time the test is performed. e number of generations is selected from Scenario 4, the population size from Scenario 5, the probability of occurrence of the intersection operator from Scenario 3, and the probability of the mutation operator from Scenario 1. Similarly, all tests are performed according to Table 8, and the results are reported. Because the best response is reported at different algorithm times, the results may be the same based on all scenarios. But parameters such as the number of generations can greatly impact the execution time of the algorithm. erefore, in the report of the algorithm response, in the iterations required for the Taguchi method and the final response of the algorithm, the execution time is also included and is directly added to the objective function.
Results of Taguchi Experiments to Determine the Optimal Value of the Parameter.
To examine the parameters of the genetic algorithm, we must obtain the optimal value of the parameters, which was done by designing an experiment using the Taguchi method, and the results have been determined.
P(m) P(c) Population size Number of generations
It can also be seen that the greatest effect on the responses reported by the algorithm is due to the population size factor.
Evaluating the Performance of Algorithms.
In this section, the numerical example is designed according to real-world conditions. e results of solving the mathematical model and the proposed algorithms are presented in Table 10. It is noteworthy that the results presented for the algorithms are the best answer among the 10 independent implementations. It should be noted that since the Epsilon method has a limit of 100 repetitions for the problem, it is reported that the first answer of Parthian point takes 55 seconds to check the answer of the first point, but solving every 100 points will take about 5500 seconds. erefore, in the timetable, the first output is considered. Signal-to-noise: Smaller is better According to the presented results, it is observed that, in small-dimensional examples, the results of solving the mathematical model and the proposed algorithms have zero computational gaps. is indicates the high efficiency of the proposed algorithms in obtaining the final answers. Gradually, with increasing dimensions of the problem, GAMS software could not solve the problem, so in large examples, the mathematical model could not solve the problem, and only the proposed algorithms provided answers. e quality of the answers provided by the algorithms in high dimensions can not be definitively commented. However, considering the proximity of the answers of both algorithms to each other and the zero computational gap in small dimensions, we can hope that the answers provided in high dimensions are of good quality.
Algorithm Sensitivity Analysis.
To analyze the algorithm's sensitivity, we must analyze the online criteria of the algorithm, such as the convergence rate of the algorithm concerning the iterations of the problem and the time to reach the final answer.
As shown in Figure 7, this problem has reached convergence in the same initial iterations. It shows that the algorithm has reached the optimal convergent solution very quickly, which indicates the algorithm's efficiency. As shown in Figure 8, the time ratio of GAMS software is much higher than MATLAB software, which indicates that the algorithm is better.
Conclusion
Cost management is the main structure for achieving strategic goals. Cost is created by consuming resources and is the same resources sacrificed to gain value. In this process, to save resources and costs, all costly activities that do not produce value must be eliminated, and value-added activities performed in parallel elsewhere must be combined. Also, those activities are added to the activities of the organization in a way to complete and improve the quality of services.
In this research, the problem of three-objective linear project scheduling has been investigated. In the first objective function, we tried to minimize the amount of delay, and in the second objective, we tried to minimize the amount of risk. Also, in the third objective function, we tried to maximize the profit. e exact Epsilon constraint method was used to solve this problem in low dimensions. e combined genetic metaheuristic algorithm and particle swarming were used to solve the problem in high dimensions. e low-dimensional Epsilon solution method was able to show the balance between the objective functions well, and examples with different dimensions for the problem were examined. Also, the project's critical path was described for the solved examples. Since the three parameters of delay, risk, and profit were reviewed to analyze the sensitivity of the problem, their relationships were examined in two-dimensional and three-dimensional. Taguchi's experimental design method was also used to adjust the parameters of the problem algorithm to determine the correct value of the algorithm parameters. e results examined in Chapter 4 show that the hybrid metaheuristic algorithm has less solution time than the exact method. In high dimensions, as observed, GAMS software could not solve the problem, but the hybrid metaheuristic algorithm could solve the problem in high dimensions.
Data Availability
e data used to support the findings of this study are available within the article.
Conflicts of Interest
e authors declare that they have no conflicts of interest. | 7,890.4 | 2022-03-31T00:00:00.000 | [
"Computer Science"
] |
Transformer Meets Remote Sensing Video Detection and Tracking: A Comprehensive Survey
Transformer has shown excellent performance in remote sensing field with long-range modeling capabilities. Remote sensing video (RSV) moving object detection and tracking play indispensable roles in military activities as well as urban monitoring. However, transformers in these fields are still at the exploratory stage. In this survey, we comprehensively summarize the research prospects of transformers in RSV moving object detection and tracking. The core designs of remote sensing transformers and advanced transformers are first analyzed. It mainly includes the attention mechanism evolution for specific tasks, the fitting ability design of input mapping, diverse feature representation, model optimization, etc. The architectural characteristics of RSV detection and tracking are then described across two aspects. One is moving object detection for motion-based traditional background subtractions and appearance-based deep learning models. The other is object tracking for single and multiple targets. The research difficulties mainly include the blurred foreground in RSV data, the irregular object movement in traditional background subtraction, and the severe object occlusion in object tracking. Following that, the potential significance of transformers is discussed according to some thorny problems in RSV. Finally, we summarize ten open challenges of transformers in RSV, which may be used as a reference for promoting future research.
Moving object detection and tracking are the fundamental premise for advanced visual tasks, such as scene content analysis and understanding [7], [34], [35]. They are widely used in intelligent monitoring, dynamic observation for moving objects, and other application scenarios. In addition, the well-developed object detection and tracking has good reference value and significance for remote sensing video (RSV) interpretation [36], [37], [38], [39], [40], [41], [42], [43], [44], [45], [46]. For example, improved natural models can be transplanted into RSV detection and tracking with great development potential and prospects for further research [47], [48], [49], [50], [51]. RSV moving object detection and tracking are discussed in this review, hoping to bring some application value. The relationship between each section is shown in Fig. 1.
For moving object detection (MOD), motion-based traditional machine learning methods label sparse foreground objects via modeling background information. They adopt the alternating direction method of multipliers (ADMM) [52] function iteratively. However, the complex background with sparse foreground characteristics in RSV makes noise affecting the model robustness as a significant research hot spot. It is sensitive to object irregular motion [53], [54], [55], which relies on interframe registration. On the other hand, appearance-based deep learning models mainly take advantage of feature learning from the convolutional neural networks (CNNs) [56], [57] and the recurrent neural networks (RNNs) [58]. They rely on many training samples while lacking semantic distinction for motion artifacts [56]. The attention mechanism is added to enhance the object semantic features or distinguish objects from the background, enabling the detection more accurately [38], [58], [59].
RSV detection and tracking still have great improvements in model representation and performance optimization [7], [34], [35]. Transformers show strong potential for dealing with temporal dynamics [93], [94], [95], [96], [97], [98]. RSV detection and tracking methods can be inspired and further improved from transformers with high efficiency and low latency performance. Therefore, a general overview of the application transformers in RSVs is needed, especially moving object detection and tracking, which will benefit to RSV interpretation. In this article, we mainly discuss the practical problems transformer can solve for RSV detection and tracking. The development of RS transformers is first analyzed. RSV moving object detection and tracking methods are then systematically investigated. The potential development of transformers in RSV object detection and tracking is discussed before raising ten open challenges. The primary contributions of this article are summarized as follows.
1) RS transformers are introduced from backbones to various downstream tasks, while the advanced transformers are from backbones to video and efficient transformers. It mainly discusses the input embedding, position encoding, and diversified feature designing to help readers grasp the research status effectively. 2) RSV detection and tracking are researched on the model optimization design and performance analysis. It mainly elaborates on moving object learning detection and object transformer tracking, which analyzes the model characteristics and research difficulties in detail. Besides, the datasets with corresponding evaluation indicators and experimental performance are also introduced. 3) Potential research directions of transformers in RSV detection and tracking are pointed out. Then, ten open challenges faced by transformer and RSV are discussed from the corresponding theoretical basis, bringing a good reference for promoting future work. The rest of this article, as shown in Fig. 2, is organized as follows. Section II describes the motivation for this review. RS transformers are briefly summarized in Section III. Section IV portrays moving object learning detection methods in RSVs. Section V explains object transformer tracking in RSVs, including SOT and MOT. Section VI discusses the potentials of transformers in RSV moving object detection and tracking, and Section VII provides ten promising open challenges. Finally, Section VIII concludes this article.
II. MOTIVATION
Transformer is essentially suitable for video tasks due to the sequence characteristics of the video [93], [99], [100], [101], [102]. It has recently been shown to closely resemble the structure of human hippocampus without the aid of any biological knowledge [32], [33]. Moreover, the attention mechanism in transformer imitates the selection mechanism in brain activity. Because of the support across these brain-inspired biological theories, the advantages of transformer interpretability are excavated more deeply [29], [103]. With the gradual increase in performance and memory requirements in RS field, RS transformers based on locality, feature diversity, and hierarchy have successively enriched the backbone networks [11], [104], [105], [106], [107]. Besides, their corresponding train techniques have improved to adapt different downstream RS tasks, which can effectively enhance spatial information under limited computing resources [14], [15], [108], [109], [110].
1) RSV data characteristics with the complex scene: Low contrast between foreground and background can lead to blurred object boundaries, which may be affected by object shadows or noise [36], [51], [62], [118], [119]. Meaningless geometric properties and motion patterns from outliers interfere with the model performance. In addition, the local redundancy of video data can introduce a lot of repeated calculations [7]. 2) Foreground separation and spatiotemporal information utilization for MOD: The traditional model relies on motion information, which is less sensitive to irregularly moving foregrounds and more to texture changes [120], [121], [122]. For deep learning models, it is crucial to effectively use motion information and spatiotemporal continuity for preventing false/missed alarms with achieving efficient detection [4], [57], [58], [116], [123]. Transformer can not only be used to enhance the object semantic features, but also improve the long-range modeling ability due to the video sequence nature [96], [97], [101], [124].
III. RS TRANSFORMERS
Transformer, composed of pure attention mechanism [131], [132], has been shown effective for long-term relationship construction as an encoder-decoder mode in natural language processing tasks [30]. Besides, the reason for the excellent performance of transformer is not only multihead self-attention (MHSA), but all the components in the block are playing a role [133], [134]. Next, we will introduce the transformer preliminaries, RS, and advanced transformers.
A. Transformer Preliminaries
Transformer mainly contains position encoding module, multihead attention mechanism, feedforward network (FFN), residual connection, and layer normalization module. The overall architecture is illustrated in Fig. 3. Next, we will describe the encoder and decoder module from the image processing perspective.
The output is finally passed to the decoder after N 1 encoder stacks.
a) Input embedding: The input elements are embedded in distributional space W to make the machine process the input sequences [30].X whereX ∈ R h 2 C×N is the flattened patches' sequence of the input X ∈ R H×W ×C . (H, W ) and C are the resolution and channel of the input, respectively.x i ∈ R h 2 ,C is the ith flattened patch, (h, h) denotes the resolution of each patch, and N = HW/h 2 represents the number of patcheš where the outputX is the patch embeddings generated by the flattened sequenceX map to W through the embedding matrix E. b) Position encoding: The RNN is a linear sequence that naturally encodes the position information into the model. The convolutional layer of the CNN retains position-relative information, while transformer, which contains no recurrence, learns the position information through the hidden state computation. The position information is beneficial to transformer [141] X =X + P where the positional encoding P has the same dimension as the patch embeddings. It adds toX for supplying the positional information. Besides, there are kinds of position encodings, as shown in Fig. 4, like sinusoidal functions [10], [30], [108], [124], relative positional encodings [25], [136], [138], [142], learnable embeddings [12], [22], [141], [143], [144], and dynamic position encoding with depthwise convolution (DWconv) [145]. c) MHSA mechanism: As an essential part of the transformer model, MHSA operates differently with modular neurons. Transformer with several head attention layers reproduces the contents of memory during computation [146], [147], [148]. It shows that transformer can move information to the output and other places in the context. As shown in the lower left part of Fig. 3, the MHSA mechanism A as the core part of transformer concatenates each self-attention outputs A i , which defined as follows: [13], [14], [30]. (b) Spatial reduction attention [24]. (c) Pooling attention [150]. (d) Efficient attention [28].
where m is the number of heads. The self-attention mechanism collects the relevant information between each token to other tokens in the sequence. As shown in Fig. 5(a), it can be calculated as The single-head self-attention result A i is computed by the dot product between the Softmax function with value V i . Besides, the attention matrix Q i K T i is normalized as a probability distribution by the Softmax function.
iX are intermediate representations of the input tokens X, usually represent as different from linear transformation of tokens [149]. M q i , M k i , and M v i are the learned weight matrices for the query, key, and value, respectively. Different single-head self-attention results can be constructed by mapping the tokens with varying weight matrices.
To reduce the computational complexity of the transformer model, most methods modify the attention module with different perspectives, especially the computation of attention weights [151]. For example, ShiftViT replaces the attention mechanism with a partial shift operation [152], [153]. Some frameworks change the attention weight calculation to the firstorder approximation of Taylor expansion, which reduces the computational complexity to linear [154], [155]. SwinV2 proposes a scaled cosine function to replace dot product operation [156]. d) Add and norm strategy: Transformer leads to loss of practical information and gradient vanishing problem due to the stacked layers [133]. Some frameworks have proved that residual connection and layer normalization can solve the above problems [133], [134]. As shown in the upper left part of Fig. 3, the specific performance takes three different forms. They are written asà Here, post-norm [30], res-post-norm [156], and pre-norm residual units [157] are defined in (6)-(8), respectively. represents the input matrix of the residual normalization module, which is the output matrix of the MHSA mechanism after the residual and layer normalized computation. LN function expresses the layer normalization. e) Feed forward network: This module is crucial to the entire transformer structure, which takes the averaged attention values and transforms them into a more tractable form before inputting the next layer [152]. It usually presents in the following form: FFN  = ReLU LinearLayers  .
We take (6) representation as an example. FFN consists of a linear layer and an activation function [157].
2) Decoder Module: Each autoregressive decoder takes the previously generated decoder result as input when developing the next consequent. Its components are similar to those of the encoder; difference is the masked MHSA and the multihead cross-attention mechanisms. a) Masked MHSA mechanism: This mechanism has the same structure as the MHSA in the encoder. The discrimination is the input tokens that need to be masked by adding −∞ [158], [159], that is, just relying on the token information at the current subsequent without any future information [141] A where Q i , K i , and V i are the projection results between the input tokens X to the corresponding learned linear matrices M q i , M k i , and M v i . M i is the mask of the ith head self-attention. b) Multihead cross-attention mechanism: The input is designed to handle two embedded inputs with the same dimension, which is different from MHSA. The key-value pairs come from the same input, and the query from another. It can capture contextual information more effectively [161]. The multihead cross-attention mechanism in the decoder module can be written as follows: The inputs K and Q for calculating the attention weight matrix are the encoder result and the masked MHSA mechanism output of the decoder, respectively. Besides, the encoder output V is assigned attention weights to highlight the interest regions [162]. Some methods construct cross attention from a clustering perspective to improve the model rationality [163], [164].
B. RS Transformers
Transformer has developed so fast in RS field. The difference between RS transformers is mainly reflected in the following three aspects: 1) processing: a definition that maps a specific task to model input/output in a sequence of vectors; 2) diversity of position embedding types: like sinusoidal functions [30] and learnable embeddings [141]; 3) efficient transformer designs: such as specific-question structured sparsity patterns in masked attention. Various RS transformers are listed in Table I . They are briefly summarized in this section, such as transformer backbones for feature representation learning and high/mid-level and low-level transformers in RS interpretation.
1) Transformer Backbones: They are gradually expanding in RS field. The supervised and self-supervised learning RS transformers will be discussed in the following subsection. a) Supervised learning transformers: A straightforward approach is to replace the backbone with transformer blocks [12], [104], [105], [154], such as MAP-SwinT [104] replaces ResNet in MAP-Net [165] with SwinT block to achieve multiscale feature extraction. Some models add specific modules to their backbones for feature enhancement [11], [166]. The value tokens of the CTN model are calculated by the 2-D convolution layer, as shown in Fig. 6(a), realizing the combination of convolution and transformer [11].
Swin transformer has a good development in some RS tasks [20], [106], [107], [167], [168]. SwinT blocks are adopted as encoders in semantic segmentation. They construct corresponding decoders to generate enhanced semantic features [106], [167]. For generating high-quality RS image time series, SwinSTFM [107] proposes a feature extraction and fusion module composed of SwinT blocks in Fig. 7. An unmixing-based fusion block is introduced in the multilevel fusion module to complete the fusion of features at different levels. SwinSUNet [20] designs a pure transformer network with the Siamese U-shaped structure [169] at the image change detection task in Fig. 8. In the low-level vision task of pansharpening, DR-NET uses the SwinT blocks to process the multispectral and panchromatic images separately before performing feature fusion [168]. Besides, it introduces convolutional block attention module (CBAM) and efficient channel attention [170] in an image reconstruction stage to enable the network focus on crucial information, thereby obtaining images with uniform spectral information and sufficient spatial details. b) Self-supervised learning transformers: Selfsupervised learning is a variant type of unsupervised learning, which uses self-supervision to analyze the laws and key information in the datasets [10], [46], [171], [172], [173]. It learns a general feature representation to make the model transferable for downstream tasks [10], [22], [174]. Using the label-free self-distillation contrastive learning mechanism, LaST captures long-range contextual information of RS images with the SwinT backbone [174]. It solves the hard negative sample problem by self-distillation contrastive learning.
As a self-supervised pretraining transformer, BERT [175] achieves good generalization performance in RS. HSI-BERT [22] introduces BERT into hyperspectral image (HSI) classification to capture the global dependencies across pixels. The pixel embedding, which contains a learned linear transformation and a learned positional embedding, is used in all input dimensions. SITS-BERT [10] adopts a BERT-based self-supervised learning for model pretraining. It captures spectral-temporal features in RS image time-series classification tasks after fine-tuning.
2) High/Mid-Level RS Transformers: They are mainly described in image classification, object detection, semantic segmentation, and change detection tasks. a) Image classification: It has crucial research value as a primary RS interpretation task mainly based on transformer or Fig. 9. Local-enhanced transformer block [19]. neural network. Most of frameworks use a hybrid scheme to improve modeling capabilities.
CNN-enhanced transformers: They use ViT or SwinT variants as a central framework to perform feature extraction on different RS images. It is found that MHSA and convolution modules exhibit opposite behaviors, which resemble low-pass and high-pass filters, respectively [176]. Therefore, CNNs and transformers have been fused differently to promote the representation learning.
Generally, for HSI classification, most models first adopt a convolutional network to map the image as corresponding convolutional features and then use a transformer model to achieve subsequent classification [13], [18], [108], [109], [143], [177], [178]. In addition, DHViT adopts a convolutional token embedding to adjust tokens [18]. SSFTT proposes a Gaussianweighted feature tokenizer module by adding a Gaussian distribution weighted matrix [13]. It makes the tokens conform to the distribution characteristics of the sample. Moreover, some methods directly split the image and input it into transformer after flattened [11], [19], [22], [174]. SPRLT-Net proposes a spatial partition restore module to extract complex spatial relationships [19]. The flowchart is shown in Fig. 9, where the spatial partition module splits the HSI patch into several overlapping subpatches centered on a pixel. At the same time, the spatial restore module is used to aggregate all subpatches to a feature map. Some models have improved the self-attention mechanism to realize spectral awareness for HSI. For example, BS2T introduces a multihead spatial-spectral self-attention module, which acts as the spectral information on the attention weight matrix [177]. HiT proposes a conv-permutator module with the DWconv operations to encode spatial-spectral features from height, width, and spectral dimensions [109]. SSTN proposes a spatial attention and a spectral association module [178]. Among them, the spectral module generates masks through a 3-D convolution operation on spatial information to model the correlations between spectral kernels and spatial information. Besides, it finds the optimal architecture setting by the factorized architecture search framework to achieve better accuracy.
To further enhance the overall performance of transformer, some frameworks use parallel design between transformer and CNN to extract local and global information [160], [179]. As shown in Fig. 6(b), CTNet concatenates the semantic features of ViT streams with the local structural features of CNN streams to predict sample labels [160]. GLNS designs a fusion network to integrate the output features and uses a twofold loss function [20], [167], [180], [181], [182].
to compact the classification features [179]. MSTNet proposes a multilevel feature aggregation decoder to improve the feature expression ability, which fuses different level features generated by other transformer encoder blocks [143]. For joint classification of hyperspectral and light detection and ranging data, DHViT proposes a spectral sequence and a spatial hierarchical transformer module [18]. The former sends the flattened feature vector to transformer for extracting spectral features. The latter extracts the spatial features of these two modal data. Finally, a cross-attention module is presented to exchange the classification and patch tokens from different modal features for achieving the heterogeneous feature fusion.
Transformer-enhanced CNNs: As an essential means to obtain discriminative features, the attention mechanism can effectively improve the modeling ability. The model with different attention mechanisms represents different information [183].
In the channel feature learning, the convolution operation, which fuses all the channels by default, pays more attention to the receptive field, while some models use the channel attention mechanism to realize adaptive enhancement of feature weights of virtual channels [180], [186], [189] or to strengthen the correlation between channel features [181]. The calculation process of channel attention is shown in Fig. 10. The feature undergoes a global average pooling module and two fully connected layers, realizing the channel weighting. Notably, SAFF proposes a nonparametric self-attention layer, which sequentially weights the spatialwise and the channelwise [186]. CAG proposes a cross-attention mechanism consisting of a horizontal and vertical attention mechanism [184]. It uses a combination of weight multiplication and maximum weight matching strategies to expand the feature difference.
Some models adopt the self-attention mechanism instead of spatial convolution operation to capture long-distance information relations effectively [187], [188], [189]. WFCG proposes a position and a channel attention module composed of the self-attention mechanism to simulate spatial and channel attention [187]. These two modules are concatenated in series to capture higher level abstract HSI feature information. In HSI classification, the spectral attention mechanism is introduced to capture long-range dependencies of feature maps [188], [189]. In particular, a feedback spatial attention module using multiscale spatial information and a feedback spectral attention module are proposed in FADCNN to strengthen semantic information in the spatial-spectral dense networks [189].
b) Object detection with transformers: The attention mechanism is mainly used for feature enhancement. IAANet adopts MHSA to model the coarse-grained candidate regions at pixel level and outputs attention-aware features to distinguish objects from the background [14]. In the design of channel Fig. 11. CBAM [168], [193], [194], [202], [204], [209]. feature correlation, SSE-CenterNet introduces a spatial shufflegroup enhance attention module, which shuffles the channels to improve the relationship between groups [192]. It divides the feature map into multiple groups along the channel dimension and generates an attention factor at each spatial location within each group to learn higher level semantic information.
The CBAM extends of the channel attention [194]. As shown in Fig. 11, it includes channel and spatial attention in series and applies to multiscale feature enhancement in the object detection field [193]. Some models use improved hybrid attention modules to perform multiscale feature enhancement [195], [196]. They obtain the final features after multiplying by the generated attention weighted map with the original feature map to highlight object features. For example, RSADet proposes a lightweight scale attention module, including a parallel spatial and a channel max pooling submodule [195]. FPN-MSDAM proposes a multiscale deformable attention module, which cascades multiscale features through channel axes and generates attention maps using a convolution layer and a sigmoid function [196].
c) Semantic segmentation with transformers: Some models use the transformer blocks [103], [210] as the encoder to extract multiscale features and design the decoder with different attention modules for feature fusion and refinement [105], [106], [167]. For example, SETR-MFPD designs a dimension attention module including channel and spatial attention mechanisms to connect the multiscale feature pyramid decoder [105]. DC-Swin designs a decoder with a densely connected feature aggregation module [106]. As shown in Fig. 12(a), it generates enhanced semantic features through a shared spatial attention (SSA) and a shared channel attention (SCA) with cross-scale connections. Another method feeds the local features and global contextual features into the transformer encoder to realize dual-branch semantic correlation [15]. The projected local feature tokens are set as query, and the contextual feature tokens as key and value.
The attention mechanism variants can be incorporated into the network backbone for capturing feature correlations [151], [197], [199]. In the channel attention mechanism designs, UDA-SS proposes a covariance-metric-based channel attention module to an unsupervised framework [199]. It assigns high weights to feature maps with high covariance through convolution and channel correlation computation for representing other feature maps. SCAttNet cascades the channel and spatial attention module in CBAM [197]. MANet proposes a multiattention network that combines kernel and channel attention mechanisms to refine information in positions and channels [151]. Among them, the kernel attention mechanism uses the kernel smoothers to replace [106]. (b) SwinT embedding U-Net model [198]. the attention weight matrix calculation. The channel attention uses the attention weight calculation based on the dot product.
For the U-Net backbone improvement, the attention mechanism enhances the feature extraction ability with well segmentation accuracy [183]. MaResU-Net replaces the skip connections of the baseline network with a linear attention mechanism and adopts the 2 -norm to ensure nonnegativity [155]. ST-UNet introduces a relational aggregation module (RAM) to integrate the SwinT block into the U-Net encoder hierarchically [198]. As shown in Fig. 12(b), it proposes a spatial interaction module (SIM) across window MHSA (W-MSA) and shifted window MHSA (SW-MSA) blocks to improve modeling capabilities. This module includes dilated convolution and global average pooling operations. A feature compression module (FCM) consisting of a soft pooling operation and a bottleneck block with dilated convolution is introduced to improve the segmentation accuracy of small-scale objects while preserving details. STrans-Fuse proposes a parallel two-branch structure of SwinT and CNN [2]. It designs an adaptive fusion module based on the self-attention mechanism to enhance spatial details selectively. d) Image change detection with transformers: This task is to identify surface changes from a pair of bitemporal RS images covering the same place. Some models concatenate the multitemporal feature maps into the transformer encoder to achieve spatiotemporal context modeling and then input the enhanced features into subsequent convolutional layers for generating the final prediction results [3], [207]. CDViT proposes a transformer block composed of two cascaded MHSAs to model the spatial and temporal context features [3]. BIT uses transformer to enhance the original features [16]. It proposes a Siamese semantic tokenizer to generate two token sets from the extracted bitemporal features. The cascaded token sets are fed to a ViT encoder and then sent to a Siamese transformer decoder after splitting.
For the multiscale features, transformer blocks are used to operate on different scale features [21], [110], [201]. MSCANet introduces a spatial attention module for token embedding and designs a transformer structure for each scale [21]. It also proposes a contextual aggregation connection to aggregate high-level decoding features into low-level features for fusing multiscale information.
The attention mechanism plays a vital role in the consistency of cross-temporal features [200], [202], [203], [204], [206]. For example, Bi-SRNet adopts the self-attention mechanism in both temporal and change branches [206]. Remarkably, a crosstemporal semantic reasoning block is proposed in the change branch, where attention maps are projected on its opposite temporal branches. DASNet adds a dual-attention mechanism to obtain distinguishable feature representations [200]. As shown in Fig. 13, it consists of a spatial attention module for modeling local contextual features and a channel attention module for long-range semantic dependencies. Besides, CBAM is used for obtaining more discriminative multiscale features [202], [204]. SRCDNet proposes a stacked attention module with multiple CBAMs to enhance adequate information in hierarchical features [204]. DARNet introduces a hybrid attention module to fuse bitemporal multiscale features [203]. It contains an efficient spatial-temporal attention module with cross attention to capture the long-range feature dependencies and a channel attention module in CBAM to model the channel contextual information. A residual connection is finally added to facilitate the error backpropagation. e) Other image processing fields with transformers: The attention mechanism, especially the channel attention module, plays an essential role in modeling key features [182], [191]. In the satellite image time-series classification task, CA-TCN adds a channel attention block to enhance the critical feature in [208]. MS and PAN mean multispectral and panchromatic, respectively. the channel dimension and mine deeper phenological information [182].
3) Low-Level RS Transformers: In the image despeckling field, SAR-CAM introduces a continuous attention module, which consists of multiple concatenated residual channel attention blocks (RCABs) and CBAM with residual connections [209]. The RCAB adopts the channel attention module with residual connection to make the network focus on highfrequency channel features.
The convolutional features are used to implement transformer modeling [17], [130], [208]. For the multi-image super-resolution task, TR-MISR proposes a transformer-based fusion module to fuse low-resolution image features after the encoder [130]. The fused features are input into the decoder for obtaining high-resolution images. Not only that, some models design parallel transformer and CNN branches [17], [208]. In the super-resolution HSI restoration, Interactformer proposes an interactive attention unit through elementwise multiplication to adjust the information interaction of branches [17]. In addition, a separable self-attention module is designed in the transformer branch to achieve linear complexity calculation. It obtains attention weights at the width and height dimensions of features and, finally, acts on the input in turn. PAN-Tran designs a pan-sharpening transformer in the transformer branch to realize the fusion of panchromatic and multispectral image features [208]. As shown in Fig. 14, this branch contains a hard-attention and a soft-attention module to fuse the two kinds of image information.
1) Transformer Backbones:
Similar to the type of RS transformer backbones, the supervised-learning-based, selfsupervised-learning-based, and reinforcement-learning-based transformers are introduced.
a) Supervised learning transformers: We divide the supervised transformer backbone into the pure transformer and the convolutional transformer backbone for easy distinction.
Pure transformers: ViT, which only uses transformer encoder, requires a lot of training data and needs to be developed regarding feature and data diversity [210]. In the input token operations, PVT adopts a spatial reduction operation to reduce the spatial dimension of key-value pairs [24]. As shown in Fig. 5(b), it realizes the downsampling of input sequence, while PVTv2 replaces this with an average pooling operation [212]. MViT series introduces the pooling constraints [23], [150]. As shown in Fig. 5(c), it incorporates decomposed relative position embeddings and uses the residual connection to compensate for the pooling strides effect in attention computation [150]. LV-ViT adds local supervision on the output of each patch, which exploits the complementary information between the patch and class tokens [213].
Some transformers focus on designing attention mechanisms [214], [215], [216], [217], [218]. For the local attention mechanisms, Focal Transformer designs three window levels for each query by incorporating fine-grained local and coarsegrained global interactions [214]. ELSA proposes an enhanced local self-attention [215]. It has a Hadamard attention with Hadamard product to generate local attention efficiently and a ghost head inspired by GhostNet [219] to increase channel capacity. To capture long-distance information, BOAT proposes a bilateral local attention, which uses a feature-space local attention as a supplement to the image-space local attention [216]. To improve the patch feature expression ability in the local area, transformer nesting methods divide the patch into several subpatches in a nested way and pass through inner and outer transformer blocks in turn after flattening [217], [218].
Different from the standard transformer block in Fig. 15(a), some different attention mechanisms are stacked in the transformer block, which achieves two consecutive attention mechanisms in Fig. 15(b) [103], [156], [220]. SwinT proposes a W-MSA and an SW-MSA, which realizes cross-window connections as well as expands the receptive field [103], while Twins-SVT stacks the global subsampled attention and the locally-grouped self-attention, achieving an effective attention paradigm [220].
Convolutional transformers: The convolutional token embedding can be incorporated to capture the local information [140], [221]. CvT replaces the linear projection of input tensors with convolutional projection to reduce semantic ambiguity [221]. To further control the interest region of the transformer model, DAT proposes a deformable attention module that shifts key-value pairs to target regions by a query-independent offset network [25]. NAT controls the receptive field of each token within its neighborhood range by taking the position corresponding to the query as the center [139]. Besides, DWconv performs well in reducing data dimensions and maintaining network performance. For example, replacing the entire or part attention calculation [223], [227], designing the positional encoding [142], and expanding the receptive field in FFN [212], [230].
The attention mechanism can be replaced to achieve stable performance with less computational overhead [140], [142], [222], [224], [227], [246], [247]. CSWin performs the selfattention operations on horizontal and vertical stripes in parallel [140]. It adjusts the stripe width according to the network depth. VAN proposes a large kernel attention module, which captures long-range relationships through a decomposition diagram of large-kernel convolution operations [222]. CoaT designs a conv-attentional module, which adopts a co-scale mechanism to predict results using a series of serial and parallel blocks [142]. ACmix proposes a two-stage manner to integrate convolution and self-attention [224]. Moreover, the parallel strategy can be used to realize the fusion of convolution and attention [227], [246]. As shown in Fig. 16, Mixformer adopts the bidirectional interactions to enhance the model ability across branches simultaneously [227].
The high-resolution architecture could be integrated with visual transformers to enhance cross-resolution interactions [229], [230]. Besides, HRViT performs a heterogeneous branch to optimize key components of the model jointly [229]. The mix-block is designed to reduce the computational cost and achieve efficient networks. ViTAE stacks the reduction and normal cells to form different variant structures [228]. The reduction cell obtains multiscale context information through the pyramid reduction module. It uses MHSA and parallel convolutional modules to model long-range dependencies and local context. The normal cell has a similar structure to the former except for the pyramid reduction module. Conformer designs a dual-branch structure with CNN and transformer [226]. It fuses representations through a feature coupling unit module.
In the ResNet bottleneck block applying and improving, TRT-ViT follows the hierarchical route from the stage to block and forms hybrid architectures with the bottleneck in a standard transformer [225]. BoTNet designs a bottleneck transformer block, which replaces the convolution layer with MHSA [231]. It significantly improves performance by replacing the last three bottleneck blocks with the designed block. RepLKNet replaces the self-attention with a depthwise large convolution kernel, resulting in a larger effective receptive field [248]. b) Self-supervised learning transformers: Visualtransformer-based self-supervised learning frameworks have been proposed to learn features with more substantial generalization [26], [245]. SimMIM proposes a self-supervised learning framework based on masked image modeling to learn semantic information [26]. As shown in Fig. 17, it randomly masks some input patches and predicts the masked patch values by a transformer encoder and a lightweight one-layer prediction network. Swin UNETR transfers it to the medical image pretraining, achieving good experimental results after fine-tuning [232].
c) Reinforcement learning with transformers: Reinforcement learning is adopted to make the model learn attention decision for deciding the focused perceptual area before the proposed transformer [249], [250]. To make transformer suitable for the reinforcement learning optimization process, GTrXL designs a gating layer to replace the residual connection with incredible model stability [27], as shown in Fig. 18. On this basis, AT-RL adds an adaptive attention span to selectively focus on past time steps, improving the attention computational efficiency [233]. CoBERL combines GTrXL with long short-term memory (LSTM), and BERT [175] with contrastive objectives to learn a better representation [234]. As for offline reinforcement learning, the agent only learns from the limited data without environmental interaction. Transformer has shown great potential [235]. (b) Long short-term transformer [240]. and mines optimal policies in data with its powerful sequence modeling ability [251], [252].
2) Video Transformers: Video tasks need to deal with temporal dynamics information. The current transformer-based models have been extensively explored with the development of pure video transformers [93], [101], [235], [236], [237], [238]. a) Video classification: Some models focus on improvements to the transformer block. ViViT migrates transformer from image to video tasks and proposes different structural paradigms [93]. It develops improvement strategies in feature embedding, spatiotemporal encoder, and self-attention. As shown in Fig. 19(a), TokShift-xfmr designs a token shift module [235]. It swaps the partial content of the current frame with neighboring time stamp for modeling the temporal relationship within the transformer encoder.
b) Video action recognition: The temporal attention mechanism is introduced to make the model learn dynamic scenes efficiently with increased memory consumption. X-ViT restricts temporal attention to a local temporal window for achieving space-time attention with linear complexity [238]. It exploits the depth of transformer to obtain full temporal coverage of video sequences. And different positional embeddings are designed for space and time tokens.
Some models employ divided temporal and spatial attention instead of self-attention to aggregate spatiotemporal information [94], [98], [236], [237]. VidTr proposes a topK pooling operation based on the standard deviation in the temporal attention [94]. It reduces the temporal dimension and eliminates the redundancy caused by the same content in multiple frames. Besides, Motionformer designs an approximation scheme to speed up the calculation [236]. TIME designs a self-supervised model to learn the video temporal dynamics, eliminating spurious correlations in the spatiotemporal dynamics [101].
c) Video restoration: A neural network is used for feature extraction, while transformer is for feature alignment and long-term dependence modeling [95], [124], [239]. As shown in Fig. 20, ET-Net proposes a token pyramid aggregation strategy with transformer to model the internal correlation and intersected correlation of tokens [124]. VRT designs a temporal mutual self-attention to achieve feature extraction and alignment [95]. The proposed attention connects multihead mutual attention and MHSA in parallel. In addition, the attention mechanism plays an essential role in temporal feature processing, effectively highlighting object edge features [239], [253]. d) Video object and instance segmentation: VisTR applies an encoder-decoder transformer to model feature similarity and instance feature prediction in the temporal order [96]. STM uses the attention mechanism to perform calculations between the image information of the current frame and the object masks of past frames [99]. In particular, as shown in Fig. 19(b), AOT proposes a long short-term transformer to model LSTM [240]. It designs an identification mechanism to achieve a unified object segmentation strategy, which embeds multiple-object masks into a feature space. e) Video reasoning: In future frame modeling tasks, learning the spatial relationship and object dynamics is vital [137]. AVT designs a ViT encoder for each video frame to anticipate future actions [97]. To reduce memory consumption, it proposes a causal transformer decoder using causal masking to focus on specific input parts. OCVT takes targets as the center based on unsupervised learning, which encodes the scene as tokens and uses transformer to learn the spatiotemporal dynamics between targets [137].
f) Video frame interpolation: It aims to synthesize intermediate frames in video frames for improving the frame rate. The CNN and transformer are combined to improve the attention in the transformer block, achieving the long-distance pixel correlation [98], [241]. VFIformer designs a cross-scale window-based attention mechanism to expand the receptive field and gather multiscale information [241].
3) Efficient Transformers: An efficient transformer with low latency and high parameter efficiency has always been crucial [228], [242], [243], [244]. It could run efficiently on resource-constrained hardware with improved representation by adjusting the loss function, training, or modeling techniques. It introduces the aspects of model design and knowledge distillation. a) Transformer model designs: Efficient self-attention is crucial for long sequence modeling. ResT adds a DWconv operation to MHSA for reducing the dimensions of key-value pairs [28]. As shown in Fig. 5(d), it adds a convolutional operation to the attention weight calculation for increasing the interactions among different heads.
The mix-block could reduce the computational cost and achieve efficient networks. SPViT proposes a weight-sharing scheme between MHSA and convolutional operations, which adopts a single-path search space to formulate the operation search as a subset selection problem [243]. Alternatively, some methods choose to stack blocks alternately [228], [242], [244]. CoAtNet alternately stacks the DWconv and self-attention to design the model cleverly [242]. Next-ViT effectively stacks the next convolution and transformer block by a next hybrid strategy [244].
b) Knowledge distillation: DeiT designs a transformerspecific distillation to improve ViT, distilling the teacher CNN backbone network into a transformer-based student model [29]. It interacts with a distillation token from the teacher model with the patch embedding. In this way, the student model can improve its training speed and quality. Based on DeiT, DINO introduces self-distillation with no labels, which combines the proxy task in BYOL [172] and ViT for self-supervised learning [245]. It learns the semantic segmentation representation of the input image efficiently.
IV. MOVING OBJECT LEARNING DETECTION IN RSVS
The primary purpose of MOD is to locate and identify continuously moving objects for a given video and then track these objects successfully [4], [35], [36], [116], [254]. This field is generally divided into motion based and appearance based. The former models background for realizing the foreground motion detection. The latter applies the artificial neural network to extract the motion and appearance information of objects. The categories of MOD have been classified in detail in Table III. The modeling of the traditional motion-based methods and the construction of the appearance methods are briefly summarized. It can be a clear understanding to MOD. The last part of this section introduces the attributes and characteristics of some popular datasets, as well as evaluation metrics. The intention is that our introduction should be helpful for readers to have a holistic understanding of MOD. In the following, we will introduce these two models separately to make readers more aware of MOD development.
A. Motion-Based Models
These frameworks mainly detect moving objects according to motion patterns, which adopt frame difference and background subtraction to separate the foreground and background [120], [121], [122]. Frame difference eliminates most of the unchanged background through computing pixelwise differences in intensities between consecutive frames and then extracts moving objects. Background subtraction is the mainstream method in traditional MOD, which achieves foreground detection with different background models.
1) Frame Difference: It has advantages with high efficiency and low memory consumption. The difference calculation between frames is a relatively simple operation to eliminate background information where d t is the tth video frame, and d t represents the absolute interframe difference between foreground and noise. The current frameworks mainly learn how to separate the foreground objects after frame difference calculation. The most direct method performs a threshold to separate moving objects and background, while different threshold selection affects the number of moving objects [120]. AMS-DAT designs a binarization threshold under the premise of object scale invariance [120]. Some frameworks use prior morphological information to remove background noise [35], [54], [55], [255]. For differentiating foreground from noises, PDT proposed a local noise model via fitting noise patterns with a probability distribution [55]. AMS-DAT uses the spatiotemporal continuity of object motion to eliminate false detections [120].
Three-and multiframe difference methods have been proposed to detect irregularly moving objects [35], [256]. VTD-FastICA takes three consecutive frames as input and uses an improved independent component analysis method, FastICA, to integrate image information in the space domain [256]. MMB models frames as perturbed low-rank matrices to detect slow-moving objects and uses a pipeline filter to draw the trajectory [35].
2) Background Subtraction: This traditional method mainly separates a video sequence into foreground and background, which labels the moving objects through the background model. It can be divided into the following steps.
1) Given a video sequence with n frames
where d t is the tth vectorized video frame, and s is the pixel values contained in each frame. Generally, the sequence is decomposed into three components, namely background matrix . . , r n ] ∈ R s×n , and noise matrix E = [e 1 , e 2 , . . . , e n ] ∈ R s×n . 2) The low rank is generally imposed on the background and sparsity is on the foreground. The optimization problem is defined as arg min Authorized licensed use limited to the terms of the applicable license agreement with IEEE. Restrictions apply. where λ 1 > 0 and λ 2 > 0 are the weights of the foreground term Ω(R) and the noise term E 2 F , respectively. Rank() expresses as a low-rank matrix factorization, and Ω() refers to the structured sparse induced norm of R, which is generally expressed as f ∈F R 1 / ∞ to promote the sparsity on the foreground [257]. · F represents the Frobenius norm. E is used to represent noise explicitly, so that the model can better reflect the fundamental data structure. The background is highly correlated in a lower dimensional subspace, while the foreground exists as the sparse outliers in the background. 3) To solve (13), the ADMM is used for optimization, which transforms a multioptimization-variable problem into a single-optimization-variable problem [254], [258], [259].
These models are mainly assigned to the statistical background and sparse background models, which will be introduced in the following subsection. a) Statistical background: It models the background by the adjacent pixel information, and its input features are pixel level and region level.
For the background construction, HMAO regards to background and foreground as peer unknown variables, which decomposes the background into temporally low-frequency and high-frequency components [36]. VSF-BST utilizes thermal pixel intensity and spatial video salient feature, named Akin-Based Local Whitening Boolean Pattern (ALWBP) feature descriptor [118]. It considers the effect of other neighboring pixels, discriminating foreground in flat cluttered regions. AV-BSM proposes a real-time adaptive vector-based background subtraction [260]. Each pixel is transformed into a vector with a spatial-temporal signal through a vector representation method. It uses the specified time interval scheme to initialize the background model.
For foreground detection, AV-BSM determines whether it is a foreground object by calculating the number of vector collinearity [260], while some methods employ the Markov random fields for improving the robustness [36], [118].
b) Sparse background: This method mainly decomposes the video sequence into a low-rank background and sparse foreground [257], [261]. The background is highly correlated in a lower dimensional subspace, while the foreground exists as the sparse outliers in the background.
Background modeling estimates the rank minimization of background based on principal component analysis (PCA). SLRC proposes a dedicated background model for multiscenario video sequences, which uses dictionary learning-based sparse coding to represent the background model for each scene [262]. MODSM imposes the saliency map on the background, enabling the estimated foreground with high-level semantic objects and fewer false alarms [263], while foreground modeling generally emphasizes the smooth constraint of foreground boundary to reduce noise influence. Since moving objects are a collection of spatially correlated pixels, structured sparse is mostly adopted instead of pixel sparse. SLRC adds contextual regularization and sparse representation into the foreground model [262]. KRMARO integrates kinematic regularization into the principal component pursuit of the foreground, which uses the Euclidean distance and motion angle to model the motion of the candidate region [121]. 3DTV-RPCA presents 3-D total variation regularization to achieve the continuity of moving objects [53].
For the optimization problem of the objective function, ILR-SUSD proposes an inexact alternating direction method based on augmented Lagrange multiplier and proximal operators to solve the optimization problem [115]. E-LSD provides the direct expansion of the ADMM to solve video with poor spatial resolution and low contrast [254]. SLRC develops a three-stage alternating optimization method consisting of the SOFT-IMPUTE method, PALM, and 2-D FFT [262]. MCMD develops a batch optimization method with ADMM and an online stochastic optimization method [258]. KRMARO integrates a backtracking behavior into an inexact augmented Lagrange multiplier, which obtains the moving objects only when the frames are optimally aligned [121]. To eliminate the satellite motion influence and reduce false alarm rates in video frames, MCMD proposes a moving confidence score through the dense optical flow estimation to emphasize the difference between real object motion and satellite movement [258]. 3DTV-RPCA introduces an auxiliary variable to model noisy data for reducing noise impact [53].
Unlike the above robust PCA-based methods, O-LSD is an online structured sparse model combining the stochastic optimization and the structured sparse penalty to improve update estimation [264]. STOMF proposes a temporal difference motion prior model to obtain the motion information matrix and weight matrix for extracting the entire motion regions [122]. Besides, a postprocessing method is presented to detect normal-scale and small-scale moving objects using partial spatial information reconfirmation and partial spatial background information reuse methods.
Tensor, a higher dimensional data structure than 2-D matrix, is more appropriate for capturing higher order relationships in data. WSNM-STTN decomposes the video frames into tensor form based on E-LSD [254] and applies a weighted Schatten p-norm to the background for providing an adaptive threshold [259]. TLISD proposes a tensor low-rank and invariant sparse decomposition method for background [119]. Based on the tensor PCA, 3D-PSCATV-CS provides an automatic weight assignment to the singular value tubes of the background tensor [265].
In the foreground constraint, 3D-PSCATV-CS adopts a 3-D Piecewise Smoothness Constraint combination based on Anisotropic Total Variation (3D-PSCATV) for the foreground to encode the spatiotemporal smoothness and temporal coherence [265]. TLISD models the illumination changes as noise variables via the k-support norm and generates a set of illumination-invariant representations as prior maps to distinguish moving foregrounds from illumination changes [119]. TF-TTV proposes a dynamic half thresholding low-rank tensor total variation (DHLRTTV) and a static half thresholding low-rank tensor total variation (SHLRTTV) algorithm according to dynamic and static background influence, respectively [37]. DHLRTTV divides the foreground into the dynamic background and the exact foreground. It adopts the 1/2 -norm regularization for diminishing dynamic background effect and the tensor total variation regularization for the foreground smooth. SHLRTTV, compared with DHLRTTV, ignores the dynamic background component. The augmented Lagrange multiplier with an alternating direction minimizing approach is finally proposed to solve the optimization problem.
B. Appearance-Based Models
The traditional motion-based models require consistent global illumination and rely on video registration [55], [56]. In addition, they are sensitive to irregular motions and texture changes in the physical world. On the other hand, several appearancebased deep learning MOD frameworks have emerged [4], [58], [116], [123]. They are divided into the following four categories, namely image-object-detection-based, RNN-based, visual-tracking-based, and optical-flow-based models, respectively.
1) Image-Object-Detection-Based Methods: The object detector or semantic segmentation method can be employed to MOD directly [266]. ML-SAR uses Faster-RCNN to detect object shadows in video SAR frames [57]. As shown in Fig. 21(a), WS-MOD trains the detector with the foreground masks, which generated as the binary pseudo labels by background subtraction and threshold segmentation method [268]. To eliminate the obvious false positives, LRP adopts a discrete histogram mixture model through a recursive learning algorithm to measure the object category possibility [266]. ML-SAR employs an improved density-based clustering method in consecutive frames to correlate object shadows with solid correlation [57]. For missing alarms, it presents Bi-LSTM to predict the lost locations based on the detection of contextual information.
To aggregate more detailed features, DeepFoveaNet proposes two encoder-decoder network modules inspired by the monocular vision of birds [38]. It contains a Peripheral-CNN for detecting contextual information in the scene and a Deepfovea-CNN for small moving foregrounds to simulate visual attention. DSFNet proposes a 2-D static stream with a feature fusion block to obtain the object details [116]. In object motion cues extraction, it presents a lightweight 3-D dynamic stream with three 3-D convolutional layers. The overall flow is shown in Fig. 21(b), where FFB represents the feature fusion module. 2D conv and 3D conv represent the 2-D and 3-D convolution blocks, respectively. These two stream features perform fusion through a progressive hierarchical feature fusion manner. ClusterNet combines the motion and appearance information through a convolutional network and obtains object locations with heatmap estimation [56].
2) RNN-Based Method: As shown in Fig. 22, ETE-MOD uses a deep convolutional encoder and decoder network to extract the semantic information of video frames [58]. It proposes an attention convLSTM to enhance the semantic features, which adds a soft attention mechanism after convLSTM. In addition, it adopts a spatial transformer network for enhancing the robustness of global and local motion, as well as a conditional random field layer for smoothing foreground boundaries.
3) Visual-Tracking-Based Methods: Trackers need accurate and robust object features to achieve correct results. Thus, the feature extraction in the tracker-based method is critical. Dogfight uses pixelwise and channelwise attention to distinguish object boundaries from the background [59]. As illustrated in Fig. 23(a), the pooling and attention block contains a spatial pyramid pooling and an attention module with pixelwise and channelwise. The channelwise attention is implemented by channelwise multiplication of the attention vector with convolutional feature maps. In comparison, the pixelwise attention performs pixelwise multiplication of pixel attention mask to give functional regions with high weights. To generate high-quality object proposals, UDOLO proposes an object occupancy map in Fig. 23(b), which is served as a selective attention mechanism, guiding the detector to focus on essential parts [39].
For obtaining object candidate regions, JP-DP-TBD, which is based on the dynamic-programming-based track-before-detect (DP-TBD) algorithm, uses both object position and radial velocity information in video SAR image and corresponding range-Doppler spectrum [4]. ES-TBD adopts an expanding and shrinking strategy, combining the particle filter and dynamic programming algorithms to obtain effective transition states for object position components [269]. It presents a region-partitioningbased track-before-detect algorithm to maintain known object trajectories and detect newborn objects.
4) Optical-Flow-Based Methods:
This traditional method mainly calculates object velocity between frames in MOD. To gain object candidate positions from adjacent frames, OF-VAS uses optical flow to obtain the object motion information and generates candidate objects by Otsu segmentation method [123]. The flowchart is shown in Fig. 24, and the Gabor filter combines the obtained results to receive a quaternion image. The final detections are achieved by the quaternion Fourier transform and phase spectrum reconstruction.
Optical flow can be used under unsupervised learning for MOD. ACM-MOD proposes an unsupervised adversarial contextual model consisting of a generator and an inpainter [270]. The generator produces an object mask through the image and its optical flow, while the inpainter attempts to inpaint back the optical flow, which is masked out by the generator. It is trained jointly in an adversarial manner to learn the complex relationship between foreground and background.
C. Datasets and Evaluation Metrics
1) Datasets: Some public RS datasets for moving object learning detection are listed in Table IV. The PESMOD dataset comes from small object drone videos on the Pexels website [271]. Its targets include vehicles and pedestrians, which challenge is the occlusion in complex environments. The Chang Guang Satellite Technology Company Ltd. (CGSTL) provides many free RSVs for scientific research. The Valencia dataset from the CGSTL is widely used in the MOD experimental verification [55], [120]. It covers different city-scale information with small moving objects. In addition, the MOD task of the VISO dataset includes a training set with 13 470 images, a validation set with 535 images, and a test set with 3725 images [35]. Its main challenges include complex backgrounds, illumination changes, and dense lanes.
2) Evaluation Metrics: There are multiple evaluation indicators for MOD, including precision, recall rate, F 1 score, precision-recall (PR) curve, average precision (AP), and mAP. They are defined as follows: a) Precision: It is expressed as the proportion of true positive (TP) in overall detections, which included TP and false positive (FP), namely Precision = TP TP + FP (14) where TP is the truly detected boxes for the correct coverage and FP represents the false detected boxes. Due to the small target pixels in RSV, TP also defines as the detection overlaps with the ground truth box [35]. b) Recall rate: It refers to the ratio between TP and all ground truth boxes, in which false negative (FN) represents the ground truth boxes missed by the detector, i.e., c) F 1 score: It is the harmonic mean of precision and recall rate, which is a traditional criterion for binary classification between interest objects and nonobjects, expressed as Here, the IoU value in the ith frame among a given video sequence is defined as where box i α and box i β represent the i α th ground truth and the i β th detected box area in the ith frame, respectively.
. i N and i M represent the number of detections and ground truths in the ith frame, respectively. and are the intersection and union of the two regions, respectively. |.| is the number of pixels occupied by the region. The cost tensor (CT) is composed of CMs with continuous K frames to reduce the computational burden, expressed as The final correlation index is obtained through the optimal associations between detections and ground truths, which uses the Hungarian algorithm in the spatiotemporal domain.
V. OBJECT TRANSFORMER TRACKING IN RSVS
RSV object tracking plays an indispensable role [82], [86], [90], [112], [125], [127]. It can provide a cost-effective processing method for motion analysis and object monitoring, especially for requiring on-site measurement with installation difficulty. The current research status of SOT and MOT will be discussed in this section.
A. Single-Object Tracking
It is mainly divided into two directions. One is the traditional CF tracker based on estimation, and the other is the deep-learning-based tracker. CF tracker fuses the hand-crafted or convolutional features of the tracked object based on the pure CF and estimates the object location through the Bayesian method. SOT is mainly divided into four steps.
1) Build a tracker model. For a video sequence, the object in the first frame is sent to the tracker for subsequent tracking. 2) Extract object candidate region at subsequent frame. The candidate region is obtained by an inference-based filter or a DNN-based model. 3) Achieve the accurate object position and mark it with a rectangular box. The position is inferred through the object information saved in the tracker. 4) Update the tracker. The object information at the current frame is sent into the tracker. If the sequence is terminated, the loop ends. Otherwise, continue to step 2. These models are classified in detail in Table V. It marks the key characteristics and the baseline model corresponding to the tracker. It also keeps the usage of transformer/attention in each tracker from template extraction, search region extraction, and correlation calculation. To understand each tracker more intuitively, the solved challenges of the tracker are recorded, which mainly include occlusion, similar objects, and complex scenes. Not only that, the commonly used RS tracking datasets and evaluation metrics are introduced later to enable completing a more comprehensive understanding.
1) CF-Based Trackers:
The CF tracker trains with positive and negative samples based on the object bounding box at first frame [5], [61], [62]. Its weights are updated in subsequent frames for preventing temporal degradation and increasing the tracker discriminative capability. The general structure is shown in Fig. 25, which the feature extraction part includes manual, DNN, or both of them. The categories of CF trackers are mainly divided into basic and deep-learning-based CF trackers. a) Basic CFs: CF is fast and real time without additional training, which is suitable for large-scale RSV object tracking. Some elements can be added to make results accurate before feeding input to the filter. SCT divides the tracking object into multiple cognitive units and sends these units into the attentional weight map calculation module for getting final input [60]. CFME combines KCF [272] with the motion estimation method for determining the object position and mitigating the boundary effects [62].
Optical flow, as an important tool for detecting object motion, plays an important role in SOT. MOFT obtains object position with the Lucas-Kanade optical flow method [273]. HKCF adopts the optical flow for detecting motion information, and the histogram of oriented gradient (HOG) for capturing object texture information [61]. For hyperspectral video, MHT decomposes the data into constitute spectral and corresponding abundances and then embeds them into CF [274]. For SAR video, JKCF uses the cell-averaged CFAR to extract the object shadow in the image and the energy in the corresponding range-Doppler spectra [5]. Besides, it adopts interframe correlation with trajectory matching to suppress false tracks. The final shadow and energy bounding boxes are both sent to the dual KCF.
For the object feature extraction designing, PAC proposes spatial and appearance selective attentions [40]. The former, which generates an object location response map through the weighted Boolean maps, is used to capture the object topological structure. The appearance selective attention pushes the distractors around the object to negative samples. During CF weight updating procession, WTIC employs the information compensation, which introduces the background information into CF to distinguish the tracking object from the corresponding background [275]. In addition, JKCF proposes a normalized interaction factor to update the learning rate [5]. STSD adds the spatial-temporal information constraints to the objective function, which makes the filter update conservatively when the appearance changes drastically [276].
The CF tracker can combine with other models in different ways to improve performance [47], [125], [277]. CFKF proposes a tracking confidence module to couple the CF tracker and the Kalman filter [277]. It evaluates the CF confidence through the average peak-to-correlation energy algorithm and passes the result to Kalman filter for trajectory correction. Du et al. [47] parallel KCF with three-frame difference for preventing the drift offset and obtain results by calculating the attraction value. MBLT proposes a motion estimation to predict the object position probability and a road segmentation method to constrain the object moving area [125]. These two results are finally masked to the CF result for generating the final bounding box.
Model postprocessing is particularly significant at performance improvement [5], [62], [275], [276], [278]. For solving the occlusion problem, CFME uses the filter response patch peak value to determine whether the object is occluded or occlusion ends [62]. If occluded, the motion estimation result is used as the object position. IMMCF considers the maximum response score and the average peak correlation energy [278]. If occluded, the interacting multiple model is used to predict the object position. To prevent the track drift, WTIC proposes tracking status monitoring indicators to evaluate tracking status [275]. JKCF presents a target localization interactive correction with the peak-to-sidelobe ratio (PSR) to prevent tracking drift and reinitializes the tracker while crashing unexpectedly [5]. STSD employs a multiscale patch-based contrast measure scheme to correct target position, preventing the shadow targets affected by clutter [276].
b) Deep-learning-based filters: Convolutional features are added to the tracker model for enriching the feature diversity. To emphasize the importance of different channel features, CGRCF proposes a channel attention module with the channel and graph regularization methods [279]. Likewise, A 3 DCF advocates an adaptive attribute-aware spatial attention mechanism with channel-specific regularization [73]. It identifies each channel discriminative information and mitigates the irrelevant information influence. For suppressing the distractors influence, JMMAC designs a multimodal fusion network with global and local networks, obtaining accurate response maps [70].
Cascading CF and DNN can achieve the robust tracking [68], [280]. ACFN adds a subset of CF trackers and designs an attention network composed of prediction and selection subnetworks, realizing the selection of trackers adaptively [280]. MMNet proposes a fine-grained perception module before CF [68]. It performs a self-attention mechanism on the shallow features to obtain more fine-grained correlation information.
2) Deep-Learning-Based Trackers: The deep learning trackers generally migrate pretraining classification models to the tracker and fine-tune the model weights at the tracking data to achieve effective object tracking [75], [127], [281], [282]. These trackers are divided into three major categories, namely CNN-based, RNN-based, and Siamese-based trackers. a) CNN-based models: As single-branch trackers, they mainly use MDNet [283] as a baseline and train a feature extractor as well as a video-specific classifier at the first frame for subsequent tracking. The general tracking process is shown in Fig. 26; the light dashed line indicates the model backpropagation.
To increase the object representation ability, TTS introduces a spatial mechanism, which applies max and average pooling operations to the original convolution features, making the tracker pay more attention to the object [284]. RT-MDNet+LV adds an attention regularization term to suppress the background and highlight the target region [72]. The regularization defines the weighted local variances of the convolution feature. TCTrack [281] designs an adaptive temporal transformer for refine the feature map. As shown in Fig. 27, the subscript t of Feature encode t represents the tth video frame. It uses the temporal information to enhance the spatial features. CRAM combines the appearance and optical flow motion features [285]. The final location prediction integrates these two response maps from the same separate regression network. CAT introduces a center and a corner regression module [74]. Besides, it proposes a lightweight attention module in the corner regression. The weighted features across this manner could pay more attention to the regions where benefited the corner regression. DACapT introduces the capsule network into the feature extraction to model feature similarity [286]. It adopts a group attention mechanism for the model pay attention to the object and a penalty attention module for providing discriminative attributes.
For different modality inputs, M 5 L integrates an attention fusion module that concatenates weighted modalities to obtain the final fused feature [69]. CBPNet designs a channel attention mechanism to make the model focus on significant regions [64]. As for the occlusion challenge in satellite videos, AD-OHNet uses the spatiotemporal context to calculate the object average moving direction and distance [287]. Besides, it adopts a deep reinforcement learning to make the tracker proceed along the original direction. And the object appearance model continues training with the previous positive and negative samples. b) RNN-based models: They employ the gating mechanism in LSTM to compute the information flow at the current time step and utilize different attention mechanisms for feature enhancement [75], [76]. HART, which imitates the human visual cortex structure, proposes a cascaded form of spatial and appearance attention before the features feeding into LSTM [75]. The appearance attention is paralleled by a ventral and a dorsal steam. The final input features are obtained by the Hadamard product across these two feature results. ARNN jointly trains with a bidirectional LSTM [76]. As shown in Fig. 28, the intraand interattention mechanism is formed with an interattention and an intraattention model, augmenting the object patch-level features.
c) Siamese-based models: These dual-branch architectures generally determine the current object response position by calculating the similarity between the template region feature in the first frame and the search area feature in the current frame [308]. The general process is shown in Fig. 29. Besides, it is worth noting that DualTFR achieves effective tracking with a pure transformer backbone network [309]. During the video frame preprocessing stage, DeepMAT proposes a dynamic target-aware attention module to obtain an accurate global search area [305]. CFD-SiamRPN++ integrates the clustering-based frame differencing method in the input blocks to enhance the discriminability of small objects [322]. It fuses the original block with a fine difference map generated by k-means clustering. In hyperspectral video processing, BRRF-Net proposes a band regrouping module, which divides HSI patches into groups of RGB-like image patches [48]. It quantifies each band by capturing nonlinear correlations between bands and then reorganizes them according to the importance degree. Similarly, SiamMRANN divides HSI patches into several threeband image patches and inputs them into the Siamese network in parallel [282]. H 3 Net divides RGB and hyperspectral video data into spatial and spectral branches and then concatenates the spatial and spectral features into the Siamese tracker [112]. It adopts an unsupervised learning framework to train these two data sequentially using the principle of cycle consistency.
The channel and/or spatial attention modules with different connection modes can be added in the feature extraction to enhance the tracker adaptability, such as cascade and parallel modes [303], [304], [320]. It achieves the sensitivity of the tracker to object discriminant features [127], [298]. CGACD designs a twofold correlation-guided attention module to obtain enhanced features [298]. It is based on channel and spatial attention mechanisms, which acts on search regions and template features, respectively. SiamMRANN proposes a multilevel residual attention module to focus on spatial and spectral aspects of local objects [282]. The loss function incorporates the tracking results of multilevel features to accurate object regression prediction. AiATrack introduces an attention-in-attention module after the dot product operation of the attention mechanism [312]. The proposed module is shown in Fig. 30, which can be used in self-attention or cross-attention blocks to suppress noise.
Transformer encoder-decoder can be used to aggregate template and search area features [42], [307]. As shown in Fig. 31, TrDiMP adopts the transformer architecture to achieve the enhancement of the object cues, where Mask represents the template feature mask [42]. In addition, pyramid features have great advantages in the model feature enhancement [310], [314], [316]. The multiscale features can be sent to the pooling attention mechanism, which is similar to Fig. 5(c) [314]. SiamTPN designs a transformer pyramid network block [316]. It uses the lateral cross-attention approach for cross-scale feature fusion.
Similarity calculation can be replaced by cross-attention operations [41], [301], [314], [321]. Among them, TransT contains a cross-feature augment module composed of multihead cross attention [41]. TT-ATOM designs a cascaded pixel-level cross attention and channel-level cross attention to realize interactive modeling across channels [314]. Different from the above pixellevel calculation, CSWinTT flattens the template and search area features into a window sequence [126]. It proposes a multiscale cyclic shifting window to generate a large number of samples, realizing window-level attention. SwinTrack designs a visionmotion integrated transformer, which fuses a motion token into the decoder to embed tracklet [78]. MixFormer adopts the multiple stacked asymmetric mixed attention modules with patch embedding, realizing the integration of feature extraction and correlation [311]. Transformer can input with other features after correlation calculation, such as saliency features or hierarchical features [310], [317], [318]. It enhances the ability of capturing global context information. The Gaussian mixture model (GMM) can obtain the object mask result to improve the tracking performance and prevent tracker drift [49], [66]. The result then is fused with the output from Siamese tracker to predict the object position, which makes full use of the tracking and detection capabilities. SRN-TFM presents a deep motion regression network formed with optical flow, which is a crucial complement of Siamese tracker [50]. In addition, an adaptive fusion strategy based on the PSR is adopted to combine the deep motion network with the tracker. And a trajectory fitting motion model is proposed to fit the object motion pattern for alleviating tracking drifts.
3) Datasets and Evaluation Metrics: RSV datasets provide a very important reference value in SOT development, further promoting the model research.
a) Datasets: The UAV123 dataset incorporates long-term aerial tracking sequences and protrudes camera viewpoint with bounding box aspect ratio changing [323]. Besides, the total number of sequences exceeds 110K frames. Some of the DTB70 video sequence are recorded from DJI Phantom 2 Vision+ drone on college campuses, others from YouTube [111]. It improves the variety of object appearance and scenes. The tracked [114]. It marks 840K bounding boxes. And the sequence length ranges from 83 to 2970 frames. In particular, only 50 sequences are used for SOT testing. As a hyperspectral video dataset, the WHU-Hi-H 3 dataset provides additional spectral information among the band range from 600 to 900 nm with 25 bands [112]. It designs nine scenes, which divided into 69 video sequences. The tracked objects include cars, rigid objects, people, and shadows.
The VISO dataset, which is taken by Jilin-1, contains some different traffic situations in real-world scenarios [35]. The tracked objects include airplanes, cars, ships, and trains. Twentyseven video sequences are used for SOT, which contains 3159 trajectories with a total of 1120K frames. The SatSOT dataset uses the data collected by Jilin-1, Skybox, and Carbonite-2 and contains 27 664 frames [113]. To reflect more complex background information, it does not set a uniform resolution. Besides, the number of video frames ranges from 120 to 750 frames. Objects include ships, cars, planes, and trains, whose sizes range from 21 to 780 605 pixels. The SV248S dataset utilizes six open-source satellite video datasets provided by CGSTL [7]. It constructs 248 video sequences. Each dataset selects approximately 40 tracked objects including ships, motor vehicles, and aircraft.
These datasets contain a rich set of abundant appearance and challenging attributes. All the video sequences are accurately labeled with tracking targets for tracker evaluations. The detailed information of these datasets is listed in Table VI. b) Evaluation metrics: Most trackers adopt the one-pass evaluation, that is, initializing the ground truth position of the first frame in a video sequence and reporting the average accuracy/success score [61], [127], [280], [286]. They follow the evaluation methodology of OTB across calculating the success and accuracy scores without any parameters [324], [325]. The specific indicators are as follows.
Precision plot: Given the center positions (β G 1 , β G 2 ) and (β T 1 , β T 2 ) of the ground truth and tracked boxes across each frame in the video sequence, we define the Euclidean distance between these two as the center location error (CLE) We calculate the percentage of frames, in which CLE is less than a given threshold for a specified video sequence. Then, an accuracy curve is drawn through different thresholds with corresponding frame percentages. The proportion of the area under curve (AUC) is the total accuracy score of the tracker. Success plot: Given the ground truth area box 1 and the tracked area box 2 of each frame in the video sequence, the IoU value IoU 1,2 can be calculated by (18). The enhanced IoU (EIoU) considers the location error and IoU comprehensively Here, δ 1 and δ 2 are the nonnegative weight coefficients; it is stipulated that δ 1 + δ 2 ≤ 1. NE represents the normalized Euclidean distance of the center positions between the ground truth and tracked boxes [7]. The percentage of frames in sequence is calculated through IoU/EIoU is less than a given threshold. The success curve drawing is the same as the accuracy curve. The total success score of the tracker can be obtained via the proportion of the AUC. The precision and success evaluation metrics show different types of tracking accuracy at all thresholds.
Enhanced normalized union score (ENUS): It is a highly compatible and accurate evaluation method that can evaluate different types of tracker boxes, such as tight polygon boxes, which is specifically written as, where σ 1 and σ 2 are the nonnegative weight coefficients, which satisfy the condition of σ 1 + σ 2 ≤ 1. U = max(1 − | Precision Precision 0 − 1| γ ) presents the product of Recall and Precision.
Precision 0 is determined according to the type of tracker box, and γ is a regularization factor [7]. c) Performance evaluation: The precision and success score comparisons of SOT methods on available RSV datasets are listed in Tables VII and VIII. DeepMAT [305] adopts
B. Multiple-Object Tracking
MOT methods associate the same objects across frames in a given sequence to generate the optimal motion trajectories with object identity [82], [86], [90]. The categories of MOT methods are listed in Table IX. It explains the key characteristics from detection hypotheses and detection-tracklet association. Same as SOT, the end of this subsection introduces some common tracking datasets and evaluation metrics to make the research complete.
1) Two-Stage Structures: Followed by the tracking-bydetection paradigm, these traditional methods are cast MOT as data association problems, in which detection hypotheses are associated into object trajectories [326]. The main steps are divided into two steps. 1) Preprocessing: Objects in a video sequence are detected by a pretrained image detector or background subtraction. It comprehensively describes objects using discriminative features, such as textures and structural features. 2) Multiframe data association: Target trajectories are assigned through the data association between all the targets in all frames. MOT is treated as a multiframe multiobject association problem. According to whether future frame information is required to process the current frame, these two-branch structures are divided into two branches, i.e., online and offline methods. The online method only uses the current frame and past frames to estimate the current object states, while the offline method uses the future frames and past frames as input to estimate object trajectories.
a) Online methods: It matches the current frame detections with the previous tracklets until the end of the video sequence. The overall process is shown in Fig. 32. We divide these methods into motion-based, appearance-based, and objectinteraction-based methods.
Motion-based models: In the object detection stage, GMPHD-SAR adopts the morphological operations and border tracking to extract object candidates from clutter-suppressed SAR video frames [327]. As shown in Fig. 33, SFMFMOT proposes an improved NMS module to combine FairMOT [89] with a slow-feature-based bounding box proposal extraction module for extracting object bounding boxes [82].
During the data association phase, the Kalman filter or other motion models are used to learn the trajectory features of different detections/pixels [82], [83]. They distinguish the moving object trajectories and fill in the missing detection parts. The prior information can be used to achieve tracking. GMPHD-SAR adopts the Gaussian mixture probability hypothesis density (GMPHD) filter for tracking under the assumption in which each target follows a linear Gaussian dynamic model [327]. With the shadow characteristic of moving targets and road information in SAR video frames, SDT-SAR adopts the pretrained CNN and filters to complete tracking [328]. Structural constraint event aggregation (SCEA) exploits the structural constraints to achieve data association [83]. It proposes an SCEA method, which fuses data association costs along with the assigned events, to estimate the optimal assignment between well-tracked objects and detections. Besides, a structural constraint object recovery (SCOR) method is presented to recover the missing objects between frames through the updated well-tracked objects and structural constraints.
Appearance-based models: These models adopt the trackingby-detection paradigm and focus on the object appearance feature extraction. In the video frame preprocessing stage, ER-MOT proposes an adaptive resolution optimization (ARO) method to [128]. It scales the image adaptively by applying the linear relationship between the gray value distribution (GVD) and the image size.
As for capturing the discriminative features between similar detections, ER-MOT adopts HOG, local binary patterns, and RGB histogram features of the detections [128]. TC-MOT proposes a Siamese-based appearance model [79]. The overall tracking process is shown in Fig. 34; HC and LC mean high confidence and low confidence, respectively. The tracker combines the online transfer learning (OTL) to fine-tune the model parameters, making it suitable for specific tracking sequences. HMAR proposes a human mesh and appearance restoration Fig. 34. Appearance-based online method [79]. method to extract 3-D appearance, pose, and location information of detections [329]. Transformer is then presented to propagate the spatiotemporal information for learning associations across frames. IQHAT designs a target identification module to obtain the identity assignment probabilities of detections, and a local target quantification (LTQ) module to obtain the density map [43]. An identity-quantity harmony (IQH) module is proposed to jointly optimize the two modules.
In the trajectory inference stage, the Hungarian algorithm and the Kalman filter can be employed to generate the final trajectories [43], [329], [330]. ER-MOT adopts the greedy bipartite graph technique to correlate the previous tracklets with the current detections [128]. It proposes a trajectory reliability assessment metric to eliminate incorrect samples, which mainly contains the affinity between tracklets and detections. TC-MOT proposes a confidence-based data association method, which defines a tracklet confidence [79]. The tracklets with high confidence are associated locally with the current frame detections through the Hungarian algorithm, while the low confidence tracklets are associated globally with detections or other tracklets later.
The instance segmentation method can be adopted for small proportion of object extraction, which is a conventional measure in RSV [330], [331], [332]. It enhances the appearance representation of detections and brings a reference to RSV tracking. ODTS constructs a foreground GMM and a universal background GMM for each object to compute corresponding confidence maps [331]. It adopts Lagrangian dual decomposition to combine the structured tracker with video segmentation method. Inspired by PointNet [333], PointTrack series set each instance as a 2-D point cloud and other region as environment point cloud [330], [332]. The random sampled point cloud data combine with multiple data patterns composed of offset, original RGB color, and categories. Moreover, a point weighting layer is introduced into the foreground for summarizing the instance features. The final instance features are obtained with the foreground, environment, and position embeddings. PointTrackV2 adds the focal loss to the instance segmentation for settling the pixel-level class imbalance problem [332].
Object interaction-based models: To learn the object feature and the relative position information between objects, the interaction models use the interaction characteristics between the tracked object and its adjacent objects, which combines the object motion and appearance information to achieve better trajectory predictions [80], [81], [334]. In object appearance and motion model designing stage, IMM-MPT computes a 4-D color histogram to detections in the color space for incorporating the spatial information into the appearance model [81]. The processing flow is shown in Fig. 35(a); PCHC means pedestrian color histogram computation. Besides, it proposes an IMM formed with the Kalman filter in Fig. 35(b), including the stationary model, the constant velocity model, and the constant acceleration/deceleration model. This tracker represents the data association as a weighted bipartite graph problem and uses the Munkres' algorithm to give the best assignments.
The tracking process could be described as an optimization problem [6], [334]. BQP-MOT proposes a binary quadratic programming to find each object position in the current frame, mainly constrained by object individual information and context cues [334]. It presents a modified Frank-Wolfe algorithm with SWAP steps for speeding up the optimization to directly solve the objective function. JMDT-EM employs the gating technique to eliminate infeasible association hypotheses for the data association module [6]. Based on the expectation maximization iterative optimization method, the tracker optimizes the optimization problems with alternately calculating the complete likelihood function and the tracking states. Particularly, MLMRF models the data association as a reidentification (Re-ID) problem [80]. It combines LSTM with the local maximal occurrence Re-ID model [335] to build an appearance model and uses the Kalman filter to model motion prediction. Besides, a label cost term is adopted to reidentify the detections as existing objects and a fast α-extension algorithm to solve the model optimization problem.
b) Offline methods: The overall process is shown in Fig. 36. It obtains the detections of the entire video sequence and then gains the final trajectories through performing global data association. The Kalman filter is always used to achieve global correlation [88], [117], [336]. The approximate solution has been proposed in the global associative optimization model to achieve an effective balance between memory and performance [87], [129], [337], [338]. The current offline models are mainly divided into graph-based, network-flow-based, and iterative-approximation-based methods.
Graph-based models: They regard each detection as a node and the relationship between the detections across frame as the edge weight on the graph structure. The data association graph is then constructed by edges with high similarity [84], [85], [339]. IT-MOT exploits the interaction between nonassociable tracklets to improve tracker performance [84]. The objective function is defined as a unary and pairwise term. The unary term measures the affinity between associable tracklets by integrating appearance, motion, and temporal consistency. While the pairwise term proposes close interaction (CI) and distant interaction (DI) term. The quadratic pseudo-Boolean optimization (QPBO) is then used to approximate the optimal solution. GMI-MOT regards object localization as a Markov inference problem via a graphical model, which designs the appearance and motion models as node potentials [339]. Besides, the edge potential is used to smooth the distance and angle of objects connected with the same edge. CCC regards MOT as a correlation co-clustering problem [85]. It combines the top-down MOT with the bottom-up motion segmentation and defines them in graph structure. The tracker centers on the high-level concept of semantic objects and treats the combination of bounding boxes with the same object as a correlation clustering problem. The motion segmentation centers on the low-level concept of grouping pixels and treats the grouping of point trajectories as a correlation clustering problem in terms of pairwise potentials.
Network-flow-based models: The data association optimization is treated as a multidimensional assignment problem, that is, a one-to-one data mapping should be found between multiple sets [129]. Under exploiting pairwise similarity, they use linear programming, minimum energy functions, or greedy algorithms to solve data association problems [44], [337]. In the object detection design stage, JTA combines target shadow and echo energy information [86]. A cell-averaged CFAR and a modified OS-CFAR are proposed to detect target shadows in imagery and energy information in the range-Doppler spectrum domain, respectively. As shown in Fig. 37, TBC creates the counting constraints by a spatiotemporal sliding window on the density map for object detection [337]. It integrates object appearance and motion information at the flow constraints to incorporate video context information. Besides, it designs a mixed-integer linear programming (MILP) problem, combining the objectcount constraint with flow constraints.
In the data association stage, JTA estimates the object state vector with different data association methods for different mode trajectories [86]. It also introduces the M/N-logic-based method to associate the two modules' information. HDA divides the data association into detection association and tracklet association [44]. It estimates the detection affinity by employing the object pose and appearance features in the detection associations. A Siamese tracklet affinity network (STAN) is proposed with the tracklet affinity to generate the final trajectory. It models the long-term object action dependence by LSTM and introduces a coherency-aware Siamese predictor to bidirectionally generate the unseen trajectory states for two tracklets.
Iterative-approximation-based models: Iteratively approximating the interframe assignment is adopted to solve the global optimal solution, which correlates across the video sequence to construct trajectories. DCM-MOT generates low-level tracklets from detections through KLT [326]. As shown in Fig. 38, CNL and ML in dynamic clustering block means cannot link and must link constraints, respectively. The tracker adopts the Dirichlet process mixture model (DPMM) [347] to dynamically cluster tracklets and proposes two appearance representation models for rigid and nonrigid objects, namely superpixel model (DPMM-SP) and deformable part model ((DPM) 2 ). TLMHT defines five categories of tracklet hypotheses with dummy detections and forms track-level associations by using the similarity between any two different detections within five frames [338]. An iterative maximum weighted independent set (MWIS) algorithm is proposed to solve the multiple-hypothesis tracking problem through a hypothesis category transfer model. Besides, a polynomial-time approximation (PTA) algorithm is introduced in the model optimization process, which converts the MWIS problem in a hypothetical subset into a bipartite graph matching problem.
The tensor approximation can exploited to solve the data association optimization [87], [129]. R1TA-MOT reshapes the optimization as a rank-1 tensor approximation problem and proposes a tensor power iterative method [87]. It captures higher Fig. 39. One-stage structure [91]. order motion information by the assignment constraints inherited from the multidimensional assignment formulation. Dual-L 1 -MOT proposes a dual L 1 -normalized context/hypercontextaware tensor power iterative optimization to obtain the detection correlation [129]. The final global trajectories are produced through the serial expansion of all batch associations.
2) One-Shot Structures: An end-to-end model is built to generate detections and corresponding trajectories, which mainly combines object detection methods with Re-ID or motion information to achieve tracklet association [348], [349], [350].
The spatiotemporal context information reflects the morphological changes of objects in different periods, which is particularly important for the subsequent trajectory inference [90], [91]. As shown in Fig. 39, PCAN distills a set of prototypes by clustering the spatiotemporal memory with a GMM [91]. It contains a frame-level and instance-level prototype cross-attention module to achieve a generalizable yet compact feature representation. TGarM regards MOT as a multitask learning method based on graph spatiotemporal reasoning [90]. It calculates the edge weight between features through attention mechanism and uses the graph convolution network reasoning to obtain the current message. The current feature state is obtained using a readout function through the previous feature state and the current message.
To enhance task-related feature representation, CSTrack proposes a reciprocal network with self-attention mechanism [92]. This network constructs the self-relation and cross-relation weight maps to facilitate object detection. DHIAN adopts a Re-ID branch to extract appearance features and encodes detections through the historical locations of tracklets with corresponding time stamps [345]. GRN-MOT proposes two subnetworks to extract object state attributes, namely global response generation (GRG) and motion displacement regression (MDR) subnetwork [45]. A logical inference methodology is proposed to estimate object response values using the object states from past frames, and the regression subnetwork calculates the pixelwise offset.
During tracklet generation representations across detections, DHIAN proposes a GNN-based human interaction model to utilize the relative position information between tracked objects and its surrounding objects [345]. DAN performs data association by pairing permutation to calculate the affinity matrix between the current frame object features and the previously stored previous features and then generates reliable [344]. GRN-MOT proposes two matching approaches, namely target-independent and target-dependent matching [45]. The former uses a greedy matching algorithm based on center point distance to link objects, while the target-dependent matching minimizes the global CM to optimize assignment.
MOT can be presented with two homogeneous branches for obtaining the current frame trajectories, namely detection and Re-ID branch [51], [89], [90], [92]. The overall process of Fair-MOT is shown in Fig. 40 [89]. FairMOT-SAR applies it for moving shadow tracking in SAR video with good performance [51]. The DLA network means deep layer aggregation network for extracting video frame features. For the Re-ID module, CSTrack proposes a scale-aware attention network (SAAN) with a spatial attention module and a channel attention module, enhancing the multiscale detection features and suppressing the background noise [92]. The branch imbalance problem has been solved with a bunch of detailed training schemes [89], [90]. TGarM proposes a multitask adversarial gradient learning strategy to make the loss gradients have similar statistical distribution [90]. SiaBi-GRU proposes a tracklet cleaving and reconnection network for trajectory postprocessing to cut impure tracklets and reconnect the same tracklets [346].
Unlike supervised models described above, visual-spatial proposes a cross-input consistency self-supervised learning method [46]. It computes detections in an unlabeled video corpus during preprocessing and proposes two input-hiding schemes to obtain learning signals, named visual-spatial and occlusionbased hiding. A tracker is applied independently on the two input variations to derive tracked output. The consistent output is produced by backpropagating the similarity of these two results.
3) Datasets and Evaluation Metrics: Some RSV tracking datasets for multiple objects are listed in this subsection, covering different challenges in various data types. In addition, commonly evaluation metrics are described to measure the performance of multiobject trackers more comprehensively. The characteristics and shortcomings of the trackers can be determined in time for subsequent optimization. a) Datasets: Table X lists the characteristics of each dataset in terms of sequence number and total frames. The VisDrone2018 dataset is a large-scale drone video dataset for Authorized licensed use limited to the terms of the applicable license agreement with IEEE. Restrictions apply. multiple vision tasks, which filmed in 14 different Chinese cities [351]. In the MOT task, this dataset divides whether the tracker needs detections in a single frame into two tracklet tasks. It contains 56 sequences for training, 7 for validation, and 16 for testing. Target categories include pedestrian, car, van, bus, and truck. The MOT task in the VisDrone2019 dataset merges the two tracklet tasks of VisDrone2018-MOT [352]. In the MOT task of the UAVDT dataset, 30 sequences are used for training and 20 sequences are used for testing [114]. The train and test sequences select different shooting angles to prevent the tracker overfitting. The VISO dataset uses the last seven (658 tracklets and 89 509 bounding boxes) of 47 video sequences as tests, realizing the MOT task design [35].
b) Evaluation metrics: Evaluating MOT methods incorporate multiple metrics to comprehensively evaluate the tracker performance from different perspectives. The evaluation metrics are synthesized from some MOT datasets and methods, aiming to gain a comprehensive grasp of evaluation protocol [34], [35], [114], [351], [352].
where Location MOT can be expressed as the total values of IoU between the true positives and the corresponding ground truths or the total values of the Euclidean distance between the two center positions. 8) Identification F 1 score (IDF 1 ): It expresses the ratio of correctly identified detections over the average number of ground truths and computed detections, namely MT is recorded as the number of targets with a covered percentage more than 80%, while ML is less than 20%.
VI. POTENTIALS IN RSVS
Transformer has achieved beneficial results in both RS image and video fields [12], [13], [14], [15], [16], [17]. Singleobject transformer tracking is prominent with improved performance [41], [42], [301], [307], [310]. There are still some potentials in RSV moving object detection and tracking tasks, such as the feature extraction of sparse foregrounds, the influence of complex background noise, and the utilization of spatiotemporal context information. The future developments of transformers in RSV moving object detection and tracking will be delved into this section.
A. Transformer in MOD
MOD contains the traditional background and the deep learning method [35], [36], [37], [38], [39]. The former uses the background spatiotemporal correlation with the motion cues of objects. It is sensitive to texture changes and objects irregular motion. The blurred RS scenario brings challenges to model performance. The deep learning method relies on object appearance and needs to balance model performance and speed. In this subsection, we will describe the development prospects of transformers from the perspective of motion-based and appearance-based models.
Background subtraction divides a video sequence into foreground, background, and noise, which relies on interframe registration. The self-attention mechanism in transformer can perform a more accurate spatial mapping between moving and fixed images, which provides a sufficient guarantee for the interframe registration [353]. The sparse background method models background with the rank minimization, foreground with structured sparse [257], [261], [262], [264], and adopts the motion information to ensure the continuity of moving targets [53], [121], [122]. Multihead attention can induce the model to interactively learn the context features, which can be used to ensure continuous detection in the irregular motion case [94], [96], [97], [124], [235]. Multilevel attention feature aggregation and hybrid attention modules can improve the feature representation of foreground objects and suppress noise interference [143], [181], [186].
2) Appearance-Based Models: Image-object-detectionbased methods focus on feature aggregation and motion information fusion. In feature aggregation, attention mechanisms or feature fusion blocks focus on moving objects [38], [116]. Nowadays, transformer variants focus on local feature areas to improve the feature expressive ability of the local regions [183], [198]. For example, designing local attention mechanisms, stacking attention paradigms, or combining convolution and attention [179], [215], [220], [227], [246]. The interframe information fusion has been adopted through convolutional networks [56], [116]. Transformer models the similarity of interframe information. It also has the advantage of global modeling at learning the object dynamics in the video scene [96], [124], [137], [241]. The attention mechanism has been used in RNN-based and tracking-based models to extract and enhance object semantic features effectively [39], [58], [59]. Various attention mechanisms and transformer variants have been used in RS tasks to enhance features [2], [16], [17], [21], [168], [208]. They can assist the network in mining deeper feature information and extracting high-quality detections.
B. Transformer in Object Tracking
RSV object tracking aims to track the objects marked by the first frame in subsequent frames. The development prospects of transformers concerning SOT and MOT methods are discussed in the following subsection.
2) Multiple-Object Tracking: The accurate detections and modal combination features are significant for the tracker robustness. Transformers perform well in RS image tasks, especially detection and segmentation [11], [14], [22], [108]. They can assist or replace detectors to improve MOT model accuracy.
In the data association stage, the Hungarian algorithm and the Kalman filter are widely used. Transformer, as a global context model, has the development prospect of calculating the optimal detection association. The two-stage and single-shot models will be discussed in this subsection. a) Two-stage models: In the object detection process, online methods mainly rely on the object detector to achieve bounding box extraction. They use the spatiotemporal aggregation of object features and instance segmentation methods to enhance the appearance representation [329], [330], [331], [332]. Besides, distinguishing similar objects in different ways lays a solid foundation for subsequent detection of trajectory-level associations [43], [79], [128]. To detect more accurate results and reduce the incorrect data association impaction, RS transformers can extract finer object detection boxes and suppress the background noise [14], [15], [105].
VII. TEN OPEN CHALLENGES WITH TRANSFORMER IN RSV
RS transformer development has gradually grown while facing some optimization, interpretability, efficiency, and versatility challenges. Fig. 41 depicts the open problems faced by transformer and RSV. It includes, but is not limited to, transformer interpretability, brain-inspired and physics-informed transformer, transformer with causal inference and few-shot learning, efficient and multimodal transformer, multiobjective optimization with transformer, multiscale geometric network with transformer, and transformer in RS tasks. They are introduced in detail as follows.
A. Transformer Model Interpretability
It is found that the attention heads in small transformers are interpretable, which has been shown to learn the context information, while the model interpretability becomes more complicated for multiple layers with high isolation costs [146], [147]. From an intuitive understanding, transformers can pay attention to more input information in a certain way and perform an approximate global analysis. Some brain studies explain how the brain works by adding perturbations to parts of the brain [359]. We can perturb part of the model to analyze the inner mechanism of transformer. In addition, the input in the attention head can interact differently to generate more complex behaviors with better performance [17], [151], [158], [206], [222], [223]. Therefore, it is necessary to explore the internal structures fundamentally for using the advantages of transformers more proficiently in RSV, such as explaining each module of transformer from different perspectives, comprehending transformer through the feature visualization, the influence function, or the saliency map.
B. Brain-Inspired Transformer
Neural networks treat functions as computational properties and train to learn external representations for adapting tasks [360], while they are still largely dependent on the input without the ability to understand the deep logical semantics, such as the object concept and the scene structural and causal understanding. It leads to poor generalization ability, making the networks enter a certain bottleneck period [361]. And how to successfully adopt biological plausibility for improving network performance has become an unavoidable topic. It can be enhanced through the study of brain anatomy and physiology [362].
Biologically realistic neural network architectures perform best at representing fundamental dynamics [360]. Transformer can replicate the spatial representation of the hippocampal structure accurately after being equipped with a recursive positional encoding [32], [33]. It suggests that transformer is similar to the human hippocampus without the aid of any biological knowledge. Moreover, transformers can significantly improve the ability of neural networks to mimic the various calculations performed by grid cells and other parts of the brain. This has laid the biological foundations for transformer studies, making it more valuable for research. The current networks need to provide more information for neural representation and brain cognition. We could continue drawing inspiration from the brain cognition and neuroscience field [363].
C. Physics-Informed Transformer
Embedding physics informed into some fields is already a popular trend [364], [365]. Quantum evolutionary algorithms, inspired by quantum theory in physics informed, have been widely used in multiobjective evolution algorithms [366], [367], [368], [369], [370], [371], [372]. Training neural network is a nonconvex optimization problem through the interaction and evolution of millions of parameter weights. It can be analogous to a large number of physical molecule interaction processes. Physics-inspired models have been proposed one after another [373], [374], [375].
Physics-informed transformers have been developed rapidly. Wave-MLP improves the token representation for distinguishing the semantic information in different images according to the wave-particle duality in quantum mechanics [375]. It divides each token into the wave function form of phase and amplitude, which dynamically aggregates tokens according to semantic information. The physics-informed models could be better at processing high-dimensional data with slow speeding in solution, which still needs to be researched. More physical information should be integrated into transformer by learning the data distribution laws to perform better in a shorter training time.
D. Integration of Causal Inference With Transformer
Causal inference is divided into three stages: association, intervention, and counterfactuals [376]. It estimates causal relationships through observational data, which can ensure that the results are correct and unbiased. Besides, it has great potential in exploring the influence of attributes on model prediction labels with promoting the development of deep learning models [374], [377].
For visual transformer research, pursuing accuracy and computational complexity is required. Most methods model the correlation between features, resulting in limited causal reasoning ability. Therefore, developing transformer with causal reasoning capabilities helps to explore the underlying mechanism in the model with interpretability. It will realize the general model transformation. Knowledge graphs for causal inference are built based on transformer, which provides logical evidence for the final prediction [378], [379], [380]. How to help transformer improve architecture performance is still an open problem. Causal intervention could be added to transformer for dealing with spurious correlations.
E. Efficient Transformer
The high performance with a low-cost strategy can improve transformer effectiveness and computational efficiency. At the same time, the energy and efficiency are usually related. Determining the balance between them is a meaningful research topic in the future. We will discuss from lightweight network and network architecture search for deploying efficient transformers.
With the performance of transformer in various tasks, the practical transformer has been designed through NAS [394], [395], [396], [397]. The current problem lies in the interpretability of NAS. In addition, model designs are limited to the existing structure design experience. How to find innovative elements from the search space to eliminate parameter optimization and the manual configuration of all the parameters are challenges in the future.
Compared with lightweight models based on neural networks, these transformers have similar or even higher accuracy [219], [405], [406], [407], [408]. There is still a development room for parameters and floating point operations. Balancing speed with accuracy and achieving better results on resource-constrained devices, like mobile devices, are still essential directions for future research [409].
F. Multiobjective Optimization With Transformer
In the real world, we often encounter problems where two or more conflicting objectives need to be optimized simultaneously. A set of constraint conditions must be satisfied, such as the receiver operator characteristic convex hull maximization in machine learning [410], [411], [412], [413], [414]. All these problems are called multiobjective optimization problems, which have been used widely in many fields [415], [416], [417], [418], [419]. Several evolutionary algorithms have been proposed to solve multiobjective optimization problems [415], [420], [421], [422], [423]. Their performances still need to be improved when applied to the optimization with transformer containing a super multiobjective. The iterative optimization algorithm further increases the model computational complexity.
Many real-world industrial applications and scientific researches present a time-dependent feature, including transformers [93], [101], [237], [424], [425], [426]. The dynamic multiobjective optimization problem has been paid increasing attention. It is characterized that the objective function with constraint and the associated parameters change over time [427], [428], [429]. The current difficulties are how to rapidly converge to the new true Pareto-optimal front and find a widely distributed set of Pareto-optimal solutions, while the transformer environment changes.
G. Multiscale Geometrical Neural Networks With Transformer
The wavelet scattering network uses the wavelet filter, which is a feature extraction network highly similar to the CNN between traditional image recognition and deep learning [430], [431]. This network is theoretically supported by rigorous mathematics and signal processing fields. It performs well under few-shot learning, which ensures translation invariance and deformation stability. Multiscale geometrical neural network (MGNN), which is based on the development of wavelet scattering network, has rotation and directionality with self-adaptive ability [432], [433].
Many methods combine neural networks with multiscale geometric analysis, mainly divided into two types. One is to use the transformation method at the multiscale geometric analysis tool in the feature space to achieve feature extraction and then send the extracted feature vector to the neural network for processing [433], [434], [435], [436], [437], [438]. The other is to use the parallel MGNN with the direction base directly [8], [439]. In the future, we can combine transformers to develop multiscale geometric analysis tools and construct parallel MGNNs with directionality. Choosing appropriate MGNNs for different tasks is also a future research direction. The computation of spatiotemporal information processing in the video field is relatively large. Combining transformers with MGNN to achieve a fast and practical model is also an important research topic.
H. Few-Shot Learning With Transformer Based on Knowledge-and Data-Driven Models
Inspired by the human visual system, few-shot learning designs a model with solid generalization ability from fewer training samples [440], [441], [442], [443], [444]. It solves problems like obtaining few training data. Transformer-based few-shot learning methods have been proposed one after another [445], [446], [447]. For example, HCTransformer explores the scheme of ViT in few-shot learning tasks [445]. It adopts a hierarchically cascaded transformer with a knowledge distillation framework and designs an attribute surrogate supervised method to learn information in labeled data.
Knowledge-and data-driven models need to be used to make it with logical reasoning and learning data rules. There are still some challenges to solve. In terms of the knowledge-driven model, how to quickly master a large amount of human commonsense knowledge and let the model learn automatically are challenges. For example, how to face an environment with ambiguous conditions. At the data level, the small amount of data, the image with low resolution, and the complex target relationships in the image cannot guarantee the model with a good learning effect. Moreover, how to balance the training time and performance of the model and achieve commercial accuracy with transformer-based few-shot learning are also significant topics for future research.
I. Multimodal Transformer
Multimodal transformer receives a variety of unique input information with different characteristics and generates additional modal data, which provides the possibility to realize more complex intelligent tasks [354], [448], [449], [450], [451]. It realizes the perception and interaction between modalities through the mutual fusion of information, which indicates that transformer has the potential to build a general intelligent agent. Xu et al. [448] describe the challenges of multimodal transformers with high research inspiration, including modal fusion, region-level alignment, and versatility.
For different modal tasks, it is necessary to design a specific learning strategy for the study due to the massive gap between the learning tasks, which leads to insufficient model fusion. Multimodal transformer is limited to the imitation of the brain apparent ability without the human cognitive research, leading to data fitting. A general multimodal transformer will lead to a more complex model parameter design. The tradeoff between model generality and computational cost will become a significant challenge in the future.
J. Transformer in RS Tasks
In this subsection, we focus on RS change and anomaly detection, object detection, and tracking. They play critical roles in detecting and preventing nonagricultural events, air defense, and surveillance. 1) Transformer for Change and Anomaly Detection: Compared with hand-crafted methods, CNN-based RS change detection methods can robustly model some complex change types [202], [204], [206], [452], [453], [454], [455], [456], [457], [458], [459]. Transformer shows excellent potential for change detection tasks with some challenges [3], [16], [21], [110], [207], for example, how to overcome input images with different resolutions, how to eliminate data dependence in diverse scenarios such as class imbalance, and how to capture more semantic information and fully use spatiotemporal context without increasing the model parameters. In addition, the research on environments, such as the open world, will improve the model flexibility and stability. Using the bottom and high-level features with transformers to generate more discriminative information and improve the robustness of pseudo-change information is also an important research topic in the future.
RS image anomaly detection aims to find strange objects or pixels, such as trees, aircraft, or rare minerals, without prior knowledge of abnormal samples [9], [460], [461], [462], [463], [464], [465], [466], [467], [468], [469]. It is a future research topic to combine models with transformers to make a robust feature extraction capability, thereby suppressing the influence of complex pseudo changes. Anomaly detection shows huge performance differences in scenarios. The model versatility has become an essential research direction. Besides, the data potential value needs to be mined for designing robust methods not attacked by deception algorithms [470]. On the other hand, transformer-based video anomaly detection methods have developed rapidly, and how to make the model select anomaly video segments adaptively is a research direction [100], [471], [472].
2) Transformer for Object Detection and Tracking: RSV with low spatial resolution, complex background, and small object sizes makes the intraframe and interframe information important [56]. It is a promising research direction about combining transformers, which capture global context information. The local redundancy of video data introduces a lot of repeated calculations, which can be solved with transformer for capturing long-distance dependencies [145]. Real-time performance is essential for military activities and urban monitoring, which is also a problem to be dealt in the future [34].
As a particular case of semantic video segmentation, MOD focuses on segmenting the foreground objects. The frame difference and background subtraction, which rely on motion information, are sensitive to irregular motion and texture changes [116]. Handling complex and rapidly naturally changing RS scenes are more challenging [58]. The appearance-based neural networks need more semantic distinction for motion artifacts, which means rich spatiotemporal semantic information is crucial [56].
The single-object transformer tracker has a good development in RSV [41], [42]. And some challenges have still existed in MOT. Online methods suffer from model drifts, irregular motions, similar appearances and occlusions, making them impossible to recover correct associations from early errors [43], [79], [80], [82], [83]. Offline methods adopt different local and global optimization based on accurate detections, which brings higher computational costs [44], [84], [85], [86], [87], [88]. Natural video object tracking can be migrated to RSV without taking the characteristics of RS data [90]. It may not take advantage of trackers, resulting in many false alarms or losing targets [82].
VIII. CONCLUSION
This article summarizes and looks forward to transformer in RSV moving object detection and tracking. We have a deeper understanding of RS transformers. It comprises the constraints of input mapping, the range of receptive field, the approximation and combinations of attention modules, and the efficient model construction with low redundancy and high inference speed. Besides, RSV moving object detection and tracking methods have been summarized with their characteristics and limitations. It also introduces the corresponding RSV datasets and evaluation indicators to promote detection and tracking research. The sequence nature of transformer drives its development into video field, which provides a good reference significance for RSV interpretation. In future research, the potential of transformer will drive to conduct extensive research in RSV detection and tracking. Different RS datasets with corresponding evaluation indicators will also promote the realization of more robust moving object detection and tracking. | 25,706.8 | 2023-01-01T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Computer Science"
] |
Isotropic uncharged model with compactness and stable con gurations
In present work, we have studied a new stellar distribution model with spherically symmetric matter and an uncharged isotropic distribution in general relativity. In this model, we have considered a particular metric potential. The model is capable to represents some known compact stars like Her X-1,4U 1538-52 and SAX J1808.4-3658.The model satisfy the energy condition and hydrostatic equilibrium equation, i.e., the modified Tolman-Oppenheimer-Volkoff (TOV) equation for uncharged matter. In addition to this, we also present the velocity of sound, surface redshift, and pressure density ratio. The physical quantities such as pressure, density, redshift, etc., are compared with graphical representations that are important from theoretical and astrophysical scale.
Introduction
An analysis of the solution of Einstein field equation shows that the exact solution plays an important role in development of many areas of gravitational field such as black hole solution,solar system test,gravitational collapse and so on. Generally in astrophysics compact stars formed due to gradual gravitational collapse are considered to fall the three categories, white dwarfs, neutron stars and black holes. This classification is based on the internal structure and composition of stars, where the formers contain matter in one of the densest forms found in the universe. According the strange matter hypothesis strange quark matter could be more stable than nuclear matter and thus neutrons star should largely be composed of pure quark matter. Possible observational signatures associated with theoretically proposed states of matter inside the compact stars therefore have remained an active research area in astrophysics , different types of mathematical modeling of such compact objects being considered.The singularity free interior solutions of compact object have an important consequences in relativistic astrophysics.The study of high density object like neutron stars,quark stars and white dwarfs,form their microscopic composition and properties of dense matter is one of the most fundamental problem in modern astrophysics.
In general, it is important to measure the mass and radius (2015,2007) of compact stars which depends on the equation of state (2004,2009,2010,2012).The motivation to undertake such a task, because the interior structure of such compact stars can vary with mass.On other hand, Buchdahl (1959) proposed a method on the mass radius ratio of relativistic fluid spheres which is an important contribution in order to study the stability of the fluid spheres.Thus the motivation of this study is to prediction of mass and radius of compact stars.The mass and radius of compact stars such as Her X-1,4U 1538-52 and SAX J1808.4-3658 has been analyzed by Gangopadhyay et al (2013a).
The equation of state(EOS) is an important features to describe a self gravitating fluid when it comes to solving the field equations. Ivanov (2002) has showed that the analytical solutions in the static, spherically symmetric uncharged case of perfect fluid with linear EOS is an extremely difficult problem. Sharma and Maharaj (2007) have demonstrated this complexity in the case of a static, spherically symmetric uncharged anisotropic fluid.
In the resent model we choose Buchdahl metric and solve the system of field equations and obtained a linear EOS.
The algorithm for an anisotropic uncharged fluid has been done by Lake and Herrera (2003, 2008.In this work Lake (2003Lake ( , 2004 has considered an algorithm and choose a single monotonic function which is generates a static spherically symmetric perfect fluids solutions of Einstein's equation.Against the above studies we have considered an uncharged isotropic fluid distribution in the context of the formation of the compact stars and find a new solution in section 2, section 3 consists of physical conditions for well behaved solutions.In section 4 the matching condition of interior metric to an exterior Reissner-Nordstrom line element and determine the constant coefficient. Stability analysis of compact objects and for better illustration of our result, the relevant physical quantities are presented by table and figure in section 5.Finally in section 6 we have drawn contains about present model.
Field equations for Uncharged Fluid Sphere in Schwarzschild Coordinates
Let us consider the spherically symmetric metric in Schwarzschild Coordinates ds 2 = −e λ(r) dr 2 − r 2 (dθ 2 + sin 2 θdφ 2 ) + e ν(r) dt 2 (1) where λ(r) and ν(r) are the functions of r only.The Einstein equation for a perfect fluid distribution is given by where κ = 8πG c 4 ,with G and c are the gravitational constant and speed of light in vacuum respectively. Here ρ and p denote matter density and fluid pressure respectively. The ν i is the time-like 4-velocity vector such that In view of the metric (1) the Einstein field equations are given by where prime ( ) denotes the differentiation with respect to r. Now we consider a well known form of metric potential,which was proposed by Buchdahl (1959) of the form as where K and C are arbitrary constant.The metric function(6) is regular and non-singular at the center of the star which satisfies the primary physical requirements for a realistic star. Now using (6),the equations (3)-(5) reduce to the following form where e ν = y 2 . Now to solve the equation (9) we introduce the new variables define by and y(X) = (X 2 − 1) 1/4 Ψ(X) Using the equations (10) and (11), the equation(9) reduce to the following form of second order differential equation where In order to solve equation (12) more easily if we set K = 7 4 , then the equation (12) takes the form So the solution of (14) is given as where A 1 and A 2 are arbitrary constants of integration. Now put the value of Ψ(X) from equation (15) and X = K + Cr 2 K − 1 into the equation (11),we get y(r) = 4(1 + Cr 2 ) 3 where g(r) = 7+4Cr 2 3 , F (r) = 5 + 2Cr 2 + 3g(r) 2(1 + Cr 2 ) . Therefore the expressions of density and pressure are given by where +A 2 (−1.25) F (r) (−2.25) 3 Physical Features for Well Behaved Solution 1. From equation (6),we observe (e λ ) (r=0) = 1 and (e ν ) (r=0) > 0. This show that metric potentials are singularity free and positive at center.It is monotonically increasing with increasing the radius of the compact star(see Fig.1).
2. Pressure p should be zero at the boundary r = R.
The above two conditions imply that pressure and density should be maximum at the center and monotonically decreasing towards the surface(see Fig.2). 5. The velocity of sound (dp/c 2 dρ) 1/2 should be less than that of light throughout the charged fluid sphere (0 ≤ r ≤ R). This is called casual condition.
6. The ratio of pressure to the density (p/c 2 ρ) should be monotonically decreasing with the increasing of r.(see Fig.2) where former inequality denotes weak energy condition (WEC) and later inequality denotes strong energy condition (SEC).
Matching Conditions of Boundary
The solution is smoothly connected to the pressure free boundary with the Schwarzschild exterior metric Beside the above the smooth joining with the Schwarzschild metric which requires the continuity of e λ and e ν across the boundary r = R and we get Using the equations (21) and (22),we get the expressions of arbitrary constant A 1 and A 2 as follow The expression of mass M is given as where
Stability Analysis of Compact Objects
In this section we have studied physical properties of interior of the fluid sphere and equilibrium conditions under different forces.
Tolman-Oppenheimer-Volkoff (TOV) equations
The general-relativistic hydrostatic equations were developed and used to models of compact stars by Tolman, Oppenhiemer and volkoff in (1939). These equations are obtained from Einstein-Maxwell field equations when metric is static and isotropic.The latter hypothesis is predicted to be a well approximation for densest interior of static compact star, because the strong gravitational force is balanced by a huge pressure and rigid body forces have a negligible effect on the structure. In the connection of the microscopic theory for the relation between pressure and energy density, and the mass, this equation gives a equilibrium solution.The Tolman-Oppenheimer-Volkoff(TOV) equation (1939,a) in the presence of charge is given by where σ is charge density,q is charge and M G is the effective gravitational mass which is given by: Plugging the value of M G (r) in equation (27), we get But in our model we have considered an uncharged isotropic fluid distribution i.e., the charge(q ) is vanish so equation (29) becomes The above equation can be expressed into three different components gravitational force (F g ), hydrostatic force (F h ) and electric force (F e ), which are defined as: where we use the same notation as above. Fig.(4) represents the behavior of the generalized TOV equations. We observe from these figures that the system is counterbalanced by the components the gravitational force (F g ) and hydrostatic force(F h ) and the system attains a static equilibrium.
Energy Condition
The energy conditions depend on the matter density and pressure.That follow certain restrictions.Basic information about the energy condition in (2011).Here we focus on the (i) Null energy condition, (ii) Weak energy conditions and(ii)Strong energy condition , which have the following inequalities Using these inequalities we justify the nature of energy conditions for the specific stellar configuration as shown in Fig.(5), that satisfy our result.
Adiabatic index
In order to have an equilibrium configuration the matter must be stable against the collapse of local regions. This requires Le Chateliers principle, also known as local or microscopic stability condition, that the pressure must be monotonically decreasing function of r such that dp dρ ≥ 0. Heintzmann and Hillebrandt (1975) also proposed that compact star with the equation of state are stable for adiabatic index Γ = p+ρ p dp dρ > 4/3. Fig.6 show that Γ > 4/3, so model developed in this paper is stable.
Surface Redshift
The gravitational redshift Z s within a static line element can be obtained as where g tt (R) = e ν(R) = 1 − 2M
R
The maximum possible value of redshift should be at the center of the star and decrease with the increase of radius. Buchdahl (1959) and Straumann (1984) have shown that for an isotropic star the surface redshift Z s ≤ 2.For an anisotropic star Bohmer and Harko (2006) showed that the surface redshift could be increased up to Z s ≤ 5. Ivanov (2002) modified the maximum value of redshift and showed that it could be as high as Z s = 5.211. In this model we have Z s ≤ 1 for compact stars SAX J1808.4-3658,4U 1538-52 and Her X-1. Also it is decreasing towards the boundary(see Fig.7).
Equation of state (EOS)
The equation of state(EoS),i.e. a relation between pressure and density is an important features of neutron star. So in this section we have discuss about EoS.It is wroth-wile to mentioned that different equation of state (EoS) of the neutron star lead to different mass-radius (M-R) relation. Many authors (1998,2002,2000) have suggested that the EoS P = P (ρ) can be well approximated by a linear function of the energy density ρ of compact star at high densities. Some researchers have also expressed more approximated form of the equation of state (EOS) P = P (ρ) as a linear function of energy density ρ (in details see (1989,a, 1990)). Here we find the EoS in a linear function of form P = P (ρ) as, where, Table 3: The numerical values of physical parameters of the star SAX J1808.4-3658 for C = 2.009 × 10 −13 km −2 , K = 7/4 r/R p(km −2 ) ρ(km −2 ) p/ρ dp/dρ where ρ s denotes surface density and α is non-negative constant. Harko and Cheng (2002) have demonstrated that the equation (39), gives the maximum mass of a strange star which is M max = 1.83M when ρ s = 4B ( B = 56M eV f m 3 ). In the present paper, we have developed same relation as considered by (2000). In that work, we have showed that the equation(39) corresponds to self-bound matter at the surface density ρ s . Fig.8 represents the behavior of pressure verses density for compact stars with realistic EoS . In the Fig. 8, we observe that the pressure p vanishes at surface density ρ s i.e. at the boundary of our model. This implies that p can be expressed by interpolation in power of ρ−ρ s . Such parametrization is very convenient for stellar modelling, which also significant to the interior of stable stellar configurations (2000).
Static stability criterion
The most important feature of stability for stellar configuration is static stability criterion (1965,1971). In this criterion, it is postulate that the any stellar configuration has an increasing mass with increasing central density, i.e. dM/dρ 0 > 0 represents stable configuration and vice versa. If the mass remains constant with increasing central density, i.e. dM/dρ 0 = 0 we get the turning point between stable and unstable region. For this model, we obtained M (R) and dM/dρ 0 as follows- M (R) = 12 π ρ 0 R 3 (9 + 56 π ρ 0 R 2 ) and dM dρ 0 = 108 π R 3 (9 + 56 π ρ 0 R 2 ) 2 Hence from Fig.9,we can conclude that presenting model represents static stable configuration.
Conclusion
In this article, we have discuss a new solution of Vaidya-Tikekar model for spherically symmetric uncharged fluid ball and found,it is physically valid solution.The Fluid ball contain an uncharged perfect fluid matter and Schwarzschild exterior metric. Mainly, we perform a detailed investigation of the physical result of high density system like uncharged fluid (2003,2004,2008) and observe that the physical viability and acceptable of the our model in connection with compact star like Her X-1,4U 1538-52 and SAX J1808.4-3658.
It has been observe that the energy density and pressure are positive at the center i.e ρ 0 > 0, p 0 > 0 and monotonically decreasing throughout the fluid ball,see Fig.2. The energy conditions are very important to understand many theorem of general relativity such as singularity theorem of stellar collapse. Fig.5 shows that the energy conditions are positive throughout the star and model satisfy (i) strong energy condition(SEC) and (ii) weak energy condition (WEC) (2012a).We have also studied about the surface redshift. It should be maximum at the center and monotonically decreasing from the center to surface (2002) see Fig.7. The modified TOV equation describes the equilibrium condition see Fig.3 and observe that the gravitational force is balanced by the hydrostatic force. For stability analysis the adiabatic constant(Γ) is an important physical parameter and compact star will be stable if Γ > 4/3 (1975). Fig.6 show that Γ > 4/3, so model developed in this paper is stable.The mass-radius relation must be less than 8/9 (1959). Our model also satisfy this condition. The numerical values of physical quantities are shown in the Table 1-5.We have obtained the EoS for the present compact star model, which is the significant physical property to describe structure of any realistic matter. We can see from equation (38) the pressure is purely function of density. Hence we conclude that this approach may help to describe the structure of compact star. | 3,564.4 | 2019-11-29T00:00:00.000 | [
"Physics"
] |
Multi-Criteria Decision-Making Method Based on Simplified Neutrosophic Linguistic Information with Cloud Model
: This study introduces simplified neutrosophic linguistic numbers (SNLNs) to describe online consumer reviews in an appropriate manner. Considering the defects of studies on SNLNs in handling linguistic information, the cloud model is used to convert linguistic terms in SNLNs to three numerical characteristics. Then, a novel simplified neutrosophic cloud (SNC) concept is presented, and its operations and distance are defined. Next, a series of simplified neutrosophic cloud aggregation operators are investigated, including the simplified neutrosophic clouds Maclaurin symmetric mean (SNCMSM) operator, weighted SNCMSM operator, and generalized weighted SNCMSM operator. Subsequently, a multi-criteria decision-making (MCDM) model is constructed based on the proposed aggregation operators. Finally, a hotel selection problem is presented to verify the effectiveness and validity of our developed approach.
Introduction
Nowadays, multi-criteria decision-making (MCDM) problems are attracting more and more attention.Lots of studies suggest that it is difficult to describe decision information completely because the information is usually inconsistent and indeterminate in real-life problems.To address this issue, Smarandache [1] put forward neutrosophic sets (NSs).Now, NSs have been applied to many fields and extended to various forms.Wang et al. [2] presented the concept of single-valued neutrosophic sets (SVNSs) and demonstrated its application, Ye [3] proposed several kinds of projection measures of SVNSs, and Ji et al. [4] proposed Bonferroni mean aggregation operators of SVNSs.Wang et al. [5] used interval numbers to extend SVNSs, and proposed the interval-valued neutrosophic set (IVNS).Ye [6] introduced trapezoidal neutrosophic sets (TrNSs), and proposed a series of trapezoidal neutrosophic aggregation operators.Liang et al. [7] introduced the preference relations into TrNSs.Peng et al. [8] combined the probability distribution with NSs to propose the probability multi-valued neutrosophic sets.Wu et al. [9] further extended this set to probability hesitant interval neutrosophic sets.All of the aforementioned sets are the descriptive tools of quantitative information.
Zhang et al. [10] proposed a method of using NSs to describe online reviews posted by consumers.For example, a consumer evaluates a hotel with the expressions: 'the location is good', 'the service is neither good nor bad', and 'the room is in a mess'.Obviously, there is active, neutral, and passive information in this review.According to the NS theory, such review information can be characterized by employing truth, neutrality, and falsity degrees.This information presentation method has been proved to be feasible [11].However, in practical online reviews, the consumer usually gives a comprehensive evaluation before posting the text reviews.NSs can describe the text reviews, but they cannot represent the comprehensive evaluation.To deal with this issue, many scholars have studied the combination of NSs and linguistic term sets [12,13].The semantic of linguistic term set provides precedence on a qualitative level, and such precedence is more sensitive for decision-makers than a common ranking due to the expression of absolute benchmarks [14][15][16].Based on the concepts of NSs and linguistic term sets, Ye [17] proposed interval neutrosophic linguistic sets (INLSs) and interval neutrosophic linguistic numbers (INLNs).Then, many interval neutrosophic linguistic MCDM approaches were developed [18,19].Subsequently, Tian et al. [20] introduced the concepts of simplified neutrosophic linguistic sets (SNLSs) and simplified neutrosophic linguistic numbers (SNLNs).Wang et al. [21] proposed a series of simplified neutrosophic linguistic Maclaurin symmetric mean aggregation operators and developed a MCDM method.The existed studies on SNLNs simply used the linguistic functions to deal with linguistic variables in SNLNs.This strategy is simple, but it cannot effectively deal with qualitative information because it ignores the randomness of linguistic variables.
The cloud model is originally proposed by Li [22] in the light of probability theory and fuzzy set theory.It characterizes the randomness and fuzziness of a qualitative concept rely on three numerical characters and makes the conversion between qualitative concepts and quantitative values becomes effective.Since the introduction of the cloud model, many scholars have conducted lots of studies and applied it to various fields [23][24][25], such as hotel selection [26], data detection [27], and online recommendation algorithms [28].Currently, the cloud model is considered as the best way to handle linguistic information and it is used to handle multiple qualitative decision-making problems [29][30][31], such as linguistic intuitionistic problems [32] and Z-numbers problems [33].Considering the effectiveness of the cloud model in handling qualitative information, we utilize the cloud model to deal with linguistic terms in SNLNs.In this way, we propose a new concept by combining SNLNs and cloud model to solve real-life problems.
The aggregation operator is one of the most important tool of MCDM method [34][35][36][37].Maclaurin symmetric mean (MSM) operator, defined by Maclaurin [38], possess the prominent advantage of summarizing the interrelations among input variables lying between the maximum value and minimum value.The MSM operator can not only take relationships among criteria into account, but it can also improve the flexibility of aggregation operators in application by adding parameters.Since the MSM operator was proposed, it has been expanded to various fuzzy sets [39][40][41][42][43].For example, Liu and Zhang [44] proposed many MSM operators to deal with single-valued trapezoidal neutrosophic information, Ju et al. [45] proposed a series of intuitionistic linguistic MSM aggregation operators, and Yu et al. [46] proposed the hesitant fuzzy linguistic weighted MSM operator.
From the above analysis, the motivation of this paper is presented as follows: 1.
The cloud model is a reliable tool for dealing with linguistic information, and it has been successfully applied to handle multifarious linguistic problems, such as probabilistic linguistic decision-making problems.The existing studies have already proved the effectiveness and feasibility of using the cloud model to process linguistic information.In view of this, this paper introduces the cloud model to process linguistic evaluation information involved in SNLNs.
2.
As an efficient and applicable aggregation operator, MSM not only takes into account the correlation among criteria, but also adjusts the scope of the operator through the transformation of parameters.Therefore, this paper aims to accommodate the MSM operator to simplified neutrosophic linguistic information environments.
The remainder of this paper is organized as follows.Some basic definitions are introduced in Section 2. In Section 3, we propose a new concept of SNCs and the corresponding operations and distance.In Section 4, we propose some simplified neutrosophic cloud aggregation operators.In Section 5, we put forward a MCDM approach in line with the proposed operators.Then, in Section 6, we provide a practical example concerning hotel selection to verify the validity of the developed method.In Section 7, a conclusion is presented.
Preliminaries
This section briefly reviews some basic concepts, including linguistic term sets, linguistic scale function, NSs, SNSs, and cloud model, which will be employed in the subsequent analyses.
Linguistic Term Sets and Linguistic Scale Function
} be a finite and totally ordered discrete term set, where N * is a set of positive integers, and h τ is interpreted as the representation of a linguistic variable.Then, the following properties should be satisfied: (1) The linguistic term set is ordered:
Definition 3 ([1]
).Let X be a space of points (objects), and x be a generic element in X.A NS A in X is characterized by a truth-membership function T A (x), a indeterminacy-membership function I A (x), and a falsity-membership function F A (x). T A (x), I A (x), and F A (x) are real standard or nonstandard subsets ]0 In fact, NSs are very difficult for application without specification.Given this, Ye [34] introduced SNSs by reducing the non-standard intervals of NSs into a kind of standard intervals.
Definition 4 ([17]).
Let X be a space of points with a generic element x.
Then, an SNS B in X can be defined as B = {(x, T B (x), , and F B (x) : X → [0, 1] .In addition, the sum of T B (x), I B (x), and F B (x) satisfies 0 ≤ T B (x) + I B (x) + F B (x) ≤ 3.For simplicity, B can be denoted as B = T B (x), I B (x), F B (x) , which is a subclass of NSs.
Definition 5 ([20]).
Let X be a space of points with a generic element x, and In addition, T C (x), I C (x), and F C (x) represent the degree of truth-membership, indeterminacy-membership, and falsity-membership of the element x in X to the linguistic term h C (x), respectively.For simplicity, a SNLN is expressed as h C (x), (T C (x), I C (x), F C (x)) .
Definition 6 ([22]
).Let U be a universe of discourse and T be a qualitative concept in U. x ∈ U is a random instantiation of the concept T, and x satisfies x ∼ N Ex, (En * ) 2 , where En * ∼ N En, He 2 , and the degree of certainty that x belongs to the concept T is defined as then the distribution of x in the universe U is called a normal cloud, and the cloud C is presented as C = (Ex, En, He).
Definition 8 ([33]
).Let H i be a linguistic term in H = { H i |i = 1, 2, ..., 2t + 1}, and f be a linguistic scale function.Then, the procedures for converting linguistic variables to clouds are presented below.
(2) Calculate Ex i : (3) Calculate En i : Let (x, y) be a cloud droplet.Since x ∼ N Ex i , En 2 i , we have 3En i = max{X max − Ex i , Ex i − X min } in the light of 3σ principle of the normal distribution curve.
Simplified Neutrosophic Clouds and the Related Concepts
Based on SNLNs and the cloud transformation method, a novel concept of SNCs is proposed.Motivated by the existing studies, we provide the operations and comparison method for SNCs and investigate the distance measurement of SNCs.
Distance for SNCs
Definition 11.Let a = (Ex 1 , En 1 , He 1 ), (T 1 , I 1 , F 1 ) and b = (Ex 2 , En 2 , He 2 ), (T 2 , I 2 , F 2 ) be two SNCs, then the generalized distance between a and b is defined as . Thus, we have In a similar manner, we can also obtain Thus, there is b, c) can be proved similarly.Hence, the proof of Theorem 2 is completed.
SNCs Aggregation Operators
Maclaurin [38] introduced the MSM aggregation operator firstly.In this section, the MSM operator is expanded to process SNC information, and the SNCMSM operator and the weighted SNCMSM operator are then proposed.
Definition 12 ([38]
).Let x i (i = 1, 2, • • • , n) be the set of nonnegative real numbers.A MSM aggregation operator of dimension n is mapping MSM (m) : (R + ) n → R + , and it can be defined as where (i In the subsequent analysis, assume that i 1 < i 2 <, ..., < i m .In addition, x i j refers to the i j th element in a particular arrangement. It is clear that MSM (m) has the following properties: (1) Idempotency.If x ≥ 0 and x i = x for all i, then MSM (m) (x, x, ..., x) = x.
SNCMSM Operator
In this subsection, the traditional MSM (m) operator is extended to accommodate the situations where the input variables are made up of SNCs.Then, the SNCMSM operator is developed.
Definition 13.
Let a i = (Ex i , En i , He i ), (T i , I i , F i ) (i = 1, 2, ..., n) be a collection of SNCs.Then, the SNCMSM operator can be defined as where m = 1, 2, ..., n and (i In light of the operations of SNCs depicted in Definition 10, Theorem 3 can be acquired.Theorem 3. Let a i = (Ex i , En i , He i ), (T i , I i , F i ) (i = 1, 2, ..., n) be a collection of SNCs, the aggregated value acquired by the SNCMSM operator is also a SNC and can be expressed as Proof. .
The proof of Theorem 3 is completed.
Theorem 4. (Idempotency) If a i = a = ( Ex a , En a , He a , T a , I a , F a ) for all i = 1, 2, ..., n, then SNCMSM (m) (a, a, • • • , a) = a = ( Ex a , En a , He a , T a , I a , F a ).
Proof.Since a i = a, there are Theorem 5 can be proved easily in accordance with Definition 13 and Theorem 3. Three special cases of the SNCMSM operator are discussed below by selecting different values for the parameter m.
(1) If m = 1, then the SNCMSM operator becomes the simplest arithmetic average aggregation operator as follows: (2) If m = 2, then the SNCMSM operator is degenerated to the following form: (3) If m = n, then the SNCMSM operator becomes the geometric average aggregation operator as follows:
Weighted SNCMSM Operator
In this subsection, a weighted SNCMSM operator is investigated.Moreover, some desirable properties of this operator are analyzed.Definition 14.Let a i = (Ex i , En i , He i ), (T i , I i , F i ) (i = 1, 2, ..., n) be a collection of SNCs, and w = (w 1 , w 2 , ...w n ) T be the weight vector, with w i ∈ [0, 1] and ∑ n i=1 w i = 1.Then, the weighted simplified neutrosophic clouds Maclaurin symmetric mean (WSNCMSM) operator is defined as where m = 1, 2, ..., n and (i The specific expression of the WSNCMSM operator can be obtained in accordance with the operations provided in Definition 10.Theorem 6.Let a i = (Ex i , En i , He i ), (T i , I i , F i ) (i = 1, 2, ..., n) be a collection of SNCs, and m = 1, 2, ..., n.Then, the aggregated value acquired by the WSNCMSM operator can be expressed as Theorem 6 can be proved similarly according to the proof procedures of Theorem 3.
The proof of Theorem 7 is completed.
The specific expression of the GWSNCMSM operator can be obtained in accordance with the operations provided in Definition 10.Theorem 8. Let a i = (Ex i , En i , He i ), (T i , I i , F i ) (i = 1, 2, ..., n) be a collection of SNCs, and m = 1, 2, ..., n.Then, the aggregated value acquired by the GWSNCMSM operator can be expressed as Theorem 8 can be proved similarly according to the proof procedures of Theorem 3.
MCDM Approach under Simplified Neutrosophic Linguistic Circumstance
In this section, a MCDM approach is developed on the basis of the proposed simplified neutrosophic cloud aggregation operators to solve real-world problems.Consider a MCDM problem with simplified neutrosophic linguistic evaluation information, which can be converted to SNCs.Then, let A = {a 1 , a 2 , ..., a m } be a discrete set of alternatives, and C = {c 1 , c 2 , ..., c n } be the set of criteria.Suppose that the weight of the criteria is w = (w 1 , w 2 , ..., w s ) T , where w k ≥ 0, and The original evaluation of alternative a i under criterion c j is expressed as SNLNs γ ij = s ij , T ij , I ij , F ij (i = 1, 2, . . ., m; j = 1, 2, . . ., n).The primary procedures of the developed method are presented in the following.
Step 1: Normalize the evaluation information.
Usually, two kinds of criteria-benefit criteria and cost criteria-exist in MCDM problems.Then, in accordance with the transformation principle of SNLNs [42], the normalization of original evaluation information can be shown as for benifit criterion, Step 2: Convert SNLNs to SNCs.
Based on the transformation method described in Section 2.4 and Definition 9, we can convert SNLNs to SNCs.
The SNC evaluation information can be obtained as Step 3: Acquire the comprehensive evaluation for each alternative.
The WSNCMSM operator or the GWSNCMSM operator can be employed to integrate the evaluation of a ij (j = 1, 2, ..., n) under all criteria and acquire the comprehensive evaluation a i = (Ex i , En i , He i ), (T i , I i , F i ) for the alternative a i .
Step 4: Compute the distance between the comprehensive evaluation of a i and the PIS/NIS.First, in accordance with the obtained overall evaluation values, the positive ideal solution (PIS) a + and negative ideal solution (NIS) a − are determined as Second, in accordance with the proposed distance of SNCs, the distance d(a i , a + ) between a i and a + , and the distance d(a i , a − ) between a i and a − can be calculated.
Step 5: Compute the relative closeness of each alternative.
In the following, the relative closeness of each alternative can be calculated as where d(a i , a + ) and d(a i , a − ) are obtained in Step 4.
Step 6: Rank all the alternatives.
In accordance with the relative closeness I i of each alternative, we can rank all the alternatives.The smaller the value of I i , the better the alternative a i is.
Illustrative Example
This section provides a real-world problem of hotel selection (adapted from Wang et al. [49]) to demonstrate the validity and feasibility of the developed approach.
Problem Description
Nowadays, consumers often book hotels online when traveling or on business trip.After they leave the hotel, they may evaluate the hotel and post the online reviews on the website.In this case, the online reviews are regard as the most important reference for the hotel selection decision of potential consumers.In order to enhance the accuracy of hotel recommendation in line with lots of online reviews, this study devotes to applying the proposed method to address hotel recommendation problems effectively.In practical hotel recommendation problems, many hotels (e.g., 10 hotels) need to be recommended for consumers.In order to save space, we select five hotels from a tourism website for recommendation here.The developed approach can be similarly applied to address hotel recommendation problems with many hotels.The five hotels are represented as a 1 , a 2 , a 3 , a 4 and a 5 .The employed linguistic term set is described as follows: S = {s 1 , s 2 , s 3 , s 4 , s 5 , s 6 , s 7 } = {extremely poor, very poor, poor, fair good, very good, extremely good} In this paper, we focus on the four hotel evaluation criteria including, c 1 , location (such as near the downtown and is the traffic convenient or not); c 2 , service (such as friendly staff and the breakfast); c 3 , sleep quality (such as the soundproof effect of the room); and c 4 , comfort degree (such as the softness of the bed and the shower).Wang et al. [49] introduced a text conversion technique to transform online reviews to neutrosophic linguistic information.Motivated by this idea, the online reviews of five hotels under four criteria can be described as SNLNs, as shown in Table 1.For simplicity, the weight information of the four criteria is assumed to be w = (0.25, 0.22, 0.35, 0.18) T .
Illustration of the Developed Methods
According to the steps of the developed method presented in Section 5, the optimal alternative from the five hotels can be determined.
Case 1-Approach based on the WSNCMSM Operator.
Let linguistic scale function be f 1 (h x ), and m = 2 in Equation ( 13) in the subsequent calculation.Then, the hotel selection problem can be addressed according to the following procedures.
Step 1: Normalize the evaluation information.
Obviously, the four criteria are the benefit type in the hotel selection problem above.Thus, the evaluation information does not need to be normalized.
Step 4: Compute the distance between the comprehensive evaluation of a i and the PIS/NIS.
Step 6: Rank all the alternatives.
On the basis of the comparison rule, the smaller the value of I i , the better the alternative a i is.We can rank the alternatives as a 5 a 1 a 3 a 4 a 2 .The best one is a 5 .
And the positive ideal point is determined as a + = (6.2421, 0.3334, 0.2307), (0.5675, 0.503, 0.229) , the negative ideal point is determined as a − = (4.0855, 0.6986, 0.328), (0.4449, 0.6791, 0.3688) .Then, the results of the distance between a * i and a + , and the distance between a * i and a − are obtained as Therefore, the relative closeness of each alternative is calculated as According to the results of I i , we can rank the alternatives as a 5 a 1 a 3 a 4 a 2 .
The best one is a 5 , which is the same as the obtained result in the situation m = 2.
Case 2-Approach Based on the GWSNCMSM Operator
Let the linguistic scale function be f 1 (h x ), and m = 2, p 1 = 1, p 2 = 2 in Equation ( 15) in the subsequent calculation.Then, the hotel selection problem can be addressed according to the following procedures.
Step 1: Normalize the evaluation information.
Obviously, the four criteria are the benefit type in the hotel selection problem above.Thus, the evaluation information does not need to normalize.
The obtained SNCs are the same as those in Case 1.
Step 3: Acquire the comprehensive evaluation for each alternative.
On the basis of the comparison rule, the smaller the value of I i , the better the alternative a i is.We can rank the alternatives as a 5 a 3 a 1 a 4 a 2 , the best one is a 5 .
Using the parameters m = 2, p 1 = 1, and p 2 = 2 in the aggregation operators, the ranking results acquired by the developed methods with the WSNCMSM operator and the GWSNCMSM operator are almost identical, and these rankings are described in Table 3.The basically identical ranking results indicate that the developed methods in this paper have a strong stability.
Comparative Analysis and Sensitivity Analysis
This subsection implements a comparative study to verify the applicability and feasibility of the developed method.The developed method aims to improve the effectiveness of handling simplified neutrosophic linguistic information.Therefore, the proposed method can be demonstrated by comparing with the approaches in Wang et al. [21] and Tian et al. [20] that deal with SNLNs merely depend on the linguistic functions.The comparison between the developed method and two existed approaches is feasible because these three methods are based on the same information description tool and the aggregation operators developed in these methods have the same parameter characteristics.Two existing methods are employed to address the same hotel selection problem above, and the ranking results acquired by different approaches are described in Table 4.
As described in Table 4, the rankings acquired by the developed approaches and that obtained by the existed approaches have obvious difference.However, the best alternative is always a 5 , which demonstrates that the developed approach is reliable and effective for handling decision-making problems under simplified neutrosophic linguistic circumstance.There are still differences between the approaches developed in this paper and the methods presented by Wang et al. [21] and Tian et al. [20], which is that the proposed approaches use the cloud model instead of linguistic function to deal with linguistic information.The advantages of the proposed approaches in handling practical problems are summarized as follows: First, comparing with the existing methods with SNLNs, the proposed approaches uses the cloud model to process qualitative evaluation information involved in SNLNs.The existing methods handle linguistic information merely depending on the relevant linguistic functions, which may result in loss and distortion of the original information.However, the cloud model depicts the randomness and fuzziness of a qualitative concept with three numerical characteristics perfectly, and it is more suitable to handle linguistic information than the linguistic function because it can reflect the vagueness and randomness of linguistic variables simultaneously.
Second, being compared with the simplified neutrosophic linguistic Bonferroni mean aggregation operator given in Tain et al. [20], the simplified neutrosophic clouds Maclaurin symmetric mean operator provided in this paper take more generalized forms and contain more flexible parameters that facilitate selecting the appropriate alternative.
In addition, being compared with SNLNs, SNCs not only provide the truth, indeterminacy, and falsity degrees for the evaluation object, but also utilize the cloud model to characterize linguistic information effectively.
The ranking results may vary with different values of parameters in the proposed aggregation operators.Thus, a sensitivity analysis will be implemented to analyze the influence of the parameter p j on ranking results.The obtained results are presented in Table 5.The data in Table 5 indicates that the best alternative is a 5 or a 1 , and the worst one is a 2 when using the GWSNCMSM operator with different p j under m = 2 to fuse evaluation information.When p 1 = 0, we can find the ranking result has obvious differences with other results.Therefore, p 1 = 0 is not used in practice.The data in Table 5 also suggests that the ranking vary obviously when the value of p 1 far exceeds the value of p 2 .Thus, it can be concluded that the values of p 1 and p 2 should be selected as equally as possible in practical application.The difference of ranking results in Table 5 reveals that the values of p 1 and p 2 have great impact on the ranking results.As a result, selecting the appropriate parameters is a significant action when handling MCDM problems.In general, the values can be set as p 1 = p 2 = 1 or p 1 = p 2 = 2, which is not only simple and convenient but it also allows the interrelationship of criteria.It can be said that p 1 and p 2 are correlative with the thinking mode of the decision-maker; the bigger the values of p 1 and p 2 , the more optimistic the decision-maker is; the smaller the values of p 1 and p 2 , the more pessimistic the decision-maker is.Therefore, decision-makers can flexibly select the values of parameters based on the certain situations and their preferences and identify the most precise result.
Conclusions
SNLNs take linguistic terms into account on the basis of NSs, and they make the data description more complete and consistent with practical decision information than NSs.However, the cloud model, as an effective way to deal with linguistic information, has never been considered in combination with SNLNs.Motivated by the cloud model, we put forward a novel concept of SNCs based on SNLNs.Furthermore, the operation rules and distance of SNCs were defined.In addition, considering distinct importance of input variables, the WSNCMSM and GWSNCMSM operators were proposed and their properties and special cases were discussed.Finally, the developed approach was successfully applied to handle a practical hotel selection problem, and the validity of this approach was demonstrated.
The primary contributions of this paper can be summarized as follows.First, to process linguistic evaluation information involved in SNLNs, the cloud model is introduced and used.In this way, a new concept of SNCs is presented, and the operations and distance of SNCs are proposed.Being compared with other existing studies on SNLNs, the proposed method is more effective because the cloud model can comprehensively reflect the uncertainty of qualitative evaluation information.Second, based on the related studies, the MSM operator is extended to simplified neutrosophic cloud circumstances, and a series of SNCMSM aggregation operators are proposed.Third, a MCDM method is developed in light of the proposed aggregation operators, and its effectiveness and stability are demonstrated using the illustrative example, comparative analysis, and sensitivity analysis.
In some situations, asymmetrical and non-uniform linguistic information exists in practical problems.For example, customers pay more attention to negative comments when selecting hotels.In future study, we are going to introduce the unbalanced linguistic term sets to depict online linguistic comments and propose the hotel recommendation method.
Then, according to Definition 11, the Hamming distance d Hamming (a, b) and Euclidean distance d Euclidean (a, b) are calculated as d Hamming (a, b) = 0.4304, and d Euclidean (a, b) = 0.3224.
Table 3 .
Ranking results based on different operators.
Table 5 .
Ranking results with different p j under m = 2. a 1 a 3 a 2 a 4 0 1 a 4 a 5 a 3 a 2 a 1 1 2 a 5 a 3 a 1 a 4 a 2 1 3 a 3 a 5 a 1 a 4 a 2 1 4 a 3 a 5 a 1 a 4 a 2 1 5 a 3 a 1 a 5 a 4 a 2 2 1 a 5 a 1 a 3 a 4 a 2 3 1 a 5 a 1 a 3 a 4 a 2 4 1 a 1 a 5 a 3 a 4 a 2 5 1 a 1 a 3 a 5 a 4 a 2 0.5 0.5 a 5 a 1 a 3 a 4 a 2 1 1 a 5 a 1 a 3 a 4 a 2 2 2 a 5 a 1 a 3 a 4 a 2 3 3 a 5 a 1 a 3 a 4 a 2 4 4 a 5 a 1 a 3 a 4 a 2 5 5 a 5 a 1 a 3 a 4 a 2 | 6,941.2 | 2018-06-01T00:00:00.000 | [
"Computer Science"
] |
The living wage as an income range for decent work and life
Purpose – A “living” wage (LW) is conventionally defined as enabling meaningful participation in society above subsistence through, for example, recreation, supporting a family, and savings. There is increasing debate over LWs due to growing inequality, rising living costs and welfare reform but this remains largely framed by the econometric cost-benefit parameters that apply to minimum wage regulation. The capabilities approach advocated by Sen (1999) offers a different perspective that is inclusive of choice, contingencies and the inter-connections between quality of (paid) work and private life. The paper aims to discuss these issues. Design/methodology/approach – The paper adopts this framework and utilises a qualitative exploration of the narratives of 606 New Zealand employees to understand perceived wage effectiveness. The results suggest that a focus on a specific LW rate might be conceptually limiting, in comparison to a LW range. Findings – First, the findings indicate that there is a pivot range in which people move from self-assessed “survival” to “decent” income. Second, a LW may have more than a simply monetary effect in better meeting employees’ living costs; it can also improve well-being through subjective perceptions of valued freedoms to do with job satisfaction, equity and security. Originality/value – The results thus draw attention to a wider notion of a LW in terms of personal and family well-being, utilising a capabilities approach, with implications for organisational practice, policy and theory concerning sustainable livelihood and the UN Sustainable Development Goals.
Introduction
Most countries regulate pay through "minimum wages", usually by law but also through collective bargaining (International Labour Organization, 2013).Wage minima may vary by sector, region or individual criteria such as age but the arrangements are usually designed both to provide a degree of income protection for workers, in combination with transferable benefits, and to prohibit "unfair" competition based on labour exploitation.
The declining wage share, rising inequality and increased cost of living observed across the developed world has recently shifted attention to "decent" income levels beyond
Dimensions of a LW
A LW can be defined as dependent on context, shaped by objective considerations such as the cost of living and subjective (culturally or historically specific) expectations of needs.In practice, a LW is often defined and calculated on the basis of econometric analyses that use basic cost of living estimates, income distribution as a percentage of median income, or a combination of both (Carr, Parker, Arrowsmith and Watters, 2016).This notion of a LW rate is usually based on narrow assessments of the basic economic needs of a "typical" household unit (Anker and Anker's (2017) recent definition is more workable but is still not broad).Though useful in itself, this neglects employee agency.A broader perspective would approach the LW in terms of employee understandings and impacts.In this paper, we define a LW as a wage level at which employees perceive and experience a step-change in their capability to enjoy meaningful organisational, personal and social lives.
Pay rates are not simply the outcome of "market forces".Notions of what is fair and appropriate inform pay-setting structures and outcomes as much as labour market supply and demand due to the potential for conflict and need to elicit productive behaviour (Arrowsmith, 2009).Conflict may arise because pay is the means of livelihood for employees yet a foremost cost to employers; however, this is attenuated if pay correlates with productivity such that increases are offset by reductions in unit labour costs.Pay has potentially important behavioural effects because much effort and performance is exercised voluntarily (Colling and Terry, 2010), and perceptions of "decent" levels and processes of pay combine with job content, conditions and hours to shape employee commitment and work motivation (Stevenson and Wolfers, 2013).Pay is thus not simply a function of the external labour market but reflects the need of employers to recognise employee concerns in order to motivate discretionary effort (De Saá-Pérez and García-Falcón, 2002;Jawahar and Stone, 2011).
The LW concept incorporates but goes beyond such instrumental performance considerations by focussing on quality-of-life as a goal in itself.This perspective is grounded in traditions of morality and humanism (Bennett, 2014), reflecting an age-old idea that "wages should be sufficiently high to enable the labourer to live in a manner consistent with the dignity of a human being" (Ryan, 1906, p. vii).This dual focus on human well-being and productive development closely resembles the capabilities approach that has been applied across the social sciences (Deneulin, 2009).In Sen's (1999, p. 75) seminal account, a person's capabilities are essentially the meaningful freedoms he or she enjoys to lead the kind of life, and quality-of-life, they have reason to value.
Essentially, capability does not depend on the actual resource endowment that individuals may have (such as income) but rather on what people manage to achieve with their resource endowment given the freedom to choose through the exchange of resources.A human capability approach to LWs thus emphasises the impact of income on the autonomy of workers' choices in the exchange of resources such as time and effort for money, as well as of money for goods and services.These choices, which are context specific, help to shape work motivation and perceived life quality, themselves subjective measures of individual capabilities.
Operationalising a LW using the capabilities approach is not straightforward given the diversity of individual characteristics and context and because there a large number of constructs and measures that can be used, whether for individuals (Waltman, 2004) or incorporating wider understandings of well-being such as at the household level (Stabile, 2008).These considerations of capability acknowledge not just material conditions but also sense of empowerment, commitment, psychological contract and job or career satisfaction (Clark, 2005).Ideally, an evaluation of the effectiveness of a LW in terms of its impact on human capabilities should involve broader considerations of a holistic evaluation of individual, family, organisational and social well-being.
We attempt to adopt this broader view here, accepting work motivation and perceived life quality as subjective indicators of individual capabilities which may be related to pay (Chiu and Chen, 2005;Riketta, 2008).Similarly, the organisational psychology literature highlights the relevance of factors such as wage relativities in informing perceived job satisfaction and well-being (Brown et al., 2008).Here, we use an exploratory capability approach based on selected indicators of work and life well-being.It is not our intention to utilise an exhaustive list of potential "capabilities" and "functionings" (in Sen's (1999) terminology, the desired outcomes of a person's capabilities), but rather to examine how income levels might relate to human capability or, more simply, perceived quality-of-life.This might be understood as qualitative since individuals define, for example, their sense of accomplishment or purpose in distinct family, work and community settings, in accordance with their goals (Clark, 2005).
A key concern regarding a LW, and an objective of this paper, is thus to explore the actual and relative levels of income that impact on workers' perceived realisable
877
LW as an income range for decent work and life opportunities, that is, their capabilities.More specifically, we explore the meaning of a LW based on qualitative insights as reported by employees in order to address two research questions drawn from the human capability perspective: RQ1.Is there an incomemost likely a threshold or "pivot range" rather than a universal rate or "pivot point"that might transform the perceived capability of workers in transitioning from "survival" to "decent living"?
RQ2.What contextual factors might impact on workers' perceived work-life quality in this respect?
Methodology
The research was based on an online survey delivered in 2014 through various media platforms, details and overall quantitative analyses of which have been presented elsewhere (Carr, Parker, Arrowsmith, Watters and Jones, 2016).A sample of 1,183 employees participated in the survey under conditions of informed consent and confidentiality, following a low-risk screening process framed by University Ethics Committee protocol.
Respondents were invited to respond to an open-ended question where they were asked to evaluate their wage or salary and provide views on the factors impacting on work-life quality.A total of 712 (60.2 per cent) participants provided comments but 106 of these were removed due to irrelevance or incompletion.This produced 606 usable participant narratives for this qualitative study.
The sample was diverse in terms of age, region, gender, occupation and income level though no claim is made for representativeness.For example, 41 per cent (n ¼ 249) were aged between 51-65 years (compared to 28 per cent under 40 years) and 24.4 per cent (n ¼ 148) were male; the former might reflect a tendency for younger respondents to reply via social media, which circumscribed narrative comment, and the latter a higher concentration of females in low-paid work (New Zealand Ministry for Women, 2016).Promotion of the survey was targeted at lower-waged workers (though community groups, trade unions and media appeals) but job status and occupation ranged from part-time, entry-level jobs to managerial and professional roles.Annual individual salary data thus ranged from New Zealand Dollar (NZD)3,000 to NZD400,000 (mean ¼ 60,787;SD ¼ 34,773).This range of pay and job types did, however, facilitate analyses of respondent perspectives concerning the different contextual relationships between pay, perceived work-life quality and job characteristics.
A template analysis of the data was adopted (King, 1998); this approach lies between content analysis, which applies a predetermined list of codes (Krippendorff, 2004), and a grounded theory approach in which themes surface within a loosely structured framework (Glaser and Strauss, 1967).The qualitative analysis software NVivo 11 was used to classify, sort and arrange the narrative data and to examine relationships between the themes that emerged.A two-step analytical process was adopted.The first stage involved "descriptive coding" (Richards, 2009).In order to understand the potential "pivot range" of income related to capability mobility, participants' perceptions of the sufficiency of income was coded according to classificatory nodes extracted from the qualitative data.These nodes/categories were obtained using the search function of NVivo 11 to extract the most frequently used word(s) to describe subjective views of the quality of income.This approach generated six categories that we labelled, based on the content of responses, "struggle" ( frequency ¼ 227), "barely enough" (109), "low pay" (203) "fair" (246), "comfortable" (99) and "really good" (84).These nodes were then used for coding.This is, of course, an interpretive rather than objective process, and the challenges of categorising narrative data into usefully descriptive and explanatory "variables" are familiar in qualitative research (Black, 2006), especially as it is important to "let the participants speak for themselves" (Maykut and Morehouse, 2005, p. 42).In order to ensure consistency during the analytic process, working definitions of each variable were generated and applied based on common respondent references to basic material and lifestyle factors, and research team members independently categorised and where necessary discussed each comment.Any coding discrepancies were reviewed and adjustments made until consensus was reached among the coders.
The second process involved "topic coding" (Richards, 2009) to develop understanding of the factors impacting on perceived work-life quality.A list of nodes was generated, in a similar fashion to the above coding process for salary references, concerning perceived capabilities and wider contextual factors.Coding was then applied to explore the components of these topics, based on participants' comments.The coding framework was refined through a collective process of reading and re-reading the self-completed documents.Nodes were then compared and merged to form categories, with each category thoroughly analysed to identify recurrent patterns and themes.The "constant comparative method", which is commonly used in grounded approaches (Lofland et al., 2006) was used to compare how different themes were discussed by different participants.This process served as an internal validation tool to enhance the credibility and potential transferability of the research (Guba and Lincoln, 1994).
Findings
The results are organised according to the two research questions.First, participants' perceptions of income effectiveness are explored and, in particular, income ranges in which participants might move from "survival" to perceived "decent" income resulting in a transformation of the perceived capability.Second, contextual factors which impact on perceived work-life quality are explored.In both sections, quotations are used to illustrate important themes.
Pay, capabilities and exploring a pivot range Individual narratives were coded according to participants' evaluation of their income.Using a demographic question in the survey regarding annual income, we can construct a matrix to identify the numbers of participants in each category of perceived capabilities (using self-assessed well-being as a proxy) from each income group (Table I).Individual income is clearly related to perceptions of well-being though, as the narratives show, this is mediated by personal circumstances (e.g.number of earners and dependents) and perceptions of job content and worth.
Taking each of the capabilities proxy indicators in turn, 8 per cent of participants (n ¼ 49) said that they struggled to get by.Examination of the data narratives for this subset revealed that a sense of insecurity was common among these participants.They found it difficult to meet current basic needs and were worried that the future could be still more difficult.Low income hindered capability development due to an impact on quality-of-living and psychological well-being.For example: It doesn't work well for me at all.I cannot afford to live in my own home as I cannot afford a mortgage or rent plus all other expenses.If my car broke or I needed dental care, I struggle to pay.I am too scared to ask for more as I could only get a casual contact and I think my employer will get rid of me if I ask for more pay.I work 10 and seven hours without one break […] I'm living on a knife edge from week to week wondering how I can make ends meet (Female, 51-65 years, beauty therapist, NZD25,000 per annum) [1].
It is very difficult because we fall into the category that we don't make enough to live on properly but we don't make less enough to get help i.e. community service cards etc […] ☹ [emoticon] (Male, 41-50, Community Service Coordinator, NZD28,210 per annum).
LW as an income range for decent work and life
Another 12 per cent of the participants (n ¼ 75) described their remuneration as "barely enough" in that current basic needs were usually met but there was little or no provision for eventualities.As one respondent put it: I feel that I am able to meet my basic needs but find it difficult to save money despite earning "above minimum" wage.This is disconcerting since I believe I am good with money such as having low living costs, no debt and limiting spending on "extras" (Female, 31-40 years, customer service, NZD29,681 per annum).
These respondents also reported personal and family stress: I worked hard in not-so-good jobs to finally secure a good job but I find that I barely earn enough to pay for daily needs.I can pay the mortgage and food and almost all the basic living costs but I can't afford to put money away for any maintenance or savings.I can't always afford to pay insurance unless we eat rice or cheap non-nutritionally balanced food.We can't afford good quality fruit, vegetables and meat.The kids don't have holidays, Sunday drives never happen and they buy their treats like ice-cream or lollies [sweets] out of pocket money which I give them from savings of buying less meat.I feel sad when others invite me out and I have to say no because I don't have tidy clothes to wear as I save them for work.I don't have money to pay for outings or babysitter!(Female, 41-50 years, early childhood teacher, NZD39,000 per annum).
A further 9 per cent of participants (n ¼ 55) assessed their income as "low pay".Basic needs were generally (just) met but there was frustration over considerations such as pay relative to work and related effort.These participants tended to explicitly extend consideration of capabilities beyond meeting basic living needs to include psychological fulfilments including through work.They were also more likely to reference the pay and perceived job experiences of others: It sucksyet [rent, petrol and food] the things I classify as essential basics in any family) go up.It is good I have a job -I can help my family in a way of making our cash flow easier but when you travel 100km a day to get to and from your job, collecting kids from after school care programs running around for sports etc. and eating on the run, there is never enough hours in the day.Sometimes wonder whether it is at all worth it.I just wish work would appreciate the staff a bit more for what and how much work we actually do considering we should all be paid a lot moreespecially because a lot of us do the so called unpaid hours at work (Female, 41-50 years, patient care assistant, NZD35,400 per annum).
I am paid less than other people I know working in the same profession for other companies.The business I work for generally has a high staff turn-over, primarily due to the low pay (Male, 21-30 years, software developer, NZD40,000 per annum).
In the next respondent group, more than 41 per cent participants (n ¼ 249) considered their income to be "fair" in terms of meeting their needs.This was often linked to personal circumstances, especially household size and needs, and/or to satisfying work: I am just under the LW and it is liveable.My children have all grown up and have left home, so my wages do go a little further these days.I am still not able to save a lot -I do try to save 10% of my wages, for holidays and extras but it doesn't seem to get much higher than NZD2,000 before something needs doing such as work on the car or house maintenance.I am in KiwiSaver [a state-sponsored pension scheme] so there is regular saving for my retirement (Female, 51-65 years, community support worker, NZD38,272 per annum).I am fairly paid for my work, which I enjoy.I am considerably well off for my qualifications and experience, yet even my salary with no dependents just manages to cover my expenses including accommodation, food, utilities and debt (Male, 21-30 years, education organiser, NZD78,000 per annum).
In the next group, almost 24 per cent of participants (n ¼ 142) reported a level of well-being enabled by their income.Narratives indicated a growing sense of freedom of choice and a more positive future orientation.For instance: I feel that my pay is slightly generous for what I do, and fair for the amount of effort, skill and expertise I bring to the role.As a household, we have more than enough to live on, which enables us to save for the future and also to share our resources with others.We never have to worry about money, but we are still modest in our spending (Female, 31-40 years, academic information coordinator, NZD58,000 per annum).I am a comfortable, middle class self-employed contractor.I live on a lifestyle block where I am endeavouring to become self-sufficient in food production before I retire in a few years.I look forward to enjoying more of my hobbies (Male, older than 65 years, senior procurement specialist, NZD98,680 per annum).
Finally, a small number of participants (6 per cent; n ¼ 36) considered that their pay served their needs and aspirations "really well".These were mostly, but not exclusively, in higher income brackets.In contrast to many lower earners who perceived a mismatch between their experience, qualifications and levels of pay, higher income earners expressed a sense of satisfaction as a result of pay more closely and fairly approximating their skillset, scarcity value and work-effort.Again, satisfaction with pay was linked to personal circumstances such as satisfying work and ability to save: I receive a very good wage, but I believe I am fairly compensated for my skills, efforts and especially the hours/extra hours I devote to the job (Male, 51-65 years, associate professor, NZD124,000 per annum).
LW as an income range
for decent work and life I am very happy with my income; it is allowing me to save for a house [but] I realise that not everyone has [that] ability (Male, 21-30 years, union organiser, NZD67,000 per annum).
The findings suggest that increased income is positively linked to individual capabilitieswhat Sen (1999, p. 87) refers to as "the substantive freedoms he or she enjoys to lead the kind of life he or she has reason to value"but that this relationship is mediated by personal circumstances and job-related features.In order to further explore whether there might be any particular income or income ranges that might be linked to a step-change in perceived capabilities, the six categories were collapsed into two labelled "survival" (comprising struggle, barely enough, and low pay responses) and "decent" (comprising fair, comfortable and really well responses).The latter, which included more than two thirds of respondents (n ¼ 427) tended to be more numerous as income rose, with a drop in the middle-income bracket and a peak at the NZD100,000-119,999 band (Figure 1).The survival category demonstrated a clear peak in the third income range with a sharp decrease in representation to category four.This category also serves as the transition point at which there is more "decent" pay than "survival" pay respondents per income category.Hence, we observe (in purely numerical terms) a clear general association between higher incomes and pay evaluation in terms of enabling work-life capabilities (with the gap between "decent" and "survival" widening with increases in income as the latter group diminishes in size), and a decrease in the representation of negative perceptions of work and life quality at a pay pivot range of between NZD30,000-39,000.This indicates a qualitative transformation in employee capability at such a pay range.
Contextual influences on perceived work-life quality
As indicated, participants' comments not only identified how perceived capabilities link to pay in a categorical way but also provided insights into the dynamics that shape personal Notes: a Categories of annual income before tax (in NZD) -P1: less than $10,000; P2: $10,000-$19,999; P3: $20,000-$29,999; P4: $30,000-$39,999; P5: $40,000-$49,999; P6: $50,000-$59,999; P7: $60,000-$69,999; P8: $70,000-$79,999; P9: $80,000-$89,999; P10: $90,000-$99,999; P11: $100,000-$119,999; P12: $120,000-$149,999; P13: $150,000-$199,999; P14: greater than $200,000; b an indication of possible transformation of capabilities as a result of changing wages evaluations of work-life quality.Essentially, respondents reflected on whether their pay met their ( financial) needs and whether their job itself met their (emotional) needs.The former links objective, job-related factors such as pay rates and hours of work to cost of living and household circumstances.The latter refers to self-evaluation of skills and abilities and the perception of how well people are treated at work in terms of management and supervision, work scheduling, job security and job content itself.Integrating the objective and subjective perspectives of pay quality offers a boarder understanding of a LW.A high cost of living was commonly referred to by respondents, especially in the major conurbations.Here, higher earners reported that they too were struggling with house price and rent inflation, and lower-paid workers often had to work multiple jobs.Pay was thus seen as inadequate not necessarily because it was an "unfair" rate for the job but because of increasing strain in meeting financial obligations: My wage isn't enough for the costs of everyday life.Power, rent (because I am part of the generation that will never be able to afford a house, especially not in Auckland), internet, food.I am more concerned with the cost of everything else.I would like to see a wage increase mean that I am better off for the long term, rather than a minimal increase that is less than the inflation rate (Male, 21-30 years, early child education, NZD49,000 per annum).
I have recently separated from my partner, and can barely afford to live on my own.What has this country come to?! I feel like I am receiving a decent wage.I have studied a long time to get a good job.But I would be better off overseas.The rate of inflation, and the cost of goods and services in this country are the problem, along with high tax rates, and GST [VAT in UK].More money won't fix the problem […] We need to be lowering costs [of living] (Female, 21-30 years, school social worker, NZD56,800 per annum).
In terms of treatment at work, working time emerged as a source of dissatisfaction.The insecurity and variability of work was a key concern for many in the "survival" pay-capability groups.Many felt trapped in precarious, fluctuating or time-poor work while looking for future alternatives: My hourly rate is reasonably good for a waitress.However, what makes life difficult is that all employees at my work who are not in management are contracted as casual workers so some weeks you may be rostered on for 40 hours (over the high seasons) and other weeks there will only be 15 hours (during the quieter seasons).It all comes back to supply and demand.It is tough raising a family in such an unstable financial environment (Female, 31-40 years, waitress, NZD36,650 per annum).
In comparison, many in the "decent" group felt privileged to have a job that pays and treats them well.Among them, 17 used the word "lucky" to describe the quality of their pay or living standards, which also enabled some to lend assistance to family members in the survival group: I am very lucky to have a job that I love and to earn a very good wage.It means that I am able to help out my niece who lives with me.She is 20 years old and has been employed as a contractor when she should be employed as an employee.The work is precarious and she has no protection under employment law.Sadly, her story is common among many young workers (Female, 51-65 years, regional secretary of a private sector union, NZD81,000 per annum).
These comments further illustrate the importance of understanding employee pay evaluations in a relative manner, taking account of external linkages to living costs and internal features such as job content, workplace relations, and opportunities for development.
883
LW as an income range for decent work and life Discussion Analysis of participant narratives reveals both unique and shared themes, pointing to links between pay and the dynamic capabilities that shape perceived work-life quality.Capabilities understood in this way become a basis for assessing equality of opportunity in terms of an individual's ability to choose and achieve what he or she has reason to value.The narratives provide glimpses of self-perceived capabilities and ultimately what makes life experiences valuable (Sen, 2009).For example, many employees, especially those on low incomes, reveal the constraints on their personal choice not just in terms of wage expenditure but in wage generation, and not simply in financial terms but as being employed in jobs that do not satisfy their intrinsic or perceived future needs.
This paper uses the categorisation and reconciliation of qualitative data around pay and broader perceptions of capabilities to explore how employees connect pay to quality of (working) life.One finding was the suggestion of a pay range around which employee capabilities may be significantly changed within a specific (New Zealand) context.A pivot range of between NZD30,000-40,000 was indicated whereby people are more likely to perceive income in "decent" than "survival" terms.This shift from struggling to make ends meet to a relative well-being zone suggests a potential LW which enables individual capabilities.Interestingly, the findings were consistent with the 2014 LW hourly rate of NZD18.80, as campaigned for by the LW Movement Aotearoa New Zealand which is approximately NZD33,400 per annum for a full-time worker.The then statutory minimum wage was NZD14.25 per hour, or approximately NZD25,300 per annum.
The modal response category for each of the three highest income ranges was "fair" rather than comfortable or "really well" (Table I).This might emphasise the subjective nature of pay evaluation and the need to consider how individuals themselves interpret pay in terms of reward for and enabler of the application of capabilities.At the same time, for some individuals perceived needs and relativities might adjust to income so that real satisfaction is difficult to achieve.As one participant, who earns NZD167,000 explained: This paper offers an analysis of employees' own perceived efficiency of income in terms of capabilities.The focus on capabilities as a measure of the personal and indeed social effectiveness of remuneration provides some practical insights into a potentially impactful LW pivot range, but is equally important in furthering understanding of the potential quality of a LW by exploring the social embeddedness and contextualised nature of work and wages.For example, many studies have given due attention to the importance of relative as well as absolute income in determining perceived work-life quality (Kifle, 2013).Our qualitative findings support this, highlighting that "fairness" is widely used to compare and evaluate wagesand jobs.This extends beyond pay to embrace a package of perceptions around treatment at work.Job security, for example, was a major concern to emerge as an important contextual factor in the research, linked as it is to income security but also perceived value at work.There are clear linkages between job security and work motivation (e.g.Sverke et al., 2002;Reisel et al., 2010).Our findings go further, suggesting specific, wage-related job security concerns that impact on individual perceived work-life quality.For example, some part-time workers were satisfied with their hourly rate but the uncertainty of working hours created significant stress.Others found the fragmentation of working schedules disruptive and incurring costs in transportation and dependent care.Our findings are in line with the argument that economic precarity such as job insecurity and poor work-life balance is a major challenge experienced by the working class (Warren, 2015).Pay rates cannot thus be considered in isolation from working time and working conditions.This leads us to call for a more holistic consideration of the potential impact of a LW.
Conclusion
This paper contributes to an understanding of the LW concept and practice by utilising an exploratory approach based on a capabilities theoretical framework and categorical analysis of employee narratives.In response to two research questions, we first examined and identified a pivot range for a LW, highlighting a discernible income range (NZD30-40,000) within which employees might perceive a step-change in work and personal empowerment.The findings complement business-case arguments that a LW can act as an "efficiency wage" (Marshall, 1920) and draw attention to spill-over and reinforcement effects between work and non-work.
These different contextual factors were explicitly addressed in response to question two.The findings indicated that respondents adjudged the effectiveness and legitimacy ("fairness") of pay in multi-dimensional terms, relating to perceptions of personal inputs (skills and attributes), treatment at work, pay relative to others and according to their own contingent household needs.All of these factors provide the context for the impact of pay on well-being.However, it was also suggested that increased income, especially at and above the "pivot (LW) range", has the capacity to deliver multiple gains outside the workplace as well as within in terms of enhanced capabilities and satisfaction.The findings tentatively suggest that a policy context framed by narrow economic cost-benefit analyses (e.g.wage costs vs employment) might miss the positive social externalities generated by higher pay.
Opportunities exist to extend our research.First, we acknowledge the generalisation problems associated with qualitative, exploratory studies, especially with relatively small (cell) numbers.More specifically, related to our methodology, we accept that the categories used are not to be reified and may be tendentious but they are generated and deployed in order to best capture and represent the overall subjective evaluation of pay efficiency by the participants themselves.The exploratory analysis of a body of narratives may be effectively supported by categorisation but deeper examination might be enhanced by alternative approaches such as semi-structured interviews.
Second, Figure 1 is provided for illustrative purposes to support the empirical finding that, notwithstanding differences in personal circumstances, a "pivot range" might be identified that distinguishes basic from decent income in terms of potential thriving.The approach was to apply a thematic analysis via manual and NVivo techniques to identify any consistencies within the diverse sample that shed light on the link between income and self-perceived capabilities.No claims can therefore be made about statistical rigour.Furthermore, this research was conducted in New Zealand.Although its insights will be most relevant to similar country contexts, the focus on the relationship between income and capabilities is of more general concern.Subsequent comparative studies could provide more knowledge on the context-specificity of a LW.
We hope our findings offers insights into the dynamic nature of the impact of incomes on individual perceived capability, and future research might apply different methods (e.g.surveys, mix-methods) to test and extend our results.Ideally, too, information about income and career transitions over time would be beneficial.In particular, longitudinal research would enhance understanding of whether a LW has short-term or enduring effects on individual, family, workplace and societal well-being, and could fruitfully inform policy debates around pay.
Note
1. Some participants provided information on their annual income while others (mostly part-time/ casual workers) provided hourly rates.Hourly wages were reconciled to an indicative annual total based on provided average hours of work.
885
LW as an income range for decent work and life Figure 1."Survival" and "decent" income respondents per income range
I
work four-hour shifts in an airport.The only way I can earn extra money is by doing a split shift […] There are no night rates, fuel allowance or double time for delays and public holidays.It's disgraceful![…] I save more when I am actually not going to work in fuel costs than what I actually earn from a four-hour shift.A job is a job and it looks better on paper to have employment regardless of what it is, but for a company to be able to take advantage of this fact is an outrage!(Female, 31-40 years, cashier, NZD32,500 per annum).
It [my pay] doesn't work for me well.I wanted to change my boat but wasn't able to do it this year (Male, 51-65 years, senior manager).
Capabilities (perceived well-being) as a result of income level (n) Annual income (NZ dollars) Struggle Barely enough Low pay Fair Comfortable Really well Total | 7,845.6 | 2017-09-08T00:00:00.000 | [
"Business",
"Economics"
] |
Palladium–poly(ionic liquid) membranes for permselective sonochemical flow catalysis
A R T I C L E I N F O
For the case of heterogeneous catalyst Suzuki-Miyaura carbon-carbon coupling reaction systems, palladium dispersed onto carbon (Pd/C) provides ease of product recovery, a relatively high reaction rate, lower cost, and integration into packed bed reactors or columns [19][20][21][22]. However, not only are high temperatures and loadings necessary to achieve adequate yields, significant levels of metal leaching are observed. Lower palladium loadings have been reported for other solid support materials (these include metal phosphates [23], metal oxides [24], and organic polymers [25]). Despite such heterogeneous systems displaying reduced palladium leaching compared to conventional Pd/C systems, elevated temperatures or microwave heating are still needed to achieve high product yields. Microchannel and capillary reactors are alternatives to packed bed reactor systems, benefiting from lower loadings, high turnover frequencies (TOFs), and low levels of catalyst leaching [26,27]. Their drawback is the small active catalytic areas available in such devices limiting the overall reaction product capacity compared to conventional larger-scale packed bed flow reactors.
Hybrid catalyst-membrane systems can potentially address the aforementioned limitations. Poly(ionic liquids) supported onto membranes have been prepared by photo-initiated grafting of imidazolium groups onto polyethersulfone membranes [28]. The resulting membrane supported palladium-poly(ionic liquid) catalysts yield rapid TOFs (147 h −1 , moles of product per mole of palladium per hour), but operate at above ambient temperatures (333 K).
In this article, we describe a continuous flow anisotropic palladium-poly(ionic liquid) catalyst membrane system containing the above advantages of low Pd loading, but which operates at low temperatures (293 K) and delivers comparable performance (TOF = 154 h −1 ). Its fabrication comprises pulsed plasma deposition of a poly(vinylbenzyl chloride) layer onto a membrane to generate surface benzyl chloride groups followed by the Menshutkin reaction to form surface tethered quaternised N-butylimidazole moieties which are subsequently used to complex palladium chloride catalyst to the imidazolium cations [29], Scheme 1.
The pulsed plasma deposition step entails modulating an electrical discharge in the presence of vinylbenzyl chloride gaseous precursor containing a polymerisable carbon-carbon double bond [30,31]. Mechanistically, there are two distinct reaction regimes corresponding to the plasma duty cycle on-and off-periods (typical timescales are of the order of microseconds and milliseconds respectively) [32]. Namely, monomer activation and reactive site generation occur at the substrate surface during each short burst of plasma (via VUV irradiation, ion, or electron bombardment) followed by conventional carbon-carbon double bond polymerization proceeding in the subsequent extended offperiod (in the absence of any VUV-, ion-, or electron-induced damage to the growing film). High levels of precursor functional group structural retention within pulsed plasma deposited nanolayers can be achieved (as confirmed by ToF-SIMS [33] and NMR [34]). Furthermore, by programming the pulsed plasma duty cycle, it is possible to control (i.e. tailor) the surface density of desired chemical groups. Strong covalent attachment of the deposited functional layers to the underlying substrate occurs via free radical sites created at the interface during the onset of plasma exposure (this has allowed for the preparation of flexible substrate heterogeneous catalyst systems [35]). Other distinct advantages include the fact that the plasmachemical approach is quick (single-step), solventless, energy-efficient, and the reactive gaseous nature of the electrical discharge provides conformality to the host substrate membrane material [36,37].
The palladium-poly(ionic liquid) catalyst membranes fabricated in Scheme 1. Preparation of anisotropic palladium-poly(ionic liquid) catalyst membrane by pulsed plasma deposition of poly(vinylbenzyl chloride) layer (diagonally hatched shading) onto PTFE membrane followed by solution phase quaternisation with N-butylimidazole and then complexation to palladium(II) catalyst.
the present study have been employed for the Suzuki-Miyaura carbon-carbon coupling reaction, Scheme 2.
Preparation of palladium-poly(ionic liquid) catalyst membrane
A cylindrical glass reactor (5.5 cm diameter, 475 cm 3 volume) housed within a Faraday cage was used for plasmachemical deposition. This was connected to a 30 L min −1 rotary pump (model E2M2, Edwards Vacuum Ltd.) via a liquid nitrogen cold trap (base pressure less than 2 × 10 −3 mbar and air leak rate better than 6 × 10 −9 mol s −1 ) [38]. A copper coil wound around the reactor (4 mm diameter, 10 turns, located 10 cm downstream from the gas inlet) was connected to a 13.56 MHz radio frequency (RF) power supply via an L-C matching network. A signal generator (model TG503, Thurlby Thandar Instruments Ltd.) was used to trigger the RF power supply. Prior to film deposition, the whole apparatus was thoroughly scrubbed using detergent and hot water, rinsed with propan-2-ol (+99.5 wt.%, Fisher Scientific UK Ltd.), oven dried at 423 K, and further cleaned using a 50 W continuous wave air plasma at 0.2 mbar for 30 min. Silicon substrate preparation comprised successive sonication in propan-2-ol and cyclohexane (+99.7 wt.%, Sigma-Aldrich Co.) for 15 min prior to insertion into the centre of the chamber. Further cleaning entailed running a 50 W continuous wave air plasma at 0.2 mbar for 30 min prior to film deposition. Polytetrafluoroethylene (PTFE) membrane film (180 ± 10 μm thickness, 5 ± 2 μm surface pore size determined by SEM, Mupor Ltd) was used following rinsing in a 1 : 1 vol. ratio mixture of cyclohexane and propan-2-ol. Vinylbenzyl chloride (mixture of 3and 4-isomers, 97 wt.%, Sigma-Aldrich Co.) precursor was loaded into a sealable glass tube, degassed via several freeze-pump-thaw cycles, and then attached to the reactor. Monomer vapour was then allowed to purge the apparatus at a pressure of 0.15 mbar for 15 min prior to electrical discharge ignition. Pulsed plasma deposition was performed at 293 K using a duty cycle on-period (t on ) of 100 μs and a duty cycle off-period (t off ) of 4 ms in conjunction with a RF generator power output (P on ) of 30 W [39]. Upon plasma extinction, the precursor vapour was allowed to continue to pass through the system for a further 15 min, and then the chamber was evacuated to base pressure followed by venting to atmosphere. Deposited layer thicknesses were approximately 2.3 ± 0.2 μm (deposition rate 160 ± 10 nm min −1 ).
Characterisation
Film thickness values of pulsed plasma poly(vinylbenzyl chloride) deposited onto silicon wafers were measured using a spectrophotometer (model nkd-6000, Aquila Instruments Ltd.). Transmittance-reflectance curves (350-1000 nm wavelength range) were acquired for each sample and fitted to a Cauchy model for dielectric materials [41] using a modified Levenberg-Marquardt algorithm [42].
Reflection-absorption infrared (RAIRS) spectra of pulsed plasma poly(vinylbenzyl chloride) deposited onto silicon wafers were acquired using a FTIR spectrometer (Spectrum One, Perkin-Elmer Inc.) fitted with a liquid nitrogen cooled MCT detector operating at 4 cm −1 resolution across the 400-4000 cm −1 range. The instrument included a variable angle reflection-absorption accessory (Specac Ltd.) set to a grazing angle of 66°for silicon wafer substrates and adjusted for ppolarization. Attenuated total reflectance (ATR) infrared spectra of vinylbenzyl chloride precursor were obtained using a Golden Gate accessory (Specac Ltd.).
Surface elemental compositions of pulsed plasma poly(vinylbenzyl chloride) deposited onto silicon wafers and PTFE membrane were measured by X-ray photoelectron spectroscopy (XPS) using a VG ESCALAB II electron spectrometer equipped with a non-monochromated Mg Kα 1,2 X-ray source (1253.6 eV) and a concentric hemispherical analyser. Photoemitted electrons were collected at a take-off angle of 20°from the substrate normal, with electron detection in the constant analyser energy mode (CAE, pass energies of 20 and 50 eV for high resolution and survey spectra respectively). Experimentally determined instrument sensitivity factors were C(1s) : O(1s) : N(1s) : Cl (2p) : Pd(3d) : F(1s) equals 1.00 : 0.35 : 0.70 : 0.37 : 0.06 : 0.25 respectively. The core level binding energy envelopes were fitted using Gaussian peak shapes with fixed full-width-half-maxima (fwhm) and linear backgrounds [43,44]. All binding energies were referenced to the C(1s) -C x H y hydrocarbon peak at 285.0 eV [45]. Measurements were repeated at least 3 times.
Palladium loading on the catalyst membrane, and amount leached during multiple use studies was measured by ICP-OES (Vista MPX CCD Simultaneous axial ICP-OES, Varian Inc.). Calibration of detected palladium signal intensity to actual palladium content in solution was carried out to an accuracy of 0.01 ppm using reference samples at 1, 2, and 5 ppm, prepared from a 1000 ppm stock solution (26 X 1-Pd(a), MBH Analytical Ltd.) diluted in high purity water (resistance of 18.2 MΩ). Analyte solutions were digested in 5 mL of sulphuric and perchloric acids (95 wt.% Normapur ® , and 65 wt.% Normatom ® respectively, VWR International Ltd.) using a wet digestion method followed by dilution to 25 mL in high purity water. The detection limit of palladium in these catalysis experiment analyte solutions was 0.1 ppm on a mass basis. Palladium membranes were treated in the same manner to remove the palladium containing poly(ionic liquid)-plasma polymer layer from the PTFE membrane substrate.
Suzuki-Miyaura carbon-carbon coupling reaction
For palladium-poly(ionic liquid) catalyst membrane heated batch reactor studies of catalysis, B10 borosilicate sample flasks were rinsed with ethanol (+99.8 wt.%, Fisher Scientific UK Ltd.), thoroughly scrubbed using detergent and hot water, followed by immersion for 1 h in a solution comprising sodium hydroxide (99.2 wt.%, Fisher Scientific UK Ltd.), propan-2-ol (+99.5 wt.%, Fisher Scientific UK Ltd.), and high purity water (mass ratio 1 : 20 : 5) in order to remove any organic residue. The flasks were then thoroughly scrubbed using detergent and hot water, rinsed in propan-2-ol, and oven dried at 423 K. A final wash step consisted of immersion for 1 h in a 1 wt% nitric acid bath (70 wt.% in water, Fisher Scientific UK Ltd., further diluted in high purity water), followed by thorough rinsing with high purity water and oven drying at 423 K, to ensure that no palladium transfer occurred between solutions. This rigorous cleaning procedure was undertaken before each reaction. Scheme 2. Suzuki-Miyaura carbon-carbon coupling reaction of iodobenzene with phenylboronic acid using palladium-poly(ionic liquid) catalyst membrane. 0.50 ± 0.05 mmol of iodobenzene (98 wt%, Sigma-Aldrich Co.), 0.75 ± 0.01 mmol of phenylboronic acid (95 wt%, Sigma-Aldrich Co.), and 0.99 ± 0.01 mmol of potassium carbonate (98 wt.%, Sigma-Aldrich Co.) were weighed out into a borosilicate flask. 3 mL of a solution comprising ethanol (+99.8 wt.%, Fisher Scientific UK Ltd.) and high purity water in a 2 : 1 vol. ratio was added, the flask was agitated to dissolve the potassium carbonate, and then the catalyst membrane was added (47.9 ± 3.4 mg catalyst membrane, with 0.304 ± 0.022 μmol of palladium(II) or 0.067 wt.% (32.3 ± 2.3 μg) initial palladium loading as measured by ICP-OES analysis). The flask was fitted to a water cooled condenser, and immersed in a water bath at 343 K for 30 min for the reaction to proceed. The reaction solutions were not stirred in order to prevent abrasive damage to the membrane material (it should be noted this means the reported turnover frequencies (TOF, moles of product per mole of palladium per hour) are a lower estimate as diffusion may also limit reaction rates measured). Afterwards, the flask was removed from the water bath and allowed to cool to room temperature, followed by removal of the catalyst membrane and the solution decanted. The flask was then rinsed twice with 1 mL of chloroform (99.8 wt.%, Fisher Scientific UK Ltd.) and the washings were added to the decanted solution. Solutions for gas chromatography (GC) analysis were extracted three times with 3 mL of chloroform, spiked with 4 mg mL −1 decane (0.1 g, +99 wt.%, Sigma-Aldrich Co.), and made up to 25 mL with dichloromethane (99.99 wt.%, Fisher Scientific UK Ltd.). Solutions for ICP-OES analysis were sealed in screw topped borosilicate glass vials fitted with a PTFE/ silicone slit septum. Catalyst membranes were dried in air at 293 K for a minimum of 1 h before reuse with a fresh reactant solution each time. As a control experiment, 0.50 ± 0.05 mmol of 4-methoxyiodobenze (98 wt.%, Sigma-Aldrich Co.) was substituted for iodobenzene to rule out homocoupled by-product formation.
For Suzuki-Miyaura carbon-carbon coupling reactions under sonicated flow conditions at room temperature, a custom gravity-fed flow cell was used with the membrane sealed using a compression fitting, Fig. 1 and Supplementary information Fig. S1. 1.0 ± 0.1 mmol of iodobenzene, 1.50 ± 0.01 mmol of phenylboronic acid, and 2.00 ± 0.01 mmol of K 2 CO 3 were added to the reactor along with 6 mL of a solution comprising ethanol and high purity water in a 2 : 1 vol. ratio (the area of exposed catalyst was 3.6 cm 2 , with 0.61 μmol of palladium(II) (64.6 ± 2.3 μg) initial palladium loading as measured by ICP-OES analysis). The reactor was then immersed in an ultrasonic bath (Clifton Ultrasonic Bath, Nickel-Electro Ltd.) at 20 ± 2°C for 1 h. Afterwards, the reactor was removed from the ultrasonic bath, the product solutions and residual reaction solutions were decanted and stored separately. As reported in the Results and Discussion sections, the membrane setup preferentially separates biphenyl product (and some remaining iodobenzene reactant) from the phenylboronic acid reactant and reaction solvents. The product solution glassware was rinsed twice with 1 mL of chloroform and the washings were added to the decanted product solution. Residual reaction solutions were extracted for GC analysis three times with 3 mL of chloroform, spiked with 4 mg mL −1 decane (0.1 g), and made up to 25 mL with dichloromethane. Product solutions (which did not require extraction) were spiked with 4 mg mL −1 decane (0.1 g), and made up to 25 mL with dichloromethane. As a control experiment, 1.0 ± 0.1 mmol of 4methoxyiodobenze (98 wt.%, Sigma-Aldrich Co.) was substituted for iodobenzene to rule out homocoupled by-product formation.
GC (Bruker Corp. Scion 456 gas chromatograph with a flame ionization detector (FID) fitted with a siloxane capillary column (5% phenyl / 95% dimethylpolysiloxane BP-5), length of 30 m, internal diameter of 0.25 mm, coating thickness of 0.25 μm) was conducted using highperformance liquid chromatography (HPLC) autosampler vials with a PTFE/silicone slit septum at a starting temperature of 373 K, a hold time of 4 min, a ramp rate of 20 K min −1 , and a final temperature of 473 K with a hold time of 9 min. Product yield was calculated from GC traces as the percentage conversion of haloarene to desired coupled product in the recovered reaction solution, all other reagents were used in excess. GC-MS (Shimadzu Europa Gmbh, GCMS-QP2010 Ultra fitted with an Rxi ® -5Sil column, length of 10 m, internal diameter of 0.15 mm, column coating thickness of 0.15 μm) was conducted using high-performance liquid chromatography (HPLC) autosampler vials with a PTFE/silicone slit septum at a starting temperature of 303 K and a hold time of 1 min, a ramp rate of 50 K min −1 , and a final temperature of 573 K, with a hold time of 5 min.
Pulsed plasma deposited poly(vinylbenzyl chloride)
Infrared spectroscopy of pulsed plasma deposited poly(vinylbenzyl chloride) films confirmed a high level of benzyl chloride functional group structural retention [46][47][48], Fig. 2. Disappearance of the monoalkyl vinyl ]CH 2 wag vibration mode (906 cm −1 ) associated with the precursor molecule confirmed selective vinyl group polymerisation during pulsed plasma deposition [49]. Characteristic para substituted benzene ring absorbances can be found at 1603 cm −1 , and 1490 cm −1 [49]. The band at 1263 cm −1 for both the precursor and plasma deposited polymer corresponds to the CleCH 2 e wag mode [49]. This halogen-containing group is a prerequisite for quaternisation leading to the formation of a poly(ionic liquid) layer.
XPS analysis of pulsed plasma deposited poly(vinylbenzyl chloride) onto PTFE membrane detected carbon, chlorine, and low levels of oxygen (attributed to a small amount of atmospheric water absorption [50]), Table 1 and Supplementary information Fig. S2. The absence of fluorine signal confirmed complete coverage of the underlying PTFE membrane (no pinholes).
Palladium-poly(ionic liquid) catalyst membrane
Quaternisation of the pulsed plasma deposited poly(vinylbenzyl chloride) films with N-butylimidazole resulted in the appearance of nitrogen XPS signal at the surface, Table 1 and Supplementary information Fig. S3. The N(1s) binding envelope of the quaternised films could be fitted to a main nitrogen environment at 401.9 ± 0.1 eV corresponding to two equivalent nitrogen centres in positively charged imidazolium rings [39,51,52], Fig. 3 and Scheme 1. The slight shoulder towards lower N(1s) binding energy can be attributed to the reaction of N-butylimidazole with trapped free radicals contained within the plasma deposited layer [36,53]. The Cl(2p) peak envelope could be fitted to two different chlorine atom environments with Cl(2p 3/2 ) binding energy values of 197.3 ± 0.1 eV and 200.6 ± 0.2 eV corresponding to chloride anions and non-quaternised unreacted benzyl chloride groups respectively [54], Fig. 4. Based on these two Cl(2p 3/2 ) binding energy environments, the level of surface quaternisation was calculated to be 52 ± 9% (this is most likely to be an underestimate due to the XPS sampling depth (2-5 nm) also probing the sub-surface [55,56]). Infrared spectroscopy of the quaternised membranes did not detect any contributions from characteristic positively charged imidazolium ring absorbances at 1350 cm −1 and 1180 cm −1 [57], thereby indicating that only near-surface quaternisation had occurred (i.e. very low concentration relative to the bulk underlying pulsed plasma deposited poly(vinylbenzyl chloride) layer), Fig. 2. This is consistent with the shallower sampling depth for XPS.
Immersion of the quaternised films into aqueous palladium(II) chloride solution gave rise to the appearance of palladium XPS signals, signifying surface complexation, Table 1. No significant change in binding energy was observed in the N(1s) XPS signal at 401.9 ± 0.1 eV, which is consistent with previous studies for palladium(II) containing ionic liquids, Fig. 3 [58]. This was accompanied by the relative Cl(2p 3/2 ) chloride anion peak component at 197.3 ± 0.1 eV within the overall Cl(2p) envelope increasing (as well as a shift towards higher binding energy) due to the incorporation of additional chloride anions accompanying the palladium(II) catalyst complexation process, Fig. 4.
Heated batch catalysis
The Suzuki-Miyaura carbon-carbon coupling reaction product yield at 343 K for the palladium-poly(ionic liquid) catalyst membrane was measured to be 77 ± 7% (apart from unreacted iodobenzene and Table 1 XPS relative atomic compositions at each stage of palladium-poly(ionic liquid) catalyst membrane preparation, and for a control sample comprising non-quaternised poly(vinylbenzyl chloride) functionalised PTFE membrane exposed to PdCl 2 solution and then rinsed in high purity water. . Due to spin-orbit coupling, Cl(2p 1/2 ) components are shifted by 1.6 eV to higher binding energy relative to the Cl(2p 3/2 ) components, with a Cl(2p 3/2 ):Cl(2p 1/2 ) peak area ratio equal to 2:1 [54]. Signal intensity is counts per second.
phenylboronic acid, no other compounds exceeded 1% of the GC biphenyl product peak area), with a catalyst turnover frequency of 3097 ± 323 h −1 (TOF, moles of product per mole of palladium per hour). Over 4 cycles, palladium leaching into the reaction solution was measured to be 83 ± 33 ppb h −1 cm −3 (for the initial membrane equivalent reaction solution loading of 11.1 ± 8 ppm on a mass basis (or 0.061 ± 0.004 mol%)this is equivalent to ∼1% of the Pd present leaching from an already sub 0.1 mol% catalyst loading during one reaction cycle at 343 K). This is consistent with the negligible drop in product yield with reaction cycle number, Supplementary information Fig. S4. As a control, the Suzuki-Miyaura coupling reaction was run using 4methoxyiodobenzene and phenylboronic acid reactants under similar reaction conditions in order to rule out the possibility of homocoupled by-product formation. GC-MS analysis of the obtained products showed the presence of only 4-methoxybiphenyl, and an absence of homocoupled biphenyl by-product.
Room temperature sonicated membrane flow catalysis
The practical viability of these palladium-poly(ionic liquid) catalyst membranes for continuous flow Suzuki-Miyaura carbon-carbon coupling reactions was demonstrated by allowing the reaction solution to permeate into the membrane during sonication at room temperature (sonication speeded up the liquid flow rate). After 1 h of reaction time, 17 mol% (0.17 mmol of 1.0 mmol) of the iodobenzene reactant present in the starting solution had been transported through the catalytic membrane as either product or unreacted iodobenzene. The collected solution contained only aromatic organic compounds (54 mol% biphenyl product, 42 mol% iodobenzene, and 4 mol% phenylboronic acid no other components exceeded 3% of the GC biphenyl product peak area). This indicates that iodobenzene (either unreacted or as carbon-carbon coupled biphenyl product) preferentially passes through the membrane relative to phenylboronic acid. The TOF (calculated as before) for biphenyl product formation was found to be 154 h −1 .
For this system also, a control Suzuki-Miyaura coupling reaction run using 4-methoxyiodobenzene and phenylboronic acid reactants under similar experimental conditions showed an absence of homocoupled biphenyl by-product (only 4-methoxybiphenyl detected), thereby ruling out the possibility of homocoupled by-product formation.
Discussion
Pulsed plasmachemical functionalisation of solid surfaces using polymerisable functional precursors is a well-established, solventless, single-step, conformal, and substrate-independent technique, which offers the advantage of high levels of functional group retention [59], thus making it well-suited for the preparation of membrane supported poly(ionic liquid) catalysts. Infrared spectroscopy and XPS analyses have shown that there is a high level of chloro-group and benzene ring retention during pulsed plasma deposition of vinylbenzyl chloride precursor, thereby facilitating the subsequent step of quaternisation with N-butylimidazole to form a poly(ionic liquid) layer, Scheme 1, Figs. 2-4. Complexation of this surface to palladium chloride yields a palladium containing poly(ionic liquid) catalyst membrane.
In the case of the flow membrane reactor mode of operation, ambient temperature sonicated Suzuki-Miyaura reactions gave a calculated TOF value for biphenyl product formation of 154 h −1 . This is comparable with previously reported Suzuki-Miyaura flow reactors (TOF 10 1 to 10 4 h −1 ) [24,[26][27][28]67], however such systems require high pressures or elevated temperatures. The beneficial preferential separation of iodobenzene reactant and biphenyl product from the phenylboronic acid and reaction solvents in the present study can be ascribed to the selective solubility of the prepared membrane system, Scheme 3. Ionic liquids tend to solvate a wide range of species including unsubstituted benzene and haloarenes [68,69], therefore iodobenzene can diffuse directly through the poly(ionic liquid) layer, accounting for its high concentration in the product solution. Comparatively, phenylboronic acid is insoluble in some imidazolium ionic liquids and will therefore predominantly remain behind in the reactant ethanol:water solvent phase [70]. The absence of transportation for the water and ethanol reaction mixture constituents through the catalyst membrane is most probably due to the immiscibility of ethanol and water with imidazole containing ionic liquids and the pulsed plasma deposited poly(vinylbenzyl chloride) interfacial layer at ambient temperature [71][72][73], as well as liquid repellency from the underlying PTFE membrane (surface tension of water = 72.8 mN m −1 [74], surface tension of ethanol = 22.3 mN m −1 [74], and surface energy of PTFE = 20.0 mN m −1 [75]). Therefore, the outlined approach not only allows for the palladium catalysed Suzuki-Miyaura carbon-carbon coupling reaction to proceed at room temperature under flow conditions, but also concurrently separates the solvent mixture from the aromatic product phase, thereby eliminating any need for post reaction separation of product from reaction solvents.
Conclusion
Plasmachemical surface functionalisation with benzyl chloride groups provides a quick, low cost approach for fabricating anisotropic palladium-poly(ionic liquid) catalyst membrane systems. This Scheme 3. Selective separation of iodobenzene and Suzuki-Miyaura carbon-carbon coupling reaction biphenyl product from phenylboronic acid and solvent mixture using palladium-poly(ionic liquid) membrane flow reactor. comprises pulsed plasma deposition of a poly(vinylbenzyl chloride) layer onto a membrane to generate surface benzyl chloride groups followed by quaternisation with N-butylimidazole to form a surface tethered poly(ionic liquid) which subsequently is complexed to palladium(II) catalyst species. These palladium-poly(ionic liquid) catalyst coated membrane substrates have been evaluated in a heated batch reactor for the Suzuki-Miyaura carbon-carbon coupling reaction, and shown to exhibit 77 ± 7% product yield (343 K, 0.5 h, 0.06 mol % Pd loading) and > 99% selectivity, as well as retaining catalytic activity over extended periods in conjunction with low levels of palladium catalyst leaching (from an already small (sub 0.1 mol% Pd) catalyst loading). Their usage in a sonochemical anisotropic membrane flow reactor setup operating at ambient temperature has shown that this facilitates the selective separation of the desired Suzuki-Miyaura carbon-carbon coupling reaction biphenyl product (and some remaining iodobenzene reactant) from the solvent mixture containing unreacted phenylboronic acid.
Conflicts of interest
There are no conflicts of interest to declare.
Data access
Data created during this research can be accessed at https:// collections.durham.ac.uk. | 5,700.8 | 2018-05-20T00:00:00.000 | [
"Chemistry"
] |
TorchSparse++: Efficient Training and Inference Framework for Sparse Convolution on GPUs
Sparse convolution plays a pivotal role in emerging workloads, including point cloud processing in AR/VR, autonomous driving, and graph understanding in recommendation systems. Since the computation pattern is sparse and irregular, specialized high-performance kernels are required. Existing GPU libraries offer two dataflow types for sparse convolution. The gather-GEMM-scatter dataflow is easy to implement but not optimal in performance, while the dataflows with overlapped computation and memory access (e.g.implicit GEMM) are highly performant but have very high engineering costs. In this paper, we introduce TorchSparse++, a new GPU library that achieves the best of both worlds. We create a highly efficient Sparse Kernel Generator that generates performant sparse convolution kernels at less than one-tenth of the engineering cost of the current state-of-the-art system. On top of this, we design the Sparse Autotuner, which extends the design space of existing sparse convolution libraries and searches for the best dataflow configurations for training and inference workloads. Consequently, TorchSparse++ achieves 2.9x, 3.3x, 2.2x and 1.7x measured end-to-end speedup on an NVIDIA A100 GPU over state-of-the-art MinkowskiEngine, SpConv 1.2, TorchSparse and SpConv v2 in inference; and is 1.2-1.3x faster than SpConv v2 in mixed precision training across seven representative autonomous driving benchmarks. It also seamlessly supports graph convolutions, achieving 2.6-7.6x faster inference speed compared with state-of-the-art graph deep learning libraries.
INTRODUCTION
Sparse convolution [12,18] plays a crucial role in a variety of cutting-edge applications, including augmented/virtual reality (AR/VR), autonomous driving, and recommendation systems.For instance, in advanced driver assistance systems (ADAS) and autonomous driving technology, data is collected from 3D sensors in the form of 3D point clouds.These point clouds often exhibit an exceptionally high spatial sparsity, with up to 99.99% spatial sparsity.In such cases, employing dense 3D convolutions for point cloud processing becomes inefficient.Likewise, social media graphs, like those found on platforms such as Twitter, exhibit even greater sparsity.As an illustration, the adjacency matrix of Twitter's social graph contains only a minuscule fraction, approximately 0.000214%, of the possible connections [56].Therefore, there is a urgent need for efficient inference and training system for these sparse workloads.
Sparse convolution modifies the definition of regular convolution by only performing computation at a sparse set of output locations rather than the entire feature map.It is arguably the most important building block for almost all state-of-the-art 3D perception models (e.g.3D semantic segmentation [10,31,41], 3D object detection [1,6,8,17,50,51,53,58], 3D reconstruction [9], Another recently used large dataset riving [57], but with fewer classes, is not .
ITTI dataset [17] provides synthetically tial images with depth information and annotation.The depth information can nerate point clouds.However, these point w the same characteristics as a real rotatding defects like reflections and outliers.ese datasets, our dataset combines a large points, a large variety of classes, and senerated by a commonly employed sensor us driving, which is distinct from all pubasets, also shown in Table 1.
ticKITTI Dataset based on the odometry dataset of the nchmark [19] showing inner city traffic, but also highway scenes and countryside lsruhe, Germany.The original odomets of 22 sequences, splitting sequences ng set, and 11 to 21 as test set.For conoriginal benchmark, we adopt the same aining and test set.Moreover, we do not original odometry benchmark by providr the training data.Overall, we provide ans for training and 20 351 for testing, y a wide margin the largest dataset pubuse the KITTI dataset as a basis for our lae it allowed us to exploit one of the largest ns of raw point cloud data captured with a re expect that there are also potential synr annotations and the existing benchmarks le the investigation and evaluation of adirections, such as the usage of semantics ometry estimation.other datasets (cf.Table 1), we provide tial point clouds generated with a comotive LiDAR, i.e., the Velodyne HDLly available datasets, like Paris-Lille-3D [6], also use such sensors, but only proed point cloud of the whole acquired seindividual scans of the whole sequence, e we provide the individual scans of the one can also investigate how aggregating ive scans influences the performance of entation and use the information to recjects.8 classes, where we ensured a large overth the Mapillary Vistas dataset [39] and t [10] and made modifications where nec- essary to account for the sparsity and vertical field-of-view.
More specifically, we do not distinguish between persons riding a vehicle and the vehicle, but label the vehicle and the person as either bicyclist or motorcyclist.
We furthermore distinguished between moving and nonmoving vehicles and humans, i.e., vehicles or humans gets the corresponding moving class if they moved in some scan while observing them, as shown in the lower part of Figure 2. All annotated classes are listed in Figure 3 and a more detailed discussion and definition of the different classes can be found in the supplementary material.In summary, we have 28 classes, where 6 classes are assigned the attribute moving or non-moving, and one outlier class is included for erroneous laser measurements caused by reflections or other effects.
The dataset is publicly available through a benchmark website and we provide only the training set with ground truth labels and perform the test set evaluation online.We furthermore will also limit the number of possible test set evaluations to prevent overfitting to the test set [55].
Labeling Process
To make the labeling of point cloud sequences practical, we superimpose multiple scans above each other, which conversely allows us to label multiple scans consistently.To this end, we first register and loop close the sequences using an off-the-shelf laser-based SLAM system [5].This step is needed as the provided information of the inertial navigation system (INS) often results in map inconsistencies, i.e., streets that are revisited after some time have differ- Visualization of 3D auto labels on the Waymo Open Dataset val set (best viewed in color with zoom in).Object points are colored by object types with blue for static vehicles, red for moving vehicles and orange for pedestrians.Boxes are colored as: green for true positive detections, red for false positives and cyan for ground truth boxes in the cases of false negatives.transform segmentation iterative tta<EMAIL_ADDRESS>Comparing with alternative designs of dynamic object auto labeling.Metrics are box accuracy with 3D IoU thresholds 0.7 and 0.8 for vehicles on the Waymo Open Dataset val set.
Effects of temporal context sizes for object auto labeling Table 8 studies how the context frame sizes influence the box prediction accuracy.We also compare with our singleframe (S-MVF++) and multi-frame detectors (M-MVF++) to show extra gains the object auto labeling can bring.We can clearly see that using large temporal contexts improves the performance while using the entire object track (the last row) leads to the best performance.Note that for the static object model, we use the detector box with the highest score for the initial coordinate transform, which gives our auto labeling an advantage over frame-based method.
Qualitative Analysis
In Fig. 6, we visualize the auto labels for two representative scenes in autonomous driving: driving on a road with parked cars, and passing a busy intersection.Our model is able to accurately recognize vehicles and pedestrians in
Method
Context frames static dynamic Acc<EMAIL_ADDRESS>Effects of temporal context sizes for object auto labeling.Metrics are the box accuracy at 3D IoU=0.7,0.8 for vehicles in the WOD val set.Dynamic vehicles have a higher accuracy because they are closer to the sensor than static ones.challenging cases with occlusions and very few points.The busy intersection scene also shows a few failure cases including false negatives of pedestrians in rare poses (sitting), false negatives of severely occluded objects and false positive for objects with similar geometry to cars.Those hard cases can potentially be solved with added camera information with multi-modal learning.
Conclusion
In this work we have introduced 3D Auto Labeling, a state-of-the-art offboard 3D object detection solution using point cloud sequences as input.The pipeline leverages the long-term temporal data of objects in the 3D scene.Key to our success are our object-centric formulation, powerful offboard multi-frame detector and novel object auto labeling models.Evaluated on the Waymo Open Dataset, our solution has shown significant gains over prior art onboard 3D detectors, especially with high standard metrics.A human label study has further shown the high quality of the auto labels reaching comparable performance as experienced humans.Moreover, the semi-supervised learning experiments have demonstrated the usefulness of the auto labels for student training in cases of low-label and unseen domains.Semantic segmentation and scene completion of 3D point clouds are usually studied separately [2,3], but with the emergence of large-scale datasets such as ScanNet [4] and SemanticKITTI [1], researchers have discovered a deep intertwining of an object's semantics with its underlying geometry, and since, have begun exploiting this with the joint learning of semantic segmentation and scene completion to boost model performance [5].For instance, speculating that an object occluded by vehicles and surrounded by leaves is a trunk simplifies the task of inferring it's shape.Conversely, inferring the shape of a pole-like object forms a prior on it's semantic class being a trunk rather than a wall.While previous semantic scene completion methods built on dense 2D or 3D convolutional layers have done well in small-scale indoor environments, they have struggled to maintain their accuracy and efficiency in outdoor environments for several reasons.For one, dense 2D convolutional methods that thrived in the feature rich 2D image space are no longer sufficient when tackling large and sparse LiDAR scans that contain far fewer geometric and semantic descriptors.Furthermore, the dense 3D convolution becomes extremely wasteful in terms of computation and memory since the majority of the 3D volume of interest is in fact empty.Thereby, our main contributions are listed as the following: (a) a sparse tensor based neural network architecture that efficiently learns features from sparse 3D point cloud data and jointly solves the coupled scene completion and semantic segmentation problem; (b) a novel geometric-aware 3D tensor segmentation loss; (c) a multi-view fusion and semantic post-processing strategy addressing the challenges of distant or occluded regions and small-sized objects.Given a single sparse point cloud frame, our model predicts a dense 3D occupancy cuboid with semantic labels assigned to each voxel cell (as shown in Fig. 1), generating rich information of the 3D environment that is not contained in the original input such as gaps between LiDAR scans, occluded regions and future scenes.
In order to effectively complete occluded voxel regions from LiDAR scans, we focus on exploiting the geometrical relationship of the 3D points both locally and globally.In this work, we utilize point-wise normal vectors as a geometrical feature encoding to guide our model in filling the gaps according to the object's local surface convexity.We also leverage a LiDAR-based flipped Truncated Signed Distance Function (fTSDF [5]) computed from a spherical range image as a spatial encoding to differentiate free, occupied and occluded space of a scene.As for future scenes, because these regions are far from the vehicle and are primarily road or other forms of terrain, we propose a 2D variant of the sparse semantic scene completion network to support the construction of the 3D scene via multi-view fusion with Bird's Eye View (BEV) semantic map predictions.To tackle sparsity, we leveraged the Minkowski Engine [6], an auto-differentiation library for sparse tensors to build our 2D and 3D semantic scene completion network.We have also adopted a combined geometric inspired semantic segmentation loss to improve the accuracy of semantic label predictions.Since our network is trained in a complex real-world autonomous driving dataset with 20 classes of dynamic and static objects, and the input data is simply a voxelized LiDAR point cloud appended with geometrical and spatial feature encodings, our model can be deployed on-the-go with various LiDAR sensors.We demonstrate this by applying our method to unseen real-world voxel data, which yields reasonable qualitative results.Our experiments show that our model outperforms all baseline methods by a large margin, with exceptional performance in the prediction of small, under-represented class categories such as bicycles, pedestrians, traffic signs and more.
Related Works
We review the related works across four major areas: volume reconstruction, point cloud segmentation, semantic scene completion, and multi-view fusion.
Volume Reconstruction.There are several approaches to inferring complete volumetric occupancy of shapes and scenes from partial or sparse geometric data.Efficient methods based on object symmetry [7,8] and plane fitting [9] apply for small non-complex completion tasks.In larger Semantic segmentation and scene completion of 3D point clouds are usually studied separately [2,3], but with the emergence of large-scale datasets such as ScanNet [4] and SemanticKITTI [1], researchers have discovered a deep intertwining of an object's semantics with its underlying geometry, and since, have begun exploiting this with the joint learning of semantic segmentation and scene completion to boost model performance [5].For instance, speculating that an object occluded by vehicles and surrounded by leaves is a trunk simplifies the task of inferring it's shape.Conversely, inferring the shape of a pole-like object forms a prior on it's semantic class being a trunk rather than a wall.While previous semantic scene completion methods built on dense 2D or 3D convolutional layers have done well in small-scale indoor environments, they have struggled to maintain their accuracy and efficiency in outdoor environments for several reasons.For one, dense 2D convolutional methods that thrived in the feature rich 2D image space are no longer sufficient when tackling large and sparse LiDAR scans that contain far fewer geometric and semantic descriptors.Furthermore, the dense 3D convolution becomes extremely wasteful in terms of computation and memory since the majority of the 3D volume of interest is in fact empty.Thereby, our main contributions are listed as the following: (a) a sparse tensor based neural network architecture that efficiently learns features from sparse 3D point cloud data and jointly solves the coupled scene completion and semantic segmentation problem; (b) a novel geometric-aware 3D tensor segmentation loss; (c) a multi-view fusion and semantic post-processing strategy addressing the challenges of distant or occluded regions and small-sized objects.Given a single sparse point cloud frame, our model predicts a dense 3D occupancy cuboid with semantic labels assigned to each voxel cell (as shown in Fig. 1), generating rich information of the 3D environment that is not contained in the original input such as gaps between LiDAR scans, occluded regions and future scenes.
In order to effectively complete occluded voxel regions from LiDAR scans, we focus on exploiting the geometrical relationship of the 3D points both locally and globally.In this work, we utilize point-wise normal vectors as a geometrical feature encoding to guide our model in filling the gaps according to the object's local surface convexity.We also leverage a LiDAR-based flipped Truncated Signed Distance Function (fTSDF [5]) computed from a spherical range image as a spatial encoding to differentiate free, occupied and occluded space of a scene.As for future scenes, because these regions are far from the vehicle and are primarily road or other forms of terrain, we propose a 2D variant of the sparse semantic scene completion network to support the construction of the 3D scene via multi-view fusion with Bird's Eye View (BEV) semantic map predictions.To tackle sparsity, we leveraged the Minkowski Engine [6], an auto-differentiation library for sparse tensors to build our 2D and 3D semantic scene completion network.We have also adopted a combined geometric inspired semantic segmentation loss to improve the accuracy of semantic label predictions.Since our network is trained in a complex real-world autonomous driving dataset with 20 classes of dynamic and static objects, and the input data is simply a voxelized LiDAR point cloud appended with geometrical and spatial feature encodings, our model can be deployed on-the-go with various LiDAR sensors.We demonstrate this by applying our method to unseen real-world voxel data, which yields reasonable qualitative results.Our experiments show that our model outperforms all baseline methods by a large margin, with exceptional performance in the prediction of small, under-represented class categories such as bicycles, pedestrians, traffic signs and more.
Related Works
We review the related works across four major areas: volume reconstruction, point cloud segmentation, semantic scene completion, and multi-view fusion.
Volume Reconstruction.There are several approaches to inferring complete volumetric occupancy of shapes and scenes from partial or sparse geometric data.Efficient methods based on object symmetry [7,8] and plane fitting [9] apply for small non-complex completion tasks.In larger multi-sensor fusion [7,27,30], end-to-end navigation [29]).It also exhibits similar computation pattern to (relational) graph convolutions [19,36].Despite achieving dominant performance, the sparse and irregular nature of sparse convolution makes it harder to be processed on GPUs and there is no vendor library support.Dedicated libraries [18,21,40,49,50] with specialized high-performance kernels or even specialized hardware accelerators [14,15,28] are required for sparse convolution.As a result, many industrial driving assistance solutions still prefer pillar-based models [25], which flatten LiDAR points onto the BEV space and process them with a 2D CNN.These approaches cannot take full advantage of 3D geometry from LiDAR and tend to have much worse accuracy.Several pioneering implementations of sparse convolution have adopted different dataflows for this operator.For instance, Spar-seConvNet [18] and SpConv v1 [50] use the vanilla gather-GEMMscatter dataflow.It was improved by TorchSparse [40] that optimizes the gather-scatter paradigm through fusing memory operations and grouping computations adaptively into batches to improve device utilization.Dataflows based on gather-scatter can be implemented using vendor libraries with relative ease.However, they are fundamentally restricted in performance due to the inability to overlap memory access and computation.MinkowskiEngine [12] proposes the fetch-on-demand dataflow, which is optimized by PCEngine [21].Recently, SpConv v2 [49,50] has adapted the implicit GEMM dataflow for dense convolution to the sparse domain, achieving state-of-the-art performance on real-world workloads.Nevertheless, the best representative of these memory-computation overlapped dataflows, implicit GEMM, is extremely hard to implement.The metaprogrammer for SpConv v2 has more than 40k lines of code, making it hard for the community to further improve upon it.
To address the significant challenge of achieving both ease of implementation and state-of-the-art performance, we present TorchSparse++ (Figure 1), a high-performance GPU library that combines the best of both worlds through the Sparse Kernel Generator and the Sparse Autotuner.Tackling a fundamentally sparse and dynamic workload, we propose a general method to adapt existing tensor compilers that are optimized for dense and static workloads, Figure 2: Sparse convolution (Equation 1) on Δ 2 (3): computation is performed only on nonzero inputs.
unlocking their potential to generate kernels that can deal with sparsity and variable workload shapes.On top of the generated kernels, we further extend the design space of existing point cloud libraries.We design a Sparse Autotuner to efficiently search for the best dataflow configurations through group-based tuning for a diverse set of workloads within the enlarged design space.The results of our Sparse Autotuner challenged the conventional design wisdom of using amount of computation, DRAM access or even total runtime for computation kernels as the indicator for end-to-end performance.
BACKGROUND AND MOTIVATION
Without loss of generality, we use point cloud workloads to illustrate the computation pattern of sparse convolution.A point cloud sparse tensor can be defined as an unordered set of points with features: {( , )}. is the quantized coordinates for the th point in the -dimensional space Z . is its -dimensional feature vector in R .Coordinate quantization is done through = ⌊ (raw) /⌋, where is the voxel size vector.Unique operation is further applied to all quantized coordinates.For example, in CenterPoint [53], the point clouds on Waymo [38] are quantized using = [0.1m,0.1m, 0.15m].This means that we will only keep one point within each 0.1m×0.1m×0.15mgrid.
Definition of Sparse Convolution
Following the notations in [40], we define the -dimensional neighborhood with kernel size as Δ () (e.g.x i/o (Figure 2) on the th output point is defined as: where is a binary indicator, is the stride and ∈ R in × out corresponds to the weight matrix for kernel offset ∈ Δ ().
Sparse Convolution Dataflows on GPUs
Current implementations of sparse convolution on GPUs can be categorized into three distinct dataflows (Figure 3).The first is the gather-GEMM-scatter approach, which is weight-stationary and was inspired by early explicit im2col attempts [23] for convolution implementation.The second dataflow is the fetch-on-demand approach, which is a kernel fusion version of gather-GEMM-scatter.Finally, the implicit GEMM approach is an output-stationary alternative inspired by its dense counterpart [11].
Gather-GEMM-Scatter Dataflow.
Early sparse convolution implementations utilized a gather-GEMM-scatter dataflow [18,50].This dataflow is weight-stationary and features an outer host loop over kernel offsets.For each offset ∈ Δ (), we compute maps M = {( , )| = + }, as shown in Figure 4. We gather all input features in , resulting in a |M | × in matrix in DRAM, and multiply it by weight ∈ R in × out .Finally, we scatter the results back to output positions out according to M .For example, since We gather in 0 and in 4 , multiply them by −1,−1 , and scatter the results back to out 1 and out 5 .A variant of this dataflow [40] aims to reduce both computation and data movement time by fusing and reordering memory accesses and grouping computation for different weights.
Gather-GEMM-scatter is straightforward to implement.Following feature gathering, computation for each offset involves a dense matrix multiplication, which can be handled by existing vendor libraries like cuBLAS and cuDNN.Only scatter and gather operations need to be optimized in CUDA.However, this dataflow is fundamentally inefficient due to the lack of overlap between computation and memory access, as illustrated in Figure 3a,b.It is thus impossible to hide data orchestration latency with pipelining.
2.2.2 Fetch-On-Demand Dataflow.The gather-GEMM-scatter implementation requires three separate CUDA kernel calls in each host loop iteration over .An alternative fetch-on-demand dataflow [12,50] (named by [40]) merges the gather, matrix multiplication, and scatter kernel calls into a single CUDA kernel.Instead of materializing the |M | × in gather buffer in DRAM, it fetches { in |( , ) ∈ M } on demand into the L1 shared memory, performs matrix multiplication in the on-chip storage and directly scatters the partial sums (resided in the register file) to corresponding outputs { out |( , ) ∈ M } without first instantiating them in a DRAM scatter buffer.Hong et al. [21] further improve the vanilla fetch-on-demand dataflow by introducing block fusion, where the sequential host loop over is converted to a parallel thread block dimension.As such, the computation of all s is merged into a single kernel.Similar to gather-GEMM-scatter (without adaptive grouping in [40]), the fetch-on-demand dataflow has zero redundant computation.It further overlaps computation with memory access and saves DRAM writes to gather and scatter buffers.
However, it cannot save any DRAM write to the final output tensor, which means | M | out × (4×-10× in real workloads, since each point typically has 4-10 neighbors) larger write-back traffic than the theoretical optimal value.Furthermore, the block-fused fetch-on-demand dataflow [21] suffers from write-back contentions between different threads.For example, both −1,0 and −1,1 in Figure 4 may attempt to write back to out 3 .Therefore, it is necessary to introduce atomic operations to serialize all DRAM writes to the same location.Since gather and scatter operations are now combined into GEMM, the entire computation kernel in the fetchon-demand dataflow must be implemented in CUDA.This is more complex than the gather-GEMM-scatter approach.
Similar to fetch-on-demand, implicit GEMM overlaps computation with memory access (Figure 3).This allows us to hide the memory latency through pipelining.Like im2col in 2D convolution, an implicit GEMM implementation is output-stationary.So it achieves the theoretical minimum DRAM write-back traffic.However, despite having lower DRAM traffic compared to fetch-on-demand, implicit GEMM has non-negligible redundant computation.As shown in Figure 5, we assume that each warp contains four threads.All GPU threads within a warp execute in lockstep.Whenever a thread has a non-empty neighbor at weight , all threads in the warp will either perform computation or waste cycles for that weight.This leads to 34 redundant computation in Figure 5, which is even more than 22 effective MACs in this example.
To address this issue, SpConv v2 excludes unsorted implicit GEMM in their design space and utilizes bitmask sorting to minimize computation overhead.Following the approach taken by DSTC [45], each output point is assigned a -dimensional bitmask that indicates the presence of its neighbors.These bitmasks are treated as numbers and sorted, and the order of computation for different outputs is adjusted accordingly.For instance, warp 0 calculates out 0−4 in Figure 5, but it calculates out 4,5,0,2 in Figure 6b instead.Thanks to sorting, computation overhead is reduced from 34 MACs to 26 MACs.In practical applications, sorting can reduce redundant computation by up to 3×, but it remains unclear whether this reduction translates into proportional speedups.
Motivation
As mentioned above, gather-GEMM-scatter is easy to implement but has poor performance.The more performant dataflows with overlapped computation and memory access cannot be implemented with the help of vendor libraries.Implementing the stateof-the-art implicit GEMM dataflow alone is a daunting task, as demonstrated by the SpConv v2 authors who had to painstakingly re-implement the entire CUTLASS framework from scratch with a custom Python-based template metaprogrammer [48].The resulting code base has over 40,000 lines of code which increases the risk of errors for developers.This also makes it challenging for the community to explore a wider design space for sparse point cloud convolution kernels, hindering further performance improvements.
Therefore, in TorchSparse++, we want to first demonstrate in Section 3 that highly efficient dataflows with overlapped computation and memory access can be generated with a relatively low engineering complexity (comparable to implementing gather-GEMMscatter).With the efficient kernel generator as a cornerstone, we further showcase in Section 4 that the design space for sparse point cloud convolution could be significantly extended, and there exists solutions that are up to 1.7× faster in inference, 1.3× faster in training compared with the incumbent state-of-the-art within this vast space.Tackling a fundamentally sparse workload, we also challenge traditional thinking on dense GPU kernel design.Our research reveals that typical first-order performance indicators, such as total computation, DRAM access, or even total runtime for all sparse convolution computation kernels, cannot accurately reflect the end-to-end runtime of sparse point cloud workloads.This is because sparse workloads require expensive mapping operations.On top of this observation, we will further demonstrate that end-to-end We introduce Sparse Kernel Generator, a code generator that integrates on-chip MMA subroutines from [4] directly at the source code level, unlocking the potential of using dense, fixed shape tensor compiler to generate programs for sparse, dynamic shape workloads.Gray: constant code, red: fixed metaprogramming template, blue: generated automatically by existing tensor compiler for each tile size.optimal dataflows could sometimes choose configurations with up to 6× computation overhead and 4× larger DRAM footprint.
SPARSE KERNEL GENERATOR
In this section, we introduce the Sparse Kernel Generator, which is a metaprogrammer that can efficiently generate sparse convolution GPU kernels.Existing metaprogrammers, such as TVM [4], are designed to generate optimized GPU computing schedules for dense and fixed-shape workloads.However, point cloud workloads are naturally sparse and have dynamic shapes.
Dense to Sparse Adaptation
Leveraging the information from Section 2, we establish the relationship between sparse convolution and dense GEMM kernels, as summarized in Table 1.We show that the fetch-on-demand and implicit GEMM dataflows with their overlapped memory access and computation can be seen as generalized GEMM kernels with sparse DRAM loading and write-back iterators.Take implicit GEMM as an example, we start from its equivalent-sized dense GEMM workload in Section 2.2.3.We notice that position (, ) in im2col-in is mapped to position (M ,/ in , % in ) in in .Here M out × |Δ ( ) | is the output-stationary representation of the maps defined in Section 2.2.1.For the th output point, if its th neighbor is non-empty, then M , is the index of this neighbor; otherwise M , = −1.For example, in Figure 5, M 2,3 = 1 since the fourth neighbor of Here we assume indices start from 0. By introducing this one level of indirect addressing, we can easily transition from a dense GEMM to a sparse implicit GEMM when loading data from DRAM to L1 shared SRAM.Since the DRAM→L1 memory access to is dense, one can reuse the CUDA code segment for 2 nd operand loading in dense GEMM.Based on this formulation, as in Figure 7, a sparse convolution kernel can then be decomposed into three parts.The gray code is always constant.Blue code depends on the tile sizes and can be automatically generated by the existing compilers [4].The red code cannot be generated by existing dense tensor compilers due to sparsity, but it can be generated from a fixed template that only takes in tiling sizes as input parameters.Consequently, we only need to manually implement the short red code template and a TensorIR [13] template that outputs the blue onchip MMA subroutine, which only takes hundreds of lines of code (orders of magnitude cheaper than the SpConv v2 code generator).
For simplicity, we did not visualize performance optimization techniques such as double buffering and pipeling in Figure 7.However, these techniques will not impact the design of our code generator.Similar analysis and code transformation can also be applied to the fetch-on-demand dataflow.
Static to Dynamic Adaptation
Thanks to the adaptation described in Section 3.1, we can now easily implement sparse convolutions in dataflows with overlapped computation and memory access.However, the simplicity of the code generator comes at the cost of a reduced design space.Our Sparse Kernel Generator only allows the tiling sizes to be tuned, while leaving most of the dimensions in the tensor program design space to be fixed (e.g. the order and split of the loop nests).Fortunately, we argue that such reduced design space does not compromise the performance.We present an idealized experiment in Figure 8.We manually traverse all possible tile sizes for different layers in MinkUNet [12] on SemanticKITTI [2] and apply compiletime constant folding to maximize performance.We benchmark the resulting sparse kernel with the lowest latency against cuBLAS, which runs an equivalent-sized GEMM problem due to the lack of sparsity support.It turns out that we can achieve > 100% cuBLAS utilization on average by only tuning tile sizes.Notably, for the last workload, the equivalent-sized dense GEMM problem can run at ≈90% device utilization on RTX 3090.If we ignore redundant computation (Figure 5), it is safe to assert that extending the design space beyond tile sizes will not significantly improve final performance on this workload.
Despite achieving encouraging results in the idealized experiment, it remains challenging to transfer the performance to real systems.Unlike dense workloads, each sparse point cloud sample has a different shape in terms of the number of points.Precompiling constant-folded kernels for all possible workloads, as is done by TVM and TensorRT in the dense domain, is impossible for us.Naively unfold the constants in fixed shape kernels and revert them back to workload shape parameters will degrade the performance by up to 1.7×.This totally undermines the good results achieved in Figure 8. Worse still, the first red instruction in Figure 7 now requires explicit boundary check in flexible shape kernels, which brings up to 1.35× performance overhead as well.
To this end, we present two simple yet effective strategies to address these two performance roadblocks.
We first pinpoint that the slow addressing of in is the reason why constant unfolding ruins the performance.Unlike in dense GEMM, accessing in requires two inefficient division and modulo operations with in as an operand, which are necessary just for addressing.This impacts the efficiency since in is stored in the RF and has an access latency no shorter than L1 on GPUs.Worse still, accesses to in are located in the innermost layer of the long loop (|Δ ()| × in , ranging from 1728 to 6912 in Figure 8).Fortunately, we notice that most of the addressing computation is irrelevant to the innermost loop variable ldA in Figure 7. Therefore, it is possible for us to lift the loop invariants out of the loop.For real tiling sizes with LD_A_THR=4 and 8, this at least reduces addressing cost by 4-8×.We further analyze the template and perform loop invariant hoisting wherever possible.Ablation studies in Section 6.2 shows that addressing simplification can fully close the up to 1.7× constant unfolding overhead.
Likewise, among all boundary checks in the dynamic shape kernel, the one for accessing map within the innermost ldA loop is the most time-consuming.Although loop invariant hoisting does not apply in this case, we can solve this issue by padding the first dimension of map to be a multiple of cta_M.With this simple modification, no boundary check on map access in Figure 7 is required since we can ensure that every access stays within bounds.With that reduced control flow overhead, we close the final 1.14-1.35×performance gap between fixed and dynamic shape kernels.
SPARSE AUTOTUNER
Based on the simple yet powerful Sparse Kernel Generator, we present Sparse Autotuner.It first significantly enlarges the design space of existing libraries (illustrated in Figure 9) and then applies group-based configuration tuning across this enlarged space.
Design Space Augmentation
Thanks to the simplicity of Sparse Kernel Generator, we can easily expand our design space.Since the generator can produce fetch-ondemand kernels, we can effortlessly incorporate this dataflow in our designs.Besides, for implicit GEMM, number of splits (Figure 10) is an important tunable dimension in the implicit GEMM dataflow that was previously overlooked.Similar to the SplitK technique [24] in dense GEMM kernel design, one could split the sequential loop in Figure 7 into parts.By doing so, each split (whose loop is now × shorter) can compute in parallel and write to a separate DRAM buffer.These partial sums are later reduced by a summation kernel to produce the final result.We also reorder the computation in each split following Figure 6, which involves argsorting individual bitmasks and reordering the map accordingly.For example, after reordering, the first row calculates part of out 0 , out 3 , out 3 , while the full feature of out 0 is calculated in the 1 st , 4 th , 6 th rows by two thread blocks collaboratively.As such, there are more common zero neighbors for each thread block and the redundant computation is further reduced from 26 in Figure 6 to 22 in Figure 10.When integrating support for arbitrary split implicit GEMM, we notice that it is beneficial to reorder the map in an offline manner for a similar reason in Section 3.2.
Group 1 Tuning (End-to-end latency) iGEMM (s=0) iGEMM (s=1) iGEMM (s=3) FoD Group 2 Tuning (End-to-end latency, group 1 fixed) Time Figure 12: Group-based autotuning: Layers using the same maps will be assigned to the same group.After group partition, we exhaustively traverse all choices in our design space in a group-by-group manner and selects the best group configuration that leads to the lowest end-to-end latency.Reusing tuner in Figure 12 (a) Binding fwd-dgrad (for low-end devices) Reusing tuner in Figure 12 (b) Binding dgrad-wgrad (for high-end devices) Conventionally, dense GPU kernel design is often guided by firstorder performance approximation (e.g.computation and DRAM footprint).Following these proxies, it seems to be reasonable to eliminate split = 0 (unsorted implicit GEMM in Figure 5) due to its large redundant computation.Split > 2 should also be eliminated since it incurs much larger DRAM write back traffic.In fact, such prematured optimizations lead to the restricted design space in SpConv v2.However, we argue in Figure 11 that it is beneficial to have a larger design space that includes many first-order suboptimal solutions.On the one hand, the redundant computation in both segmentation and detection workloads keeps dropping until = 5.The difference in computation overhead between = 2 and = 4 can still be up to 1.2× for detection and 1.3× for segmentation.Thus, for devices with limited parallelism, it is beneficial to increase the number of splits despite increased DRAM traffic.On the other hand, when running detection workloads on devices with high parallelism, a 2.4-2.9×computation overhead for the unsorted dataflow in Figure 5 is completely acceptable.We will demonstrate in Table 3 and Table 4 that kernels for detection will not run faster despite having ∼ 2× lower computation overhead on RTX 3090, which has an ample 71 TFLOPS FP16 peak throughput.
Group-Based Configuration Tuning
To this end, we designed a sparse and dynamic shape kernel generator with minimal help from dense and fixed shape tensor compilers.By doing so, we obtain high-performance sparse convolution kernels with different dataflows (e.g.fetch-on-demand and implicit GEMM) and augments the design space of implicit GEMM itself by introducing arbitrary number of mask splits.However, no dataflow is perfect for all workloads.As in Section 2, fetch-on-demand has zero redundant computation but suffers from large DRAM scattering traffic, while implicit GEMM has the exact opposite property.Similarly, there is no single set of parameters that works for each dataflow.For example, the number of splits in implicit GEMM reflects the tradeoff between redundant computation and control flow overhead (e.g.sorting individual bitmasks and reordering the maps).Therefore, the enlarged design space necessitates the design of an autotuning system that can automatically determine the optimal dataflow and dataflow-specific parameters for different workloads.
To determine the optimal dataflow for different layers, we divide all layers into different groups (illustrated in Figure 12).All layers within each group use the same input-output mappings (maps) and are forced to execute the same dataflow.This is because different dataflows require different map structures.Implementations such as gather-GEMM-scatter and fetch-on-demand require the maps to be stored in a weight-stationary order, represented as M = {( , )| = + , ∈ out }, which makes it difficult to infer all the neighbors of an output point (required by implicit GEMM).On the other hand, the implicit GEMM implementation, stores the maps in an output-stationary order, represented as which makes it difficult to infer all the inputs that use the same weight (required by the other two dataflows).It would incur significant overhead (∼ latency of up to 3-4 sparse convolution layers within each group!) if we generate maps for all dataflows but only use one of them at runtime.Therefore, allowing intra-group heterogeneous dataflow selection is not desired.After group partition, we apply a group-level exhaustive search on a random subset of the target workload (e.g. 100 scenes on the Waymo dataset).Since the execution time of each group is independent of the others, we tune the dataflow parameters in a greedy manner.We iterate over all possible choices for the th group based on the optimally-tuned configurations for the 1 st to ( − 1) th groups, using default parameters for all subsequent groups.This approach effectively reduces the tuner complexity from exponential to linear * and allows us to complete tuning within 2 minutes for most workloads.Considering that the tuned schedule could be reused for millions of scenes in real-world ADAS applications during inference, the cost is clearly justifiable.
We further extend Sparse Autotuner to support training workloads.The most straightforward design assumes that the back propagation kernels (i.e.dgrad for feature map gradient calculation and wgrad for weight gradient calculation) share the same dataflow parameters as the forward kernel.However, as analyzed in Section 6.1, such design incurs up to 10% performance regression in end-to-end training.Naively decoupling the tuning process for training workloads leads to an unacceptable ( 3 ) tuning complexity, with being the size of our design space.To address this complexity issue, we partially bind dataflow parameters for forward, dgrad, and wgrad kernels.We propose two binding schemes: the workload-pattern oriented scheme binds the dataflow parameters for forward and dgrad kernels while allowing wgrad kernels to be tuned separately, reducing the tuning complexity to ( 2 ) and minimizing the total latency for all sparse convolution kernels.We also propose the sparse-mapping oriented scheme, which binds dgrad and wgrad kernels together since they share the same maps, minimizing the overhead for map computation.Similar to our observations in inference kernel autotuning, the high-parallelism devices (e.g.A100) is far less sensitive to redundant computation than to mapping overhead, while the low-parallelism devices (e.g.2080 Ti) behaves in the exact opposite way.This explains our design choice to use scheme 1 for low-end devices and scheme 2 for more powerful GPUs.As a final remark, we further notice in Figure 13 that the tuning time could be further reduced from ( 2 ) to () if we reuse the group-based tuner in Figure 12 twice and skip different parts of the kernels with dummy initializations during tuning.
Results
Inference.We compare our results with the baseline designs including MinkowskiEngine, SpConv 1.2.1, TorchSparse and Sp-Conv 2.3.5 in Figure 14.All evaluations are done in unit batch size.TorchSparse++ consistently outperforms all baseline systems on GPUs with all architectures under three numerical precisions by a large margin.On cloud Ampere GPUs (A100 and 3090), it achieves 2.9-3.7×,3.2-3.3×,2.0-2.2× and 1.4-1.7×measured endto-end speedup over the state-of-the-art MinkowskiEngine, SpConv 1.2.1, TorchSparse and SpConv 2.3.5, respectively.We also compare TorchSparse++ with SpConv 2.3.5 on NVIDIA Jetson Orin, an edge GPU platform widely deployed on real-world autonomous vehicles.Our TorchSparse++ is 1.25× faster than SpConv 2.3.5 on average, while achieving 1.3-1.4×consistent speedup across all detection workloads that are most time-critical in real ADAS applications.In addition, TorchSparse++ is competitive on legacy GPU architectures (Turing and Pascal), achieving at least 1.4×,We scale up the systolic array of PointAcc [28] to match the peak performance of RTX 3090 and compare our TorchSparse++ against the ASIC accelerator.
MinkowskiEngine.Notably, recent advances in point cloud transformers [32,39,43] often claim superior accuracy-latency tradeoffs over sparse convolutional backbones implemented with the Sp-Conv v2 backend.With the much faster TorchSparse++ backend, assuming that the 2D part is deployed with TensorRT, the 3-frame CenterPoint model on Waymo is 1.5× faster than FlatFormer [32] with higher accuracy on Orin.
Training.We also compare the training performance of our TorchSparse++ and existing systems on A100 and 2080 GPUs in Figure 15.We run the forward and backward pass of all workloads with batch sizes of 2 in mixed precision training (i.e.all gradients are calculated in FP16 precision) except for MinkowskiEngine that does not support FP16.We make sure that all workloads evaluated in Figure 15 can reach the same accuracy using the TorchSparse++ backend compared with TorchSparse (for segmentation workloads) and SpConv 2.3.5 (for detection workloads) with FP32 precision.Given the fact that A100 FP16 tensor core arithmetics has 16× higher throughput compared with FP32 (non-tensor core) computation (312 TFLOPS vs. 19.5 TFLOPS), we do not perform FP32 evaluation.As a result, TorchSparse++ is 4.6-4.8×,2.5-2.6× and 1.2-1.3×faster than MinkowskiEngine, TorchSparse and SpConv 2.3.5 on both Ampere and Turing GPUs.TorchSparse++ paves the way for rapid model iteration for real-world ADAS applications.
Comparison against Accelerators.We further compare the performance of TorchSparse++ on RTX 3090 against a scaled-up version of PointAcc [28] using the SemanticKITTI-MinkUNet workload.The systolic array in PointAcc is enlarged from 64×64 to 128×128 to roughly match the number of MACs (16384 vs. 20992) on RTX 3090.The PointAcc memory bandwidth is scaled up accordingly.Since the accelerator adopts IC-OC parallelism, we assume that the scaled PointAcc-L achieves linear speedup if the executed layer has large enough input and output channels.We also scale the measured TorchSparse++ latency by 1.7 (clock frequency difference) × 1.3 (peak MACs difference) = 2.2× for fair comparisons.As a result, TorchSparse++ achieves 56% of ASIC speed on a general-purpose hardware platform with similar computation budget.Notice that we also attempt to make a direct comparison with Mesorasi [15], which codesigns the point cloud convolution algorithm with the hardware architecture.However, its delayed aggregation scheme could only work for convolution operators with shared weights for all neighbors.The main workload accelerated in this paper, sparse convolution, is more complicated because it has different weights Orin SK-M (1x) SK-M (0.5x) NS-M (3f) NS-M (1f) NS-C (10f) WM-C (3f) WM-C (1f) Geomean for different neighbors (see Figure 2).Therefore, such comparison might be hard to achieve.
Results on Graph Workloads. .We also implement R-GCN [36] with TorchSparse++ and benchmark it on five representative heterogeneous graph datasets against state-of-the-art graph deep learning systems DGL [44], PyG [16] and the Graphiler [46]
ANALYSIS
In this section, we present in-depth analysis on the design choices of our Sparse AutoTuner and Sparse Kernel Generator and ablate the source of performance gains in Section 5.
Design Space of Sparse AutoTuner
As discussed in Section 3, the design space of TorchSparse++ is a superset of SpConv v2.We have added several new features to this space, including support for unsorted implicit GEMM, implicit Table 4: SparseConv Kernel Latency: Unsorted implicit GEMM kernels could be slower than their mask split counterpart, which is the exact opposite of Table 3 results.GEMM with an arbitrary number of mask splits (> 2), and the fetchon-demand dataflow.The flexibility of TorchSparse++ also allows us to explore different dataflow parameter bindings for forward, dgrad, and wgrad computation.As such, we challenge conventional designs that shares the same dataflow parameters across all kernels.
In the following two subsections, we will evaluate the effectiveness of all these new design choices in TorchSparse++.
Effectiveness of unsorted implicit GEMM.. We first demonstrate the efficacy of unsorted implicit GEMM dataflow (Figure 5) against the sorted implicit GEMM dataflow in SpConv v2.As in Table 3, the unsorted dataflow is consistently faster on both server and edge GPUs.We further present runtime comparison of all sparse convolution kernels between unsorted and sorted dataflows in Table 4. Interestingly, if we only consider the runtime of convolution kernels, the sorted dataflow is indeed faster.However, the latency difference between Table 3 and Table 4 reveals the fact that sparsityincurred mapping overhead (e.g.obtaining the bitmask, sorting the bitmask, performing bitmask reduction and reordering the maps) in the sorted dataflow is non-negligible.
Moreover, Figure 17 shows the layerwise comparison of these two versions of TorchSparse++, in which the gain from reduction in computation is overweighed by the overhead of sorting itself on Waymo object detection.However, sorting does show an advantage on a larger segmentation model (MinkUNet) on the SemanticKITTI benchmark.Our observation challenges the design principle of Sp-Conv v2, which is to use amount of computation as a first-order approximation for end-to-end performance.It also nullifies the assumption that faster computation kernel is equivalent to better end-to-end performance.Table 5: We evaluated the performance of a SemanticKITTI-MinkUNet workload on an RTX 3090 and found that expanding the design space of implicit GEMM by increasing the number of splits led to up to 1.4× improvement compared to the default setting (split=1) in SpConv v2.Effectiveness of larger split mask design space.We have shown the effectiveness of unsorted implicit GEMM.Additionally, we found that it's also beneficial to have a larger number of splits for segmentation workloads, as demonstrated in Table 5.The parallelism of an implicit GEMM kernel will be increased by × with splits.Because segmentation workloads usually have smaller number of input points, they are more prone to suffer from device under-utilization and increased parallelism will be beneficial.Similarly, the overhead for mapping and partial sum reduction kernels is smaller in segmentation workloads.Significantly reduced computation overhead (Figure 11) further supports the preference for a larger number of splits in these scenarios.
Effectiveness of adding fetch-on-demand.We then choose 1-frame MinkUNet on nuScenes running on RTX 2080 Ti and Orin as a benchmark to demonstrate the efficacy of fetch-on-demand dataflow.As in Figure 18, individually-tuned implicit GEMM and fetch-on-demand dataflows both achieve inferior performance compared with the hybrid dataflow TorchSparse++.We further present the layerwise latency breakdown of the best tuned implicit GEMM and fetch-on-demand configurations in Figure 18b, where we amortize the mapping time to all layers within each layer group (defined in Section 4).The end-to-end performance of fetch-on-demand is notably better than implicit GEMM in decoder layers (i.e.layer index > 18) but gets outperformed in downsampling layers, where maps M could not be reused.This is because implicit GEMM has lower mapping cost while fetch-on-demand computation kernels run faster for the given workload.
Effectiveness of tuner design for training.We finally demonstrate that decoupling dataflow parameters for forward, dgrad and wgrad SK-M (1x) SK-M (0.5x) NS-M (3f) NS-M (1f) NS-C (10f) WM-C (3f) WM-C (1f) Geomean Figure 20: Naively converting fixed shape dense tensor programs to flexible shape sparse convolution kernels will incur 1.5-1.7×runtime overhead due to repetitive pointer calculation.We bridge such huge performance gap via loop invariance hoisting and show that constant folding is unnecessary for high-performance sparse kernels.
kernels could improve the training performance by up to 10% in Figure 22.On both A100 and 2080 Ti, binding parameters for two of the kernels is better than using the same parameters for all three kernels.On A100, binding dgrad and wgrad is better.This is because such strategy could minimize mapping overhead and there is a drastic performance difference (16×) between tensor cores (which runs computation) and CUDA cores (which runs mapping) on A100.On 2080 Ti, binding forward and dgrad is better, since the two kernels share the same workload pattern.Given much smaller performance gap between tensor and CUDA cores on 2080 Ti (3×), the additional mapping overhead for decoupled wgrad and dgrad is acceptable.
Sparse Kernel Generator
In this section, we present an analysis of the effectiveness of the design choices outlined in Section 3. Our experiments were conducted on 3090 GPUs using FP32 precision for offline reordering and FP16 precision for all other experiments.Our results demonstrate that simplifying control flows and addressing is critical for achieving optimal performance in sparse kernels.Additionally, we found that the conventional wisdom of fusing GPU kernels as much as possible may not always be applicable in the context of sparse computing.
Effectiveness of offline reordering.We present the effectiveness of offline reordering in Figure 19.As described in Section 4, our approach involves reordering computations based on the values of bitmasks in the implicit GEMM dataflow with mask splitting.While conventional wisdom in GPU kernel design suggests fusing kernels as much as possible (including reordering in the sparse convolution kernel), our experiments demonstrate that this can lead to a 4-12% reduction in end-to-end performance compared to offline reordering.Specifically, when considering the wgrad kernels, it is necessary to iterate over the out dimension in the large and innermost loop.Online reordering introduces an additional level of indirect addressing to the memory access in the innermost loop.This will disrupt the continuous access pattern and results in a significant slowdown for wgrad.
Effectiveness of control flow simplification.We use MinkUNet on SemanticKITTI as an example to illustrate the importance of simplifying addressing and control flows.In Figure 20, we evaluate the benefits of loop invariance hoisting.The results show that a naively converted template can be very inefficient.It is up to 1.7× slower than the original fixed shape CUDA kernel.However, with the help of loop invariance hoisting in which we move all the common pointer offsets to the outmost possible loop, we can almost totally eliminate the pointer arithmetic overheads.After applying this technique, our templated CUDA kernel can even run slightly faster than the original fixed shape kernels in 5 of 7 sample workloads.Figure 21 shows the benefits of reducing control flow instructions by padding the map in Figure 7.The instructions performing boundary checking can make the kernel up to 1.3× slower.Whereas, after eliminating these control flow instructions, this problem can be well solved with the help of padding.
Effectiveness of adaptive tiling.We experiment with two sets of tiling sizes in TorchSparse++ dependent upon the MACs of the workload.Adaptive tiling provides up to 1.6× speedup to TorchSparse++, compared with fixed tiling version (either always using the small tile sizes or always using the larger tile sizes).
Discussions
Summary on performance gain.In Figure 23, we present a summary of the performance improvement achieved through the use of our Sparse Kernel Generator and the enlarged design space.Our SK-M (1x) SK-M (0.5x) NS-M (3f) NS-M (1f) NS-C (10f) WM-C (3f) WM-C (1f) Geomean generator produces high-performance sparse convolution kernels that are 1.1 − 1.2× faster than SpConv 2.3.5, even when using the same dataflow parameters.Remarkably, our code generator comprises only 5% of the lines of code of SpConv 2.3.5'smetaprogrammer, which significantly reduces system complexity and enhances programmer productivity.For the enlarged design space, more mask splits are very helpful for segmentation workloads and FP32 precision, while unsorted implicit GEMM is helpful for detection workloads and FP16 precision.The efficacy of fetch-on-demand is mainly demonstrated in smaller segmentation workloads (e.g.NS-M).These results reinforce that there is no one-size-fits-all strategy for sparse kernel design, and that relying on first-order approximations for end-to-end performance is unreliable.
Insights for microarchitectural improvements.Our TorchSparse++ also provides new insights for future microachitecture design.Our findings indicate that when memory bandwidth is halved on RTX 3090, the latency of the system increases by 1.2×.In contrast, reducing peak computation throughput by a factor of 2 results in a more substantial slowdown of 1.4×.Therefore, scaling computation units instead of off-chip memory bandwidth can provide more effective improvements.Moreover, it is apparent from Table 3 and Table 4 that mapping operations account for up to 50% of the total runtime.Leveraging the efficient ASIC design [28] for these operators could significantly enhance the performance of GPUs when executing sparse computation workloads.
Future applications.TorchSparse++ platform presents novel opportunities for enhancing machine learning workloads beyond point clouds and graphs.For instance, in image segmentation [26] and video recognition [33], not all pixels hold equal significance.Hence, the selective computation on a sparse subset of pixels using TorchSparse++ can potentially significantly enhance efficiency.Furthermore, masked autoencoders (MAEs) [20] exhibit inherent sparsity in input patterns during training.While existing approaches already attempt to exploit this sparsity using sparse convolution [22,42], we posit that TorchSparse++ has the potential to unlock even greater speedups for such workloads.
RELATED WORK
Compiler-Based Tensor Program Optimization.Our system benefits from recent advances in tensor program compilation.The pioneering research TVM [4] provides graph-level and operatorlevel abstractions for deep learning workloads based on the essence of Halide [35].Based on TVM, AutoTVM [5] automatically discovers the optimal mapping of a fixed-shape tensor program onto the target hardware.Nimble [37] and DietCode [57] are compilers stemmed from TVM that can generate tensor programs with dynamic-shape workloads, but they are still tailored for dense workloads (e.g.transformers with variable length input sequences) and cannot deal with the sparsity in point clouds.More recently, Ten-sorIR [13] proposed a new IR for tensor programs and allows easier tensorization of accelerator primitives.SparseTIR [52] further extended TensorIR to support sparse workloads.Bolt [47] combines the advantages of fully-automatically generated kernels [4] with hand-written subroutines [24] through graph matching.
Point Cloud Accelerators.Deep learning on point clouds has also generated considerable interest in domain-specific accelerator design.Zhu et al. [59] proposed a sparsewise dataflow that skips cycles for zero-weight computations and saves energy through gating.Mesorasi [15] co-designed its architecture with the delayed aggregation algorithm to reduce redundant computation in point cloud NNs.More recently, Point-X [55] exploited spatial locality in point clouds through clustering, mapping point clouds into distributed computation tiles.It maximized parallelism and minimized data movement.Additionally, PointAcc [28] mapped all mapping operators in point cloud NNs to a versatile bitonic sorter, making it the first specialized accelerator to support 3D sparse convolution computation.Crescent [14] tamed irregularities in point clouds through approximate neighbor search and selectively elided bank conflicts, while Ying et al. [54] pushed point cloud compression to edge devices through intra-and inter-frame compression.
CONCLUSION
We introduce TorchSparse++, a high-performance GPU sparse computation library designed for point cloud and graph deep learning.TorchSparse++ features a highly optimized Sparse Kernel Generator with less than one-tenth of the engineering cost compared with the state-of-the-art system.It further enables us to build an inputaware Sparse Autotuner that selects the best configuration for each layer.TorchSparse++ achieves 1.7-3.3×inference speedup and 1.2-3.7×faster training compared to state-of-the-art MinkowskiEngine, SpConv v1/v2, and TorchSparse on seven real-world perception workloads.TorchSparse++ also achieves 2.6-7.6×speedup over DGL, PyG and Graphiler when running R-GCNs.We hope that TorchSparse++ will facilitate future system and microarchitectural research in sparse computation on 3D data and graphs.
for 50 classes from which 9 are seion.
Figure 2 :
Figure 2: Single scan (top) and multiple superimposed scans with labels (bottom).Also shown is a moving car in the center of the image resulting in a trace of points.
Figure 3 :
Figure 3: Waterfall diagram for different dataflows for sparse convolution on GPU: weight-stationary dataflows (a, b) are easier to implement and maintain but they do not overlap memory access with computation.Both fetch-on-demand and implicit GEMM dataflows require custom MMA routines but are able to hide the memory access time with pipelining.
Figure 4 :
Figure 4: Illustration of the gather-GEMM-scatter dataflow for Figure 2 workload: we first gather input features according to M for each weight , then perform GEMM or batched GEMM, and finally scatter the results back to output locations given in M .
7 Figure 5 :
Figure5: Illustration of the unsorted implicit GEMM dataflow for Figure2workload: each gray grid corresponds to a indimensional input feature and blue grids correspond to redundant computation.The input feature matrix is not stored in DRAM.We assume that each thread block contains 4 threads (4 rows).
Figure 6 :
Figure 6: SpConv v2 sorts the input bitmasks and reorders the computation accordingly.White grids are skipped zero computation.Consequently, redundant computation is reduced from 34 MACs (Figure 5) to 26 for the Figure 2 example.
Figure 7 :
Figure7: We introduce Sparse Kernel Generator, a code generator that integrates on-chip MMA subroutines from[4] directly at the source code level, unlocking the potential of using dense, fixed shape tensor compiler to generate programs for sparse, dynamic shape workloads.Gray: constant code, red: fixed metaprogramming template, blue: generated automatically by existing tensor compiler for each tile size.
Figure 8 :
Figure 8: For sparse convolution workloads (MinkUNet on SemanticKITTI), it is possible for our template to achieve or even exceed cuBLAS utilization for the equivalent-sized GEMM problem by tuning only tiling size parameters.
Figure 10 :
Figure 10: We extend the implicit GEMM design space by introducing arbitrary number of mask splits.Compared with Figure 6b (1 split), splitting the mask into three parts further reduces redundant computation and increases parallelism.
Figure 11 :
Figure 11: A large design space on number of splits in implicit GEMM is beneficial: (a) redundant computation in segmentation workloads continues to drop quickly until splits = 5; (b) redundant computation in detection workloads at split = 0 (unsorted) is acceptable on high-parallelism devices.
Figure 13 :
Figure 13: Parameter binding in training tuner: we propose to partially decouple the dataflow parameters for forward, dgrad and wgrad kernels in training, which leads to up to 10% improvement in end-to-end training time.
Figure 17 :
Figure 17: Sorting is able to reduce the computation time, but its overhead outweighs the benefit on detection workloads.
Figure 18 :
Figure 18: Fetch-on-demand and implicit GEMM dataflows are complementary to each other on FP32 segmentation workloads.A hybrid dataflow is up to 1.06× faster than the best single dataflow.
Figure 22 :
Figure 22: Different from dense kernels, sparse forward, dgrad and wgrad kernels have different preferences for dataflow parameters.Binding hyperparameters for all kernels could hurt the training performance by up to 10%.
Figure 23 :
Figure 23: Summary of performance gain from different techniques and the enlarged design space in TorchSparse++.
Table 6 .
Ablation studies of the static auto labeling model.Metrics are the box accuracy at 3D IoU=0.7 and IoU=0.8 for vehicles in the Waymo Open Dataset val set.
Points and box sequence joint 85.67 / 65.77 Table
Table 1 :
Different sparse convolution dataflows in Section 2 can be mapped onto GPUs as dense GEMM with sparse global memory iterators.
Table 3 :
End-to-end Latency: Unsorted implicit GEMM is up to 1.2× faster with up to 1.7× redundant computation. | 13,703.2 | 2023-10-25T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Impact of SARS-CoV-2 infection on the recovery of peripheral blood mononuclear cells by density gradient
SARS-CoV-2 virus infection is responsible for coronavirus disease (COVID-19), which is characterised by a hyperinflammatory response that plays a major role in determining the respiratory and immune-mediated complications of this condition. While isolating peripheral blood mononuclear cells (PBMCs) from whole blood of COVID-19 patients by density gradient centrifugation, we noticed some changes in the floating properties and in the sedimentation of the cells on density medium. Investigating this further, we found that in early phase COVID-19 patients, characterised by reduced circulating lymphocytes and monocytes, the PBMC fraction contained surprisingly high levels of neutrophils. Furthermore, the neutrophil population exhibited alterations in the cell size and in the internal complexity, consistent with the presence of low density neutrophils (LDNs) and immature forms, which may explain the shift seen in the floating abilities and that may be predictive of the severity of the disease. The percentage of this subset of neutrophils found in the PBMC band was rather spread (35.4 ± 27.2%, with a median 28.8% and IQR 11.6–56.1, Welch’s t-test early phase COVID-19 versus blood donor healthy controls P < 0.0001). Results confirm the presence of an increased number of LDNs in patients with early stage COVID-19, which correlates with disease severity and may be recovered by centrifugation on a density gradient together with PBMCs.
Results
Density gradient qualitative evaluation. Patients were stratified according to their clinical status and the specimens were grouped in accordance with the time of sampling i.e. within 24 h from admission (early phase COVID-19) or 14 days after discharge. Healthy control samples were provided by blood donors at the time of the donation. As expected, the typical mononuclear population from control blood donors was recovered at the interface between the density medium and the sample (Fig. 1A). By contrast, the vast majority of PBMCs derived from patients with an early phase of COVID-19 floated quite differently. Some of these samples showed a thicker and fluffier band of mononuclear cells, mostly lower than usual, sometimes stickier, often decorated underneath by a dispersed fine crown of red blood cells or with erythrocyte aggregates (Fig. 1B). A number of times the density medium also appeared rather cloudy or less transparent than usual. The buoyant density was restored in PBMCs obtained from blood samples collected from patients at 14 d post-discharge (recovered COVID-19 cohort; Fig. 1C), although a slightly thicker band was often observed. Comparison between one sample at day 14 post-discharge and four samples collected at the admission, processed simultaneously and under the same conditions, are shown in Fig. 1D. For a better appreciation, magnification of a 14 d post-discharge sample (recovered COVID-19 cohort) and a sample taken within 24 h from admission (early phase COVID-19 cohort) are displayed in Fig. 1E, while Fig. 1F is a closer view of two samples taken at the time of admission into hospital. Occasionally two bands, one right at the interface between the sample and the medium, and a second within the density medium itself, were also observed (Fig. 1, panels G, H and I).
Population dynamics study. An automated haematology cell analyser was used to assess the cell populations (neutrophils, lymphocytes, monocytes) present in the band isolated on the density gradient. To have an estimate of the variation in the cell composition in the collected fraction we determined also the cell ratios.
The neutrophils in the samples of early phase of COVID-19 (taken within 24 h from hospitalisation) heavily outnumbered in percentage those of the convalescent patients ( www.nature.com/scientificreports/ controls ( Fig. 2A). In patients from the 24 h cohort, the amount of neutrophils found in the PBMC band was of 35.4 ± 27.2% (mean ± SD; t-test versus healthy controls P < 0.0001), with a median value of 28.8% (IQR 11.6-56.1), thus indicating that the number of neutrophils varied extensively among the different COVID-19 patients, probably depending from the severity of the disease at the time of hospitalisation. The neutrophil distribution data was less spread out in the cohort of patients who had the samples taken 14 days after being released from hospital. The mean value in the discharged patients group was 7.4 ± 9.2% (t-test versus blood donor controls P = 0.0469; median of 3.8%; IQR 2.5-7.9) while a range much narrower was displayed by the healthy controls i.e. 5.4 ± 5.2% (with a median of 3.3%; IQR 2.3-6.8).
Conversely, the percentage of lymphocytes showed an opposite trend across the different cohorts as compared to what mentioned above for the neutrophil populations (Fig. 2B). Indeed, at admission, the average percentage of the lymphocytes displayed a mean value of 47.3 ± 2.4% (t-test vs healthy controls P < 0.0001; median 46.9%; IQR 28.2-67.4), while the samples from post-discharged patients showed an average value of 73.0 ± 10.8% (t-test vs blood donor controls P = 0.8120; median 74.7%; IQR 70.1-80.1) and the mean value for the lymphocytes from healthy blood controls was 72.6 ± 8.3% (median 74.3%; IQR 68.0-78.2). Thus, the percentage of lymphocytes, collected post-gradient, returned to normal levels at 14 d post-discharge as suggested by the comparison with the values found for the blood donors.
Although still statistically meaningful, the differences in the monocyte population among the three groups were less remarkable (Fig. 2C) To determine the efficiency of the cell enrichment in the PBMC fraction from the samples of the three different cohorts, we measured the total cell counts that somehow provide the yield of the separation method. The total number of cells recovered from early phase COVID-19 patients was 22.6*10 6 ± 21.5 (t-test vs blood donor controls P = 0.2935; median 15.4; IQR 8.8-29.9), while 24.9*10 6 ± 15.5 cells (t-test vs healthy controls P = 0.7595: median 22.4; IQR 15.8-32.8) were collected in samples from 14 d post-discharge patients and 25.5*10 6 ± 10.9 cells (median 24.2; IQR 18.4-32.2) were retrieved from the healthy control samples (Fig. 2D).
The enrichment in terms of total number of cells recovered was not too dissimilar in the three groups and an optimal yield of the mononuclear cells should not account for more than 5% of granulocytes. To assess the efficiency of the recovery of the PBMCs after the isolation and to evaluate the distribution pattern of the prevailing cell types post-separation through the density gradient, we determined the neutrophil-to-lymphocyte ratio (NLR-PBMC, Fig. 2E) and lymphocyte-to-monocyte ratio (LMR-PBMC, Fig. 2F) in the samples of the three cohorts. Although the three group of patients were statistically different (Fig. 2E), the cells collected from the early phase COVID-19 group were highly unbalanced towards neutrophils. The NLR-PBMC of 2.1 ± 4.2 (mean ± SD; early phase COVID-19 versus blood donors t-test P < 0.0001) demonstrated that, in the early phase of the disease, the PBMC fraction was heavily contaminated by neutrophils, while the ratios of the other two groups were much more close to each other. The cells collected from the 14 d post-discharge cohort displayed a mean of 0.13 ± 0.21 (14 d discharged patients versus blood donors t-test P = 0.0165) whereas the cell ratio from the blood donor group had values ranging from 0.08 ± 0.08. The LMR-PBMC ( Fig. 2F) of the control cohort showed a mean of 3.7 ± 1.3, while the mean ratio of cells collected from COVID-19 patients at 24 h were 3.5 ± 2.7 (Welch's t-test vs blood donors P = 0.6203) and those from patients 14 days after dismissal were 4.3 ± 2.2 (t-test vs healthy controls P = 0.0132). Thus indicating that the distribution pattern of mononuclear cells in the density band was similar and the collected fractions contained similar amount of lymphocytes and monocytes. However it is worth to mention that in the present context, the above NLR-PBMC and LMR-PBMC are calculated on recovered cells and provide only a parameter of the cell distribution within the enriched fractions following the separation medium centrifugation.
Flow cytometry analysis. Whole blood leftovers were used to analyse the leukocyte population of COVID-19 patients (N = 13) and healthy blood donors (N = 10) by flow cytometry. The cell populations were assessed after red blood cell lysis. Representative forward scatter (FSC) versus side scatter (SSC) dot plots from the three cohorts are depicted in Fig. 3A, B and C. In particular, FSC parameter is proportional to the cells size, while SSC reflects the internal cell complexity (i.e. nuclear morphology and granularity). The leukocyte morphological properties from patient samples taken 24 h from hospitalisation ( Fig. 3B) clearly exhibited a shift down in the distribution of the granulocyte population, probably indicating a lack/loss of granularity, when compared to those from both healthy controls (Fig. 3A) and patients at discharge (Fig. 3C). In addition, a diminished number of lymphocytes and an alteration of the monocyte population were observed in the early phase of infection. The post-dismissal specimen (Fig. 3C) showed that the cell distribution pattern returned almost back to normal, though with still a slight increased number of events in the granulocyte region.
The box-and-whisker chart (Fig. 3D) of the median channel intensity (MCI) in the forward scatter (FSC) indicated that there are some significant differences in the cell sizes, indeed the MCI was lower in the 24 h cohort (159,221 MCI, Mann-Whitney U test vs blood donors P = 0.0070) and tended to raise again in the 14 d discharged patient group (165,465 MCI, Mann-Whitney U test vs healthy blood donor controls P = 0.0236) to levels close to normality (healthy controls median = 168,870 MCI), although not completely back to normal. The most striking difference was observed instead in the granularity (Fig. 3E), where the internal macromolecular complexity of the cells had a massive impact despite the fact that the side scatter (SSC) is measured at a ninety degree angle on the laser light and its signal is weaker than the FSC. The cells from early phase of COVID-19 patients (within 24 h) showed a marked decrease in the SSC compared with the blood donors (U test vs blood healthy controls www.nature.com/scientificreports/ . The neutrophil-to-lymphocyte ratio (NLR-PBMC) of cells recovered after gradient separation is shown in panel (E), while lymphocyte-to-monocyte ratio (LMR-PBMC) of the same cell density fraction is displayed in panel (F). Each dot represents a patient sample and the values are expressed as mean ± standard deviation (SD). Statistically significant differences were expressed as *for P ≤ 0.05, **for P ≤ 0.01, ***for P ≤ 0.001 and ****for P ≤ 0.0001, while n.s. signifies not statistically significant for P > 0.05.
Morphological inspection of COVID-19 samples. Blood smears and PBMCs from density gradients
spotted onto glass slides were fixed and stained with May-Grünwald Giemsa for a simple cytology evaluation. In Fig. 4, (panels A-D) we showed the blood smears from four different COVID-19 patients within 24 h from hospitalisation. The samples confirmed the presence of excessive number of mix-shaped neutrophils in the peripheral blood. Some of these neutrophils had either a ribbon-shaped or a horseshoe-shaped nucleus suggesting that these are immature forms. Neutrophils are still present in the blood smears (Fig. 4E, F and G), from specimen taken after 14 days post-discharge, though mostly in the mature form. The panels H and I in Fig. 4 are examples of the mononuclear cell enrichment prior to the last centrifugation to remove cell debris and blood residues. Fig. 5A).
The whole blood monocytes showed fewer fluctuations (Fig. 5C) than the enriched fractions in Fig. 3C. The populations were more compact in all three cohorts and displayed mean percentage values of 7.3 ± 3.3 (Welch's Each dot represents a single patient and the values are expressed as mean ± standard deviation (SD). Statistically significant differences are shown as *for P ≤ 0.05, **for P ≤ 0.01, ***for P ≤ 0.001 and ****for P ≤ 0.0001, not statistically significant differences are indicated as n.s. www.nature.com/scientificreports/ t-test vs healthy controls P = 0.0055; median 6.8%; IQR 4.9-9.5) at 24 h from admission, with a slight increase in the discharged patients to 8.7 ± 1.9% (t-test vs blood donors P = 0.5707; median 8.7%; IQR 7.3-10.2) and to 8.5 ± 1.8% (median 8.3%; IQR 7.0-9.6) in the healthy blood donors. White blood cell (WBC) counts measure the number of leukocytes in the blood and showed minor differences (Fig. 5D). As expected the percentage of the circulating white blood cells was slightly higher in the early phase of the infection 7.1 ± 3.2% (t-test vs blood donors P = 0.0077; median 6.7%; IQR 4.8-8.8) and in the recovered patients (14 d post-dismissal) 7.0 ± 2.7 (t-test vs healthy controls P = 0.0010; median 6.4%; IQR 5.5-8.0) while dropped to 6.0 ± 1.5% (median 5.7%; IQR 5.0-6.8) in the healthy blood donor controls.
Discussion
Monocytes usually float to a density between 1.079-1.089 g/ml followed by the lymphocytes, therefore the band we collected should have accounted mostly for mono/lymphocyte populations. The cloudy and sticky area might be due to activated monocytes, but also to a massive mobilisation of activated macrophages and mature dendritic cells, which have entered the circulation to reach peripheral tissues. What is the cause that led the erythrocytes to decorate the lymphocytes band is unclear. However, erythrocyte rosetting (E-rosetting) is an immunological reaction occurring between receptors or antibodies on the lymphocyte plasma membrane and epitopes on the erythrocytes, where red blood cells arrange themselves in petal-like flower. Immature B-lymphocytes may be also present in the lymphocyte pool and can express membrane-bound IgD able to capture foreign and self-antigens or to activate other immune effectors. Hemadsorption and hemagglutination are abilities not new to bacteria and parasites (e.g. Treponema pallidum, Plasmodium spp.). Furthermore many viruses, Orthomyxoviridae, Paramyxoviridae, Matonaviridae (i.e. Rubella) among others, can attach to molecules present on the surface of the erythrocytes, thus eventually leading them to agglutinate. CR1/CD35 and CR2/CD21 are complement receptors for C3b/C4b and C3d expressed by several cell populations within the PBMC pool. It is also possible that anti SARS-CoV-2 antibodies, appearing during an early seroconversion, may show some sort of cross-reactivity and recognition of self-epitopes. Indeed, it is worthwhile mentioning that the excessive cell death occurring during the infection may determine the release of unprecedented amount of DNA, histones and cell debris, which in turn could lead to a hyperproliferation of self-reactive lymphocytes and/or, as consequence of extensive cell damage, to the production of autoantibodies.
The abnormal amount of neutrophils collected in the density gradient may be misleading, because we are dealing with isolated PBMCs rather than with the whole blood. However, why neutrophils are present in such numbers in the gradient band remains an open question. They could have been trapped within the complex web of complement components, fibrinogen, fibrin and platelets. Neutrophils might also been kept floating while expelling DNA and proteins, which instead of baiting pathogens, might have been recognised by other immunological effectors e.g. lymphocytes. The excessive formation of NETs may act as parachute, thus keeping most of the neutrophils in suspension, allowing them to float in the density medium rather than sinking and sedimenting on top of the erythrocytes layer. Low density neutrophils are very efficient at generating neutrophil extracellular traps. Intravascular neutrophil aggregation and NET formation may play a pivotal role also in the occlusion of microvessels. The NETs can be responsible for the activation of the vascular endothelium, can promote the vascular damage of endothelial cells, foster platelet aggregation and trigger intravascular coagulation. These low density neutrophils belong to a subpopulation yet not well-defined.
Last, but not least, they might well be exhausted neutrophils after degranulation or, consistent with early stage responses to infection and inflammation where the bone marrow is stimulated to release blood-forming stem cells, immature (low density) neutrophils. The recruitment of immunological effectors at the active site of the infection i.e. the lungs, might have triggered an enhanced mobilisation of immature or naïve neutrophils to compensate the depletion of the circulating primed and activated forms. Incomplete granulopoiesis might confer to immature neutrophils a lower density allowing the flotation onto or within the separation medium.
Density is defined as mass per volume and the blood density is somehow proportional to haematocrit. Changes in the circulating cell populations in COVID-19 patients may affect the total protein concentration of blood, may determine a variation in the blood density and could inhibit an optimal mononuclear cell separation through the density gradient. Therefore the simplest explanation for the modification of the buoyant density of PBMCs is the result of the massive alteration in the cell composition of the circulating blood that occurs in COVID-19 patients.
Although the traits of the outliers were more pronounced and their features appeared, up to a certain extent, amplified, the population dynamics of the cells enriched after fractionation on density medium, traced in essence the distribution pattern of the circulating blood. The ratios, i.e. NLR and LMR, in the whole blood allow a better resolution and provide a better appreciation of the cell distribution. However, despite the fact that the patterns www.nature.com/scientificreports/ of the ratios above mentioned differed between the groups and appeared to be significant, further studies with a much larger number of samples are required to validate their potential use as predictive markers of the disease severity.
In conclusion, our results show that the infection of SARS-CoV-2 strongly affects the sedimentation of neutrophils on density gradients. The imbalanced recovery of "alleged" mononuclear cells in COVID-19 patients, containing a large amount of LDNs and a diminished number of lymphocytes, confirm what seen by us and others in the whole blood.
Patient population. The study was conducted after obtaining the Fondazione IRCCS Ca' Granda Ospedale
Maggiore Policlinico Ethics Committee approval (COVID-19_Network IRB #241_2020) and the patient informed consents for research studies and biobanking. All methods were performed in accordance with the relevant guidelines and regulations.
Two hundred eighty-one samples were collected and harvested at the Processing Facility and Biobank POLI-MI from April the 1st until April the 30th 2020, during the SARS-CoV-2 pandemic. Seventeen samples gave insufficient material or the patients had duplicate samples, and have been excluded from the data analysis. In that period we received samples from two hundred seventy patients ranging between 24 and 99 years of age and, as some progressed towards the recovery, for ten patients we had also follow-up samples. All the patients were diagnosed with COVID-19, had SARS-CoV-2 positive nasopharyngeal swab detected through real-time reverse transcription-polymerase chain reaction (real-time RT-PCR) and were hospitalised in the different COVID-19 Units of IRCCS Ca' Granda Ospedale Maggiore Policlinico. Blood samples were taken either within 24 h from the admission into the Unit and/or at day 14 after hospital discharge.
The healthy control samples were kindly provided by registered blood donors visiting our Blood Transfusion Centre for their regular blood donations (after written informed consent and within the approved protocol CoDS).
Demographics of the patients and of the blood donors are shown in Table 1.
Isolation of peripheral blood mononuclear cells.
Peripheral blood samples were drawn into EDTA vacutainer tubes. PBMCs were isolated by polysaccharide density gradient centrifugation (Lymphoprep™-Axis-Shield Alere Technologies AS, Oslo, Norway) as described previously 63 with minor modifications. Briefly, after centrifugation at 1560 g for 10 min at 20 °C, the plasma was collected and snap-frozen in LN 2 for further studies. The remaining buffy coats (i.e. leukocyte concentrates) from the same patient samples were pooled together and then diluted 1:2 (vol:vol) in 0.9% sodium chloride injection solution, USP (Baxter S.p.A., Rome, Italy) prior to be layered onto the density gradient medium. The samples were centrifuged at 900 g for 20 min without brake. PBMC isolation procedure was performed at 20 °C to avoid any possible variation in the density of the density gradient medium. The PBMC rings were collected, resuspended and washed twice with saline by centrifugation at 450 g and 350 g, respectively. Cell pellets were suspended again in freezing medium for storage.
Cell populations and flow cytometry analysis. Cell counts were performed by running the samples through an automated haematology cell analyser (Sysmex XN-1000™ DASIT S.p.A., Cornaredo, Italy), in body fluid or complete blood count (CBC; for the clinical samples) mode. Peripheral blood samples leftover were analysed within 24 h from the withdrawal, after lysing the erythrocytes with lysing solution (BD Pharm Lyse™, BD Biosciences). After washing, the samples were acquired with a BD FACSLyric flow cytometer equipped a 405 nm violet laser, a 488 nm blue laser and a 647 nm red laser. For each tube 50,000 events in the lymphocyte gate, forward scatter (FSC) versus side scatter (SSC), were acquired and the data were analysed using FACSSuite software (BD Biosciences). An automatic standard setup was applied for each acquisition. Internal quality assurance procedures included BD cytometer setup and tracking beads, according to the manufacturer's instructions.
May-Grünwald Giemsa staining. Whole blood smears and cell suspensions spotted onto glass slides were air dried under the biological safety cabinet class II and fixed in 100% methanol for 30 min prior to staining. The slides were stained with May-Grünwald working solution for 15 min, rinsed with water, immerged in Giemsa working solution for 20 min, rinse thoroughly with water and air dried. The glass slides were imaged Table 1. Patient and blood donor demographics. A total number of 90 early phase COVID-19 and 174 COVID-19 recovered patients were finally included in the analysis. *Ten of patients had follow-up samples i.e. at the admission and 14 days post-discharge. www.nature.com/scientificreports/ with an Olympus BX S3 microscope equipped with a Sony CCD 3 camera, objective magnification at 10×, 20×, 40×, 60× oil and 100× oil. All images were acquired with Sysmex LAFIA, version 2, blood image filing software.
Statistical analysis. All graphs display mean and standard deviation (SD). Statistical analysis was performed using the Welch's t-test, for unequal sample sizes, or Mann-Whitney U test and one-way ANOVA to calculate statistically significant differences and individual P values using GraphPad Prism version 5.00 (GraphPad Software, San Diego California, USA, www.graph pad.com). Statistically significant differences were expressed as *for P ≤ 0.05, **for P ≤ 0.01, ***for P ≤ 0.001 and ****for P ≤ 0.0001, not statistically significant differences are indicated as n.s. for P > 0.05.
Data availability
The datasets generated during the current study are available from the corresponding author on reasonable request. | 5,171.6 | 2020-09-10T00:00:00.000 | [
"Biology",
"Medicine"
] |
Effects of a Nonuniform Tip Clearance Profile on the Performance and Flow Field in a Centrifugal Compressor
This paper presents a numerical investigation of the effects of a nonuniform tip clearance profile on the performance and flow field in a centrifugal compressor with a vaneless diffuser. This study focuses in particular on the magnitude and location of the wake. Six impellers with different tip clearance profiles were tested in the flow simulations. The accuracy of the numerical simulations was assessed by comparing the experimental data with the computational results for a system characterized by the original tip clearance. Although the performance improved for low tip clearances, a low tip clearance at the trailing edge improved the compressor performance more significantly than a low tip clearance at the leading edge. The flow field calculated for a system characterized by a low tip clearance at the trailing edge produced a more uniform velocity distribution both in the circumferential and in the axial directions at the impeller exit because the wake magnitude was reduced. As a consequence, this impeller provided a better potential for diffusion processes inside a vaneless diffuser.
Introduction
Centrifugal compressors used in aeronautical and industrial applications are required to function at high pressure ratios and over wide operating ranges.A better understanding of the flow mechanism underlying the secondary flow fields inside compressors is essential for the design of centrifugal compressors with improved performance and extended operational ranges [1].The main flow in centrifugal compressors is sensitive to secondary flows generated by the channel curvature as well as the centrifugal and Coriolis forces.As a result, jet-wake flow patterns form at the impeller exit, and these patterns govern the flow field in the diffuser [2,3].In the jet-wake model, flow in the jet region near the pressure surface of the blade is nearly loss-free, whereas flow in the wake region near the suction surface of the blade generates large total pressure losses.The classical jet-wake model ignores the influence of the tip clearance flow and the spanwise flow variations.The model, therefore, does not adequately describe the outlet flow of an unshrouded impeller with tip clearance, and the model must be revised.In unshrouded impellers, the tip clearance flow significantly affects the performance and flow field because it causes pressure losses over the tip clearance and strengthens the secondary flows.
The influence of the tip clearance flow on the performance and flow structure in centrifugal compressors has been studied by many researchers [4][5][6][7][8][9][10][11][12][13][14].In previous studies, it was found that the swirling flows and vortex motions within unshrouded impellers are sensitive to the tip clearance flow.Consequently, these flows produce high losses and degrade performance if the tip clearance is large.The nonuniformity of the impeller outlet flow due to the tip clearance flow significantly affects the diffuser inlet flow conditions and causes large flow separations inside the diffuser.The overall compressor performance may be improved by enhancing the uniformity of the impeller outlet flow in the circumferential and axial directions.Most previous numerical studies of the tip clearance effects have assumed a uniform International Journal of Rotating Machinery tip clearance with a constant height.Few studies have considered a nonuniform tip clearance.
The main objective of this work is to numerically investigate the effects of various nonuniform tip clearance specifications on the performance and flow field in a centrifugal compressor, particularly focusing on the magnitude and location of the wake region.Numerical simulations were conducted for six centrifugal compressor impellers in which the tip clearance height varied linearly from the leading to the trailing edge.The numerical results were compared with experimental data to assess the accuracy of the numerical predictions.
Test Compressor Description
The compressor used in this study is known as the "Radiver" in the literature.Radiver test case measurements were conducted at the Institute of Jet Propulsion and Turbomachinery at RWTH Aachen, Germany.The investigations were funded in part by the Deutsche Forschungsgemeinschaft (DFG).The experimental data were made available to the public for broad use in computational fluid dynamics (CFD) validation studies.The compressor stage consists of an unshrouded impeller with 15 backswept blades and a vaneless diffuser.Under the design condition with a specific speed of 0.69, the maximum total pressure ratio and the maximum corrected mass flow rate through the impeller are 4.07 and 2.5 kg/s, respectively.The tip clearance height is 0.7 mm for the stationary impeller, and the vaneless diffuser has a constant
Numerical Procedure
Three-dimensional simulations were performed using the commercial CFD code, ANSYS CFX 12.0.Steady calculations were conducted to minimize the computational effort.As numerical scheme for these calculations, a second-order upwind scheme was used in space.The computational There was no significant difference (less than 0.2%) between the medium and large grid size regarding total pressure ratio and total efficiency.As a result of the grid-independence study, the medium grid size was selected to perform flow analysis.15 cells in the spanwise direction were used to study the flow field within the impeller tip clearance region.The total number of computational cells is approximately 450 000 in the impeller passage and 150 000 in the diffuser.The computational grid over the entire domain, including the detailed surface grid near the leading and trailing edges of the impeller blade, is shown in Figure 2.
The k-ω SST model was applied to obtain turbulent quantities, assuming that the flow in the compressor is fully turbulent.In most of the blade surfaces and end walls, y+ values are less than 2 as required for the use of k-ω SST model.The design rotational speed was 35 200 rpm.The total pressure, total temperature, and flow direction were specified at the impeller inflow boundary (P 01 = 101300 Pa and T 01 = 288.15K), and the mass flow rate was specified at the diffuser outflow boundary.Periodic boundary conditions were applied in the circumferential direction, and the walls were treated with the no-slip and adiabatic conditions.
Numerical Test Cases
The influences of a nonuniform tip clearance on the performance and flow field were investigated by conducting simulations of impellers with six tip clearance shapes.The tip clearance heights at the leading and trailing edges were selected, and the distribution of the tip clearance varied linearly along the tip.To account for reductions in the tip clearance due to centrifugal forces and temperature variations under hot-running conditions [17], the tip clearances at the leading and trailing edges were varied from 0.7 mm to 0.48 mm or 0.24 mm.The relative clearance ratios, defined as the ratio of the tip clearance height to the blade height at the impeller exit, are 6.4%, 4.4%, and 2.2% for 0.7 mm, 0.48 mm, and 0.24 mm, respectively.Table 2 shows the numerical test cases.The tip clearance at the trailing edge was varied in cases 2 and 3, whereas the tip clearance at the leading edge was varied in cases 4 and 5.The tip clearance both at the leading and trailing edges was varied in case 6.The computational conditions, grid size, and boundary conditions were constant for all test cases except for the tip clearance profile.
Validation for Uniform Tip Clearance
Prior to comparing the performance of the test cases, the numerical result for the original tip clearance (0.7-0.7 mm) was validated with the experimental data.Because no experimental data is available under 100% speed condition, all calculations were performed at 80% speed.Measurements were carried out with steady probes and time resolving laser-2-focus velocimeter at sections 2 M and 2M , respectively.Numerical results were validated by comparing the computed characteristic curves of the mass-averaged static and total pressures in section 2 M with the experimental data, as shown in Figure 3.The CFD results showed satisfactory agreement with the experimental results over the full range of operating conditions.The flow field at section 2M near the impeller exit is shown in Figure 4 at φ/φ d of 1.0.The overall flow structure predicted by the numerical simulations, including the jet-wake flow pattern, was qualitatively well captured, as observed by comparison with the experimental results.Some differences were present in the low meridional velocity region near the shroud.The extension of the low meridional velocity region along the axial direction away from the shroud was overpredicted by CFD due to a higherintensity tip clearance flow, which was caused by a uniform tip clearance.Improved results may be obtained by applying a nonuniform tip clearance height (0.7-0.48 mm) [13].conditions to choked points.The overall performances were compared.
Performance
The static and total pressure ratios at the impeller exit are shown in Figure 5.The static pressures over the entire operating range for all impellers with reduced tip clearances were higher than those in the original case, indicating that reductions in the tip clearance at the leading or trailing edge improved the static pressure rise.The impeller exit total pressure also increased for the impellers with reduced tip clearances.Case 6 had the highest static and total pressures among the test cases, because the tip clearance was reduced both at the leading and trailing edges.At φ/φ d of 1.0, cases 3 and 6 showed 2.5% and 3.9% improvements in the static pressure rise compared to case 1, respectively.The static and total pressure curves also indicated that a smaller tip clearance at the trailing edge was more effective than a smaller tip clearance at the leading edge.Large total pressure losses generally result from leakage flow through the tip gap [5].As the tip clearance height at the leading or trailing edge is reduced in size, reductions in the tip leakage flow decrease losses and improve pressure rises.In the original case, at φ/φ d of 0.8, the tip leakage flow rate was 5% lower than that at φ/φ d of 1.0, while the mass flow rate was 20% lower than that at φ/φ d of 1.0.Because the mass flow rate through the impeller was reduced much more than the mass flow rate through the tip, the influence of the tip clearance flow was not reduced at φ/φ d of 0.8.Therefore, the decreased tip clearance was still effective even at the lower mass flow rate.
The performance of a centrifugal compressor is significantly affected by tip clearance in two ways.First, tip leakage flow causes large pressure losses due to mixing with the main passage flow, as mentioned above.Second, the impeller cannot transfer its momentum to the fluid across the tip clearance, which decreases the total work input.Consequently, small changes in the tip clearance could have large influences on the compressor performance.The work input coefficient (μ = C θ2 /U 2 ) at the impeller exit is shown in Figure 6.Higher work input coefficients were associated with all impellers with tip clearances smaller than that of the original case.Additionally, decreasing the tip clearance at the trailing edge rather than at the leading edge effectively increased the work input.
The shapes of the total efficiency curves for all cases resembled the original case, with a maximum peak at φ/φ d of 1.0, as shown in Figure 6.The impellers with reduced tip clearances yielded better efficiencies because the mixing loss caused by the tip clearance flow was reduced.In particular, cases 3 and 6 showed 1.2% and 2.2% improvements in efficiency compared to case 1 at the design point, respectively.At low mass flow rates, the total efficiency difference between cases 1 and 3 decreased compared to the peak-efficiency point, probably because the mixing loss due to the tip clearance was less prominent.
The diffuser performance was investigated by measuring the static and total pressure ratios at the vaneless diffuser exit, as shown in Figure 7.The overall characteristics of the diffuser exit pressures were similar to those of the impeller exit pressures.It was evident that case 6 had higher static and total pressure ratios than any other cases.In addition, cases 2 and 3, which featured reduced tip clearances at the trailing edge, were clearly superior to cases 4 and 5, which featured reduced tip clearances at the leading edge, with regards to the static and total pressures at the diffuser exit.
Flow Field Analysis.
The flow field was analyzed to investigate the performance variations caused by the reduced tip clearance at the leading or trailing edges.The flow field comparison was confined to the operating condition at φ/φ d of 1.0, under which a higher efficiency was observed for cases 1, 3, 5. the trailing edge are shown in Figure 8. Relative to case 1, the flow velocity along the blade chord clearly decreased for cases 3 and 5.In case 1, the tip leakage flow velocity distribution had local maxima at 20% and 80% chord lengths.In case 3, the flow velocity increased from the leading edge to a maximum at 20% chord length, then it retained a nearly constant value until 80% chord length.This result indicated that case 3 experienced a significant reduction in the tip leakage flow in the rear region due to the smaller tip clearance at the trailing edge.On the other hand, case 5 displayed a slow increase in the tip leakage flow from the leading edge to 80% chord length, where the tip leakage flow velocity reached a local maximum value.In contrast with case 5, case 3 displayed a larger flow velocity across the gap in the front region but a smaller flow velocity in the rear region.The total flow rate integrated along the chord length in case 3 was lower than that in case 5.The flow rate across the tip in cases 3 and 5 were reduced to 69% and 73%, respectively, of the value in case 1.These flow velocity variations significantly affected the flow field and were closely related to the lower total pressure loss.
Blade Tip Loading.
The effects of the tip clearance distribution on the blade tip loading were investigated by comparing the blade pressure distributions near the shroud in the three cases.Figure 9 shows a plot of the blade tip loading values at 90% span from the hub.All three cases showed similar pressure distributions along the blade chord.However, at 30% chord length from the leading edge, case 5 showed the highest blade loading, suggesting that the generation of a tip leakage vortex was delayed because the mass flow rate across the tip clearance decreased significantly in the front part of the blade due to the small tip clearance at the leading edge.On the other hand, after 75% chord length, the static pressure increase on the pressure surface for case 3 was much larger than it was for the other cases, and it generated higher blade tip loading in the rear region.This result arose from the fact that case 3 had a small tip clearance at the trailing edge, which produced a weak tip leakage flow and decreased the mixing loss.
Loss Generation Process.
The loss distribution in the impeller was investigated by calculating the entropy function using (1), as suggested by Whitfield and Baines [18].The entropy function distributions from the leading edge to the trailing edge are shown in Figure 10, The slope of the entropy function was nearly constant for the impeller with a uniform tip clearance (case 1), whereas the entropy function for impellers with nonuniform tip clearances (cases 3 and 5) was non-linear.Variations in the entropy slope were related to the tip clearance loss changes caused by the nonuniform tip clearance.Stronger interactions between the tip leakage flow and the main passage flow occurs at the point at which the tip leakage flow is stronger.At such points, the flow interaction is strong, which increases the mixing loss and deteriorates the compressor performance.The entropy function of case 3 had a steep slope in the front region but a shallow slope in the rear region rather than that for case 5.As a consequence, the entropy function of case 3 was lower at the trailing edge than it was in case 5.Because the entropy function at the impeller exit represents the accumulated total loss through the flow passage, the low entropy in case 3 produced a higher impeller efficiency, as shown in Figure 6.This result indicated that the positive effects of loss reduction in the rear region of case 3 were more dominant than the loss gains in the front region.The tip clearance losses were quantitatively compared by calculating the total pressure loss coefficient induced by the tip clearance using the following equation [19]: The overall total pressure loss in case 3 was 3.4% lower than that of case 5 and 29% lower than that of case 1.Consequently, the tip clearance loss can be decreased more effectively by reducing the tip clearance at the trailing edge.
Impeller Outlet Flow.
The total efficiency distributions at the impeller outlet planes are shown in Figure 11.The efficiency distributions of all three cases showed similar high efficiency flow patterns near the pressure side and low efficiency flow patterns near the suction side of the shroud, which are typical properties of centrifugal compressors [20].The magnitude and extent of the low efficiency regions differed to some degree.The low efficiency region near the shroud suction side in case 1 was most extended both in the circumferential and in the axial directions compared with the other two cases because the tip leakage flow was the International Journal of Rotating Machinery strongest in case 1.The magnitude and extent of the low efficiency region in case 5 were significantly lower than in case 1, and further reductions in the low efficiency region were achieved in case 3. Variations in the high loss region contributed significantly to the reduced performance at the impeller exit.It should be noted that the tip clearance height at the trailing edge was a main factor in determining the high loss region near the shroud at the impeller exit.
A comparison of the circumferentially averaged radial velocity distributions from the hub to the shroud at the impeller exit is plotted in Figure 12.At the impeller exit, the flow near the shroud experiences secondary flows that block the flow passage.In addition, the tip clearance flow influences the secondary flow structure and further blocks the flow passage, thereby generating a large total pressure loss near the shroud.The blockage resulted in a rapid decrease in the radial velocity near the shroud in all three cases.The low radial velocity near the shroud indicated a separated zone in which a high loss fluid accumulated.A blocked zone with a negative radial velocity was present above 98% span from the hub in cases 1 and 5, suggesting that a strong reverse flow was present near the shroud because low momentum flow could not overcome the pressure gradient in the radial direction.However, in case 3, no reverse flow was observed near the shroud, suggesting that the blockage was smaller in case 3 than in the other two cases.For this reason, case 3 included the smallest high-loss region near the shroud, as shown in Figure 11.The increased radial velocities near the shroud in case 3 were explained by the reduced tip leakage flow in the rear region, as confirmed in Figure 8.The radial velocity distributions in cases 1 and 5 were almost identical, even though these systems included different tip clearances at the leading edge.This result suggested that the radial velocity distribution in the spanwise direction at the impeller exit depended strongly on the tip clearance at the trailing edge.
Wake Flow.
The wake formation at the impeller exit was investigated by comparing the relative velocity distributions in the three cases, as shown in Figure 13.All three cases showed a strongly nonuniform flow both in the circumferential and in the axial directions, and the flow structure was highly three-dimensional at the impeller exit.In addition to the axial flow variations due to the skewed shear layer, the flow was affected in the circumferential direction by the blade loading.The relative velocity distributions in cases 1 and 5 showed similar qualitative characteristics.Case 3 presented a different flow structure in the wake region, which included an area of low relative velocity.
The circumferential nonuniformity at the impeller exit was investigated by comparing the relative deviations from the mass-averaged relative velocity at 90% span for the three cases, as shown in Figure 14.The absolute magnitude of the deviations in case 3 was smaller than that in the other cases in the region from the pressure side to the mid-pitch, which led to an enhanced uniformity in the flow field in the circumferential direction.Another important feature is the location of the wake region, where the relative velocity deviation has the minimum value.The wake region appeared near the pressure side in cases 1 and 5, but it was closer to the center of the blade pitch in case 3.
To determine the source of movement of the wake region, the secondary vorticity distribution at the impeller exit was calculated using the following equation, and the results are shown in Figure 15: The sign of the secondary vorticity indicated the rotational direction of the secondary flow, where positive values corresponded to vortices rotating in the counter-clockwise direction and negative values corresponded to vortices rotating in the clockwise rotation.Two distinct vortices were observed in all three cases.The vortex with a positive secondary vorticity generated by the Coriolis force, called the passage vortex, moved low momentum fluid above the mid-span from the pressure to the suction side.On the other hand, the tip leakage vortex with a negative secondary vorticity moved low momentum fluid near the shroud from the suction to the pressure side.Interactions between these two vortices rotating in opposite directions affected the location of the wake region near the shroud.In cases 1 and 5, the tip leakage vortex was stronger than the passage vortex, so that the transport of low-energy fluid to the suction side was limited.This explained why the wake region was located near the pressure side in cases 1 and 5, as shown in Figure 14.In case 3, however, the intensity of the tip leakage vortex decreased due to the reduced tip clearance at the trailing edge.Therefore, the passage vortex became large and was not hampered by the tip leakage vortex, which further transported the low-energy fluid toward the suction side.In other words, the larger passage vortex played a key role in moving the wake region further toward the suction side for a system with a small tip clearance at the trailing edge.The relative deviations from the mass-averaged relative velocity from the hub to the shroud at the impeller exit are further compared in Figure 16.All cases showed similar nonuniform flow patterns in the hub-to-shroud direction.That is, high relative velocities were present near the hub, whereas low relative velocities were found near the shroud.A 7% reduction in the flow deviation at 95% span was achieved in case 3 compared to the other two cases.This observation indicated that case 3 had a more uniform flow pattern in the axial direction due to the smaller wake region.In addition, reductions in the relative velocity gradient from the hub to the shroud produced a smaller secondary flow, which increased the impeller efficiency for case 3.
Because the impeller outlet flow field significantly influences the diffuser inlet flow conditions, the flow inside the vaneless diffuser experiences severe distortions in the axial direction.The flow angle distribution from the hub to the shroud at the impeller exit for a compressor with a vaned diffuser must be considered in the design of advanced threedimensional diffusers [21].
Figure 17 shows variations in the flow angles from the hub to the shroud at a 110% impeller exit radius.Because the flow angle is measured from the circumferential direction, flows with small flow angles tend to be more circumferential.All three cases presented distorted flow patterns because the upstream impeller flow was strongly nonuniform in the axial direction.Differences in the flow angle between the hub and the shroud were at least 10 • .Near the shroud, the wake flow, which had a small flow angle, remained in the diffuser inlet downstream of the impeller because the wake mixing process was incomplete.The diffuser flow pattern in case 3 was more homogeneous than in the other two cases in terms of the flow angle distribution in the axial direction due to the improved International Journal of Rotating Machinery uniformity of the flow field at the impeller exit, as confirmed in Figure 16.Therefore, a smaller tip clearance at the trailing edge provided a better potential for diffusion processes inside a vaneless diffuser than did a smaller tip clearance at the leading edge.
Conclusions
The effects of a nonuniform tip clearance on the flow fields and performance of a centrifugal compressor were analyzed.Six impellers with different tip clearance distributions were modeled numerically, and the flow fields were investigated in detail to identify the factors that contributed to variations in the performance and efficiency.
(1) Although impellers with reduced tip clearances at the leading or trailing edges performed better than the impeller with a original uniform tip clearance, a smaller tip clearance at the trailing edge reduced the overall tip leakage mass flow rate more effectively than a smaller tip clearance at the leading edge.Accordingly, a smaller tip clearance at the trailing edge lowered the mixing loss caused by interactions between the tip leakage flow and the main passage flow.
(2) A nonuniform tip clearance influenced the location of the wake region by modulating the interaction between the passage and tip leakage vortices, which rotated in the opposite directions and transported low-momentum fluid near the shroud.A small tip clearance at the trailing edge reduced the tip leakage vortex and increased the passage vortex, which consequently increased the transport of the wake region toward the suction side.
(3) A reduced tip clearance at the trailing edge produced a more uniform flow at the impeller exit both in the circumferential and in the axial directions due to a reduced wake region.Because the diffuser inlet flow was significantly affected by the upstream impeller outlet flow, decreasing the tip clearance at the trailing edge significantly improved diffusion processes inside the vaneless diffuser.
Figure 2 :
Figure 2: Computational grid for the entire domain, with the leading and trailing edges presented in detail.
Figure 3 :Figure 4 :
Figure 3: Comparison of the static and total pressure characteristic curves at section 2 M.
Figure 12 :
Figure 12: Radial velocity distribution from hub to shroud at the impeller exit.
Figure 16 :
Figure 16: Relative deviation of relative velocity from hub to shroud at the impeller exit.
Figure 17 :
Figure 17: Flow angle distribution from hub to shroud at a 110% of the impeller exit radius.
Flow.Variations in the normalized flow velocity through the tip clearance from the leading edge to Figure 5: Impeller exit static and total pressure ratio.
Figure 6: Impeller work input coefficient and total-to-total efficiency. | 6,153 | 2012-08-07T00:00:00.000 | [
"Engineering",
"Physics"
] |
Multi-elliptic rogue wave clusters of the nonlinear Schrödinger equation on different backgrounds
In this work, we analyze the multi-elliptic rogue wave clusters of the nonlinear Schrödinger equation (NLSE) in order to understand more thoroughly the origin and appearance of optical rogue waves in this system. Such structures are obtained on uniform backgrounds by using the Darboux transformation scheme for finding analytical solutions of the NLSE under various conditions. In particular, we solve the eigenvalue problem of the Lax pair of order n in which the first m evolution shifts are equal, nonzero, and eigenvalue dependent, while the imaginary parts of all eigenvalues tend to one. We show that an Akhmediev breather of order n-2m\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$n-2m$$\end{document} appears at the origin of the (x, t) plane and can be considered as the central rogue wave of the so-formed cluster. We show that the high-intensity narrow peak, with the characteristic intensity distribution in its vicinity, is enclosed by m ellipses consisting of the first-order Akhmediev breathers. The number of maxima on each ellipse is determined by its index and the solution order. Since rogue waves in nature usually appear on a wavy background, we utilize the modified Darboux transformation scheme to build such solutions on a Jacobi elliptic dnoidal background. We analyze the vertical semi-axis of all ellipses in a cluster as a function of an absolute evolution shift. We show that the cluster radial symmetry in the (x, t) plane is broken when the shift value is increased above a threshold. We apply the same analysis on the Hirota equation, to examine the influence of a real parameter and Hirota’s operator on the cluster appearance. The same analysis can be applied to the infinite hierarchy of extended NLSEs. The main outcomes of this paper are the new multi-rogue wave solutions of the nonlinear Schrödinger equation and its extended family on uniform and elliptic backgrounds.
ious conditions. In particular, we solve the eigenvalue problem of the Lax pair of order n in which the first m evolution shifts are equal, nonzero, and eigenvalue dependent, while the imaginary parts of all eigenvalues tend to one. We show that an Akhmediev breather of order n − 2m appears at the origin of the (x, t) plane and can be considered as the central rogue wave of the so-formed cluster. We show that the high-intensity narrow peak, with the characteristic intensity distribution in its vicinity, is enclosed by m ellipses consisting of the first-order Akhmediev breathers. The number of maxima on each ellipse is determined by its index and the solution order. Since rogue waves in nature usually appear on a wavy background, we utilize the modified Darboux transformation scheme to build such solutions on a Jacobi elliptic dnoidal background. We analyze the vertical semi-axis of all ellipses in a cluster as a function of an absolute evolution shift. We show that the cluster radial symmetry in the (x, t) plane is broken when the shift value is increased above a threshold. We apply the same analysis on the Hirota equation, to examine the influence of a real parameter and Hirota's operator on the cluster appearance. The same analysis can be applied to the infinite hierarchy of extended NLSEs. The main outcomes of this paper are the new multi-rogue wave solutions of the nonlinear Schrödinger equation and its extended family on uniform and elliptic backgrounds.
Keywords Nonlinear Schrödinger equation · Rogue waves · Circular and triangular rogue wave clusters · Darboux transformation
Introduction
The cubic nonlinear Schrödinger equation (NLSE) [1][2][3] is one of the most studied partial differential equations in nonlinear sciences since it was first introduced in 1960s. The extensive research on NLSE solutions and their dynamical stability is still being conducted, due to its huge importance in various fields of physics, such as nonlinear optics [4][5][6][7][8], Bose−Einstein condensates [9,10], oceanography [11,12], and plasmas [13]. The NLSE is a general equation for the nonlinear wave propagation that can describe a variety of phenomena in nonlinear regimes of different physical systems. However, due to its simple form, it cannot be used for more accurate explanation of some higher-order effects in nonlinear optics. To this end, in recent times an extended family of nonlinear Schrödinger equations (ENLSEs) that may include an arbitrary number of higher-order dispersion terms with additional nonlinearities, has been proposed and investigated in [14,15]. The extension of NLSE to the hierarchy of higherorder equations originated from the need to explain the propagation of ultrashort pulses through optical fibers [16,17]. So far, attention was mostly focused on the Hirota [18,19] (with the third-order dispersion) and quintic equations (containing the dispersions up to the fifth-order) [20][21][22][23].
Although NLSE is a well-known equation, it remains a subject of broad interest for several reasons. First, both NLSE and its extended variants are completely integrable in one dimension. The possibility of deriving exact analytical NLSE solutions has motivated a number of experimental studies in many branches of physics where NLSE appears. Second, the same mathematical procedure used for deriving NLSE solutions can be applied to the ENLSE as well. Finally, the characteristics of NLSE solutions are similar to those of the more complicated equations emanating from the ENLSE. Therefore, one can analyze the simpler NLSE solutions and still predict the properties of the same solution class of the extended family.
The one-dimensional NLSE that will be mostly considered in this work has the form The transverse spatial variable is denoted by t, the retarded time in the moving frame by x, while the slowly varying envelope of the electric field corresponds to the wave function ψ ≡ ψ(x, t). This form of NLSE is appropriate for the propagation of light pulses in fibers. However, if pulses are very short, additional operators have to be introduced that add finer details to the basic NLSE solutions. Thus, in this work we will also deal with the Hirota equation, comprising of a new operator (with a third-order dispersion along taxis and additional nonlinearities) added to the basic NLSE, multiplied by a real parameter α: Both the NLSE and its extended variants exhibit similar classes of localized solutions, among which the most important seem to be Akhmediev breathers (ABs) [24,25] and different solitons [26]. An AB consists of a series of intensity maxima on a finite background that are localized in time and periodic in space. The term soliton in general describes a solitary wave packet that propagates in some direction in the (x, t) plane on a zero background, without any distortion in its shape. The technique that is often used to derive exact analytical solutions is the Darboux transformation (DT) [27]. It utilizes the Lax pair formalism and recursive relations to calculate higher-order solutions of the NLSE, starting from the trivial zeroth-order seed function which satisfies Eq. (1.1).
The importance of DT for this work is its ability to provide higher-order Akhmediev breathers on uniform [28] and periodic backgrounds [29,30]. The breather emerges as a high-intensity narrow peak with a complex intensity distribution at its base. Such structures can be considered as rogue waves (RWs), which "appear from nowhere and disappear without a trace." The RW is localized both in space and time and is defined by one dominant peak. The simplest example of a RW is the Peregrine soliton [31]. The notion of rogue waves is now widely spread around, in studies of deep ocean waves [12,32], nonlinear optics [33,34], superfluidity [35], Bose-Einstein condensates [36], and others. The current hot topic in the nonlinear science is to investigate the cause and nature of optical rogue waves [37].
In this paper, we add new results to the field of RWs by investigating the special multi-elliptic rogue wave clusters of the NLSE. These solutions are also periodic along t-axis and throughout the paper we consider intensity distributions within a single transverse period. They consist of a rogue wave peak (ABs of the second order or higher), surrounded by the first-order ABs positioned on a number of concentric ellipses centered on the peak (see Fig. 1). We obtain these structures on uniform and Jacobi elliptic dnoidal backgrounds, by using the DT of order n, having the first m evolution shifts equal, nonzero, and eigenvalue dependent. We show that the order of the central rogue wave and the number of ellipses are determined by the two mode numbers, n and m.
Various multi-rogue wave solutions have been previously analyzed as triplets [50], triangular cascades [51][52][53], and circular clusters [28,[54][55][56]. The classification of various hierarchies of multi-RW structures into different families was presented in [57,58]. Although our results are similar to those in [56], where the authors used the determinant representation of DT, we believe that the first important contribution of our study is the simple method for generating elliptical clusters by using the evolution shifts in the Darboux transformation scheme. The second novelty in our results is the analysis of semi-axes of ellipses as functions of the evolution shifts and an estimate when the radial symmetry of the cluster will break up. Our third contribution to the field is the generation of elliptic RW clusters on a dn background, which was not presented before, to the best of our knowledge. Finally, the significance of our work is also the determination of new solutions of the Hirota equation in the form of elliptic RW clusters. We also point to ways how to generalize this analysis to the infinite hierarchy of NLSEs.
We stress out that the solutions presented in this paper are in the form of two-dimensional (2D) or threedimensional (3D) color plots. The clusters on the uniform background are calculated by using the exact analytical DT procedure. However, the mathematical expressions of higher-order DT solutions are extremely lengthy and complicated, and would require many journal pages to be written down. We therefore omit the derivation of such expressions. In turn, we pick the convenient grid and calculate the numerical values at each point with infinite accuracy (that is, up to the machine precision limit).
To ensure the correctness of our calculations, our DT method has been extensively validated by: (1) comparing it to directly solved NLSE, verified by the conservation of energy [38], (2) showing that our DT algorithm exactly satisfied the Peak-height formula [30], and (3) always halving the grid-size when doing the DT iterations and confirming that the results are stable and unchanged.
The paper is organized as follows. In Sect. 2, we briefly discuss the main properties of higher-order Akhmediev breathers. In Sect. 3, we present various NLSE solutions in the form of multi-elliptic rogue wave clusters on a uniform background. In Sect. 4, we analyze the properties of such clusters, in particular the lengths of semi-axes of elliptical rings, by going up to the four ellipses in a cluster. In Sect. 5, we exhibit the NLSE cluster solutions on a Jacobi elliptic dnoidal background. In Sect. 6, we generalize our findings to the Hirota equation that includes the third-order dispersion and additional nonlinearities. In Sect. Conclusion, we summarize our results.
Higher-order Akhmediev breathers
Here, we briefly describe Akhmediev breathers of the NLSE and how to use DT scheme to generate higherorder RW solutions. The first-order AB is a singleperiodic function along the t-axis [24,25]: that rides on a finite background and is localized along the x-axis. The period L and the angular frequency ω of an AB of any order are determined by a single parameter a, with 0 < a < 0.5 [38]: AB turns into the Peregrine RW at a = 0.5 and becomes the Kuznetsov-Ma soliton when a > 0.5. An arbitrary AB can be derived using DT, starting from the seed solution ψ 0 = e i x . The n-th-order AB (its wave function ψ n (x, t)) turns out to be a nonlinear superposition of n first-order ABs, each characterized by the complex eigenvalue λ j = r j +iν j and the evolution x j , and spatial shifts t j ( j = 1, . . . , n). The existence of such abundance of relevant parameters offers an incredible variety of possible RW solutions. The Lax pair procedure and the recursive relations in the DT scheme that are used to calculate ψ n (x, t) from ψ 0 are described in details in [28]. Here, we briefly mention that computational complexity for calculating the n-th-order DT solution exhibits a quadratic growth (≈ O(n 2 )). Therefore, by increasing n, the number of iterations and the complexity of analytical expressions rise significantly. For this reason, we do not write down these expressions explicitly here, but only represent them graphically.
It is important to note that the imaginary part ν of an AB is simply related to the parameter a: ν = √ 2a. By taking into account relation (2.3), one can see that the imaginary part of AB's eigenvalue is completely determined by its angular frequency: ν = 1 − ω 2 /4.
Multi-elliptic rogue wave clusters on uniform background
To generate multi-elliptic RW clusters, we require that the frequencies of constituent single-order breathers are all different, but close to zero. This goal can be achieved by defining them as harmonics of ω 1 = ω → 0, so that ω j = jω, where j ≥ 2 [28]. In this work, we take the simplest possibility that all real parts are zero: r j = 0. The n ABs are thus formed by using their imaginary parts calculated from the corresponding frequencies: It is easy to see that all ν j tend to 1. Having set the eigenfrequencies, it remains to choose the evolution and spatial shifts. Different choices lead to very different solutions. We introduce a slight modification with respect to [28]: The first m evolution shifts x j are set to be equal, nonzero, and eigenvalue dependent. We assume them to be given via an expansion for j ≤ m, and x j = 0 for j > m. In addition, we simply set all t j shifts to be zero. We also assume that all X jl = 0 except one particular value that is explicitly stated in the text. Although seemingly an oversimplification, this choice of parameters nonetheless leads to an interesting family of new RW clusters. And, as mentioned, all this is provided for by an incredible richness in the choice of four sets of parameters. It turns out that such an n-th-order Darboux solution with m nonzero shifts x j is characterized by a single Akhmediev breather of order n − 2m placed at the origin (0, 0) (a central rogue wave, labeled as RW n−2m ) and m ellipses (rings) around the RW. The outer ellipse contains 2n − 1 ABs of order 1 (AB1), and each following ring toward the center has four AB1s less, as analyzed in [56,57]. We term this Darboux solution as the multi-RW cluster.
This remarkable intensity pattern can be described as follows. If all x j shifts are zero, the nonlinear superposition of all n DT components will arise at the origin of (x, t) plane, forming an Akhmediev breather of order n. If only one shift is applied, say x 1 = 0, the central AB of lower order and intensity will remain, but it will partially break up and a ring structure of 2n − 1 rational solutions centered at (0, 0) will be displayed [28]. The minimal n value for this picture is n = 3. As mentioned above, if one chooses m > 1 components shifts to be nonzero, they will split and decrease the intensity of the central structure even more and produce more rings. The exact splitting mechanism could be understood if one applies the mathematical analysis of exact analytical DT expression. However, deriving and analyzing such complex expressions for big n is a very tedious job that offers little insight and was not performed anywhere before; hence, it will not be performed here either.
In Fig. 1, we present the multi-rogue wave cluster on uniform background having 2 ellipses. Hence, we set m = 2 and vary the value of n in order to change the order of the central RW. The main frequency ω is set to 10 −1 so that the imaginary part ν is getting close to one, according to Eq. (3.1). Also, we choose X j4 = 10 6 , to set evolution shift x j in the order of 1 (Eq. 3.2). In Fig. 1a, we set n = 6 and obtain the second-order RW at the center. The outer and inner ellipses consist of c 1 = 11 and c 2 = 7 AB1s, respectively. In Fig. 1b, we set n = 7 to get a third-order RW. The number of AB1s on two (x, t) plane. The shifts are calculated for X j4 = 10 6 . The orders of Darboux transformation and the Akhmediev breather representing the high-intensity central peak are: a n = 6 with the second-order rogue wave, b n = 7 with the third-order rogue wave, c n = 8 with the fourthorder rogue wave, and d n = 9 with the fifth-order rogue wave ellipses is c 1 = 13 and c 2 = 9. One can further increase n to get higher-order RWs that are rarely or never seen before. For n = 8, the RW4 is obtained with c 1 = 15 and c 2 = 11 (Fig. 1c). For n = 9, the RW5 is formed with c 1 = 17 and c 2 = 13 (Fig. 1d). It is seen that higher the order of the central RW, the narrower and stronger the RW peak at (0, 0). The highest intensities in Fig. 1a-d are, respectively: 22.98, 44.45, 77.26, and 105.81. We have also computed solutions with other frequencies, for instance ω = 0.05. The appearance of this RW cluster was very similar to the 10 −1 case (not shown), so we proceeded with the 0.1 value.
We last show the results for m = 4. The analysis is analogous to the previous two cases. When n = 10, we get RW2 and 4 rings surrounding the central peak. The number of AB1 on four ellipses, from outer to the inner, is c 1 = 19, c 2 = 15, c 3 = 11, and c 4 = 7, respectively (Fig. 3a). Next, we take n = 11 and obtain RW3 with c 1 = 21, c 2 = 17, c 3 = 13, and c 4 = 9 (Fig. 3b).
In general, under the conditions for DT computation presented in this section, our conjecture is that the RW of n − 2m order is obtained at (0, 0) with m ellipses around the peak for n ≥ 2m + 2. If we index the rings from 1 to m, going from the outer to the inner one, then the number of AB1 on each ring is c i = 2n − 4i + 3.
The semi-axes of ellipses in clusters
In paper [28], dealing with a single circular rogue wave cluster, the authors proposed a formula for the radius of the ring depending on Darboux shifts along the xand t-axes. Having ellipses at hand, we present how the length of the vertical semi-axis depends on an absolute origin (0, 0) of the (x, t) plane. The orders of Darboux transformation and the Akhmediev breather representing the high-intensity central peak are: a n = 8 with the second-order rogue wave, b n = 9 with the third-order rogue wave, and c n = 10 with the fourth-order rogue wave Fig. 3 2D color plots of rogue wave clusters on the uniform background having four ellipses (m = 4) around n − 2m order rogue wave, formed at the origin (0, 0) of the (x, t) plane. Shifts are obtained for X j4 = 10 6 . The orders of Darboux transforma-tion and the Akhmediev breather representing the high-intensity central peak are: a n = 10 with the second-order rogue wave, and b n = 11 with the third-order rogue wave evolution shift for all ellipses, up to four rings (m = 4) in the cluster. In Fig. 4, we show the n = 10 and m = 4 case and indicate the AB1s on an i-th ellipse with numbers i = 1 to i = 4 (from the inner-most toward the outer-most ring). It turns out that all rings, for any m, have AB1s at t = 0 with alternating positions along this vertical line: the inner-most AB1 is positioned above the central RW. The next AB1 with index 2 is below the maximum at (0,0), the AB1 marked 3 is in the upper half of the (x, t) plane, and so on. We therefore define the length of the vertical semi-axis R xi as the distance between the central RW at the origin and AB1 indexed with i. Since the higher-order DT solution is expressed by a very complicated and cumber-some analytical expression, which is difficult to write and analyze, we applied the numerical calculation of AB1 positions along the t = 0 line.
In Fig. 5a, we show the R x dependence on x shift = x 1 = ... = x m for two rings surrounding RW2 at the center (n = 6, m = 2) at two main frequencies: ω = 0.1 and ω = 0.05. We see that the position of AB1 at the first ring is increasing as the evolution shift becomes larger, in contrast to AB1's x-coordinate on the outer ring, which first increases but then saturates and finally starts to decline slowly. Therefore, one can differ two regions in the (x, t) plane: the first one (I) is roughly estimated as the half-plane before the interception of R x1 and R x2 curves. In this region, the cluster has a Fig. 4 2D color plots of the quadruple-elliptic rogue wave cluster, obtained for n = 10 and m = 4, with numbers 1, 2, 3, and 4 indicating the single-order Akhmediev breathers located at t = 0 on each ellipse. The distance along x-axis between the RW center at (0, 0) and the maximum of the breather labeled with i corresponds to the vertical semi-axis R xi of the i-th ellipse regular "concentric ellipses" -like shape. For ω = 0.1, the region I is determined by x shift < 11.7. When ω = 0.05, the region I is given by x shift < 23. The example of a RW cluster in the second region (II) is shown in Fig. 5b. It is clearly seen that the two rings are deformed and thus no longer elliptical in the shape. In Fig. 5c and 5d, the R x1 and R x2 dependence is shown for the case of RW2 and RW3 at the center, respectively, only in the region I, where the radial symmetry is preserved.
In Fig. 6a, we plot the graph of R x as a function of x shift in the case of three rings around a RW2 cluster (n = 8 and m = 3). We see that the vertical semi-axis of the first and third ellipses is an increasing function of the evolution shift, in contrast to R x2 . In Fig. 6b, the RW2 with four rings is analyzed (n = 10 and m = 4). Graphs in both figures are computed in region I. Our conclusion is that the x position of AB1 (with t = 0) on odd-labeled ellipses (1, 3) grows constantly with the increasing shift, while R x of even indexed rings (2, 4) first increases, then saturates and finally slowly declines until the symmetry is broken.
Multi-elliptic rogue wave clusters on Jacobi elliptic dnoidal background
In this section, we demonstrate that multi-elliptic RW clusters can be obtained on a periodic background defined by Jacobi elliptic dnoidal function dn, by using the modified DT scheme for the NLSE [29]. The seed where g is the elliptic modulus and m dn = g 2 is the elliptic modulus squared. The choice of eigenvalues and shifts is the same as described in previous sections, but the procedure for calculating ψ(x, t) of order n is different. As explained in [29], the exact analytical values of wave function can be obtained only when t = 0. In order to compute ψ(x, t = 0) over the entire (x, t) grid, the numerical calculation is required. In this work, we use the fourth-order Runge-Kutta algorithm. We manage to obtain a two-ring cluster around RW2 (n = 6 and m = 2) on an elliptic background (m dn = 0.4 2 ) using this numerical procedure. The result is shown in Fig. 7a. We also present the 2-elliptic cluster around RW3 (n = 7, m = 2, m dn = 0.4 2 ) in Fig. 7b. By a careful look at both 3D plots, one can observe the low-amplitude background waves on which the AB1 structures and high-intensity AB2/AB3 peaks are generated.
Multi-elliptic rogue wave clusters for extended NLSE family
Finally, we generalize our results to Hirota equation, which is a first member of the extended family of nonlinear Schrödinger equations [14,15]. It is important to note that the DT technique retains the same recursive relations for the Lax pair and higher-order ψ functions as before. We therefore can generate solitons and The R x1 and R x2 as functions of x shift for n = 9 in the region of undeformed elliptic cluster breathers of any order using the sets of eigenvalues and transverse/evolution shifts as explained above. The intensity distribution of such solutions will differ from the simple cubic NLSE, due to free parameters in the extended families and a bunch of additional dispersion and nonlinear terms, but the procedure for their analytical buildup remains the same. In other words, one can utilize the same algorithm and take identical sets of shifts to compute multi-elliptic RW clusters for any equation from the extended NLSE hierarchy. The only difference in the DT scheme between NLSE and Hirota equation is in analytical expressions for the Lax pair functions r and s, but all recursive relations remain the same, as stated above. The Hirota DT scheme is presented in detail in [19]. Here, we generate the multi-elliptic RW cluster on a uniform background with ψ 0 = e i x as the seed. Our main goal is to investi- are calculated for X j4 = 10 6 . The free parameter is: a α = −0.07 and b α = 0.07 gate the cluster appearance when Hirota operator (the term in Eq. 1.2 multiplied by α) sets in. For this purpose, we take α = ±0.07 and build two clusters having RW2 at the (x, t) center, surrounded by two rings (n = 6 and m = 2). When α is negative, the entire cluster is tilted toward the positive direction of t-axis. In addition, the radial symmetry of the central RW2 is broken, since the local maxima in the vicinity of RW2 are more pronounced in the tilted direction.
The intensity distribution for α = −0.07 is shown in Fig. 8a. If one changes the sign of α then the skew angle (between the vertical axis of the cluster and x-axis) will just change the sign. The results for α = 0.07 are shown in Fig. 8b. The measured skew angle for α = ±0.07 is θ ≈ ∓18.6 • . In addition, both ellipses are stretched for nonzero α, since the distances between AB1s on inner and outer rings (marked in Fig. 4 with 1 and 2) and the central RW2 are bigger than in the NLSE case. For the cubic NLSE (α = 0: Fig. 1a) the semi-axis lengths are R x1 = 10.1 and R x2 = 22. In the Hirota case, shown in Fig. 8, the semi-axis lengths are R 1 = 10.76 and R 2 = 23.7. Here we claim that larger the α, the bigger the skew angle and the amount of cluster stretching (results not shown).
The analysis can be further applied on even higherorder equations of the extended NLSE hierarchy using the same procedure-with similar results. The subject of ongoing research is the influence of higher-order dispersions, additional nonlinearities, and real parameters on the overall shape of multi-elliptic RW clusters for this highly nonlinear systems. The limitation of this study is the complication for producing the rogue wave clusters dynamically. Even using the single period box and appropriate initial conditions from DT, the modulation instability sets in during numerical integration, decreasing the AB1 intensities on the rings and introducing additional peaks. The challenge for obtaining RW clusters numerically for NLSE and ENLSE remains the topic for the next research. In addition, the verification and analysis of the conservation laws of such numerical solutions using different methods, such as structure-preserving method [59][60][61][62][63][64] could be considered.
Conclusion
In this paper, we have presented the multi-elliptic rogue wave clusters of the nonlinear Schrödinger equation on uniform and nonuniform (Jacobi elliptic dnoidal) backgrounds. We showed that the Darboux transformation scheme of an arbitrary order n (up to n = 11) with m equal and nonzero evolution shifts, can produce Akhmediev breathers of n − 2m order positioned at the origin of the (x, t) plane. We showed that this high-intensity narrow peak, a central rogue wave, is encircled by m concentric elliptical rings of the firstorder Akhmediev breathers. The number of AB1s on each ring depends on the ring index and the solution order.
In order to better understand the cluster geometry, we have numerically investigated the lengths of semiaxes of all rings around the central RW. We provided the graphs where distances from RW at (0, 0) to AB1 on the vertical t = 0 line at each ellipse were plotted as functions of the absolute evolution shift. Our results suggest that the radial symmetry is destroyed for large evolution shifts.
We next used the modified Darboux transformation scheme to numerically build RW clusters on a periodic background. Although the intensity of higherorder Akhmediev breather at the center significantly surpasses the amplitude of elliptic waves, we were able to observe the weak background oscillations on which the cluster is constructed.
We concluded our analysis by applying the Darboux transformation scheme on the Hirota equation. We exhibited that the Hirota operator introduces the titling and stretching of the entire cluster in a direction determined by the sign of the single real Hirota parameter.
We believe that further research of multi-rogue wave clusters on the cubic NLSEs is warranted in the future, owing to many degrees of freedom offered by the Darboux transformation scheme (the choice of eigenvalues and the spatial and temporal shifts). Research possibilities grow even more if one considers the cluster solutions for the infinite hierarchy of extended nonlinear Schrödinger equations. The analysis and results presented in this paper could be used to understand better the origin and generation of rogue waves in physical systems governed by cubic and extended nonlinear Schrödinger equations. Data availability All data generated or analyzed during this study are included in the published article. | 7,278.6 | 2021-12-01T00:00:00.000 | [
"Physics"
] |